WO2022133632A1 - Systems and methods for identity risk assessment - Google Patents

Systems and methods for identity risk assessment Download PDF

Info

Publication number
WO2022133632A1
Authority
WO
WIPO (PCT)
Prior art keywords
verification
target
user account
feature
malicious
Prior art date
Application number
PCT/CN2020/137874
Other languages
French (fr)
Inventor
Zhendong Li
Yunhan YU
Fengyi Liu
Original Assignee
Beijing Didi Infinity Technology And Development Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Didi Infinity Technology And Development Co., Ltd.
Priority to PCT/CN2020/137874
Publication of WO2022133632A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/10Network architectures or network communication protocols for network security for controlling access to devices or network resources
    • H04L63/102Entity profiles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00Network architectures or network communication protocols for network security
    • H04L63/14Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416Event detection, e.g. attack signature detection

Definitions

  • the present disclosure generally relates to identity risk assessment, and in particular, relates to systems and methods for detecting malicious users based on their fill-and-submit behaviors.
  • an on-line service platform requires a user to provide his/her identity (ID) number to be verified for safety and anti-fraud purposes. After the user provides his/her ID number, identity verification may be performed based on the provided ID number.
  • a method for identity risk assessment may be provided.
  • the method may include receiving a target identity (ID) verification event of a first user account from a user terminal.
  • the target ID verification event may include a target ID number.
  • the method may also include obtaining a first feature associated with the first user account and a second feature associated with the target ID number based on the target ID verification event.
  • the first feature associated with the first user account may include a historical ID verification behavior feature of the first user account.
  • the second feature associated with the target ID number may include a historical ID verification behavior feature of one or more second user accounts using the target ID number.
  • the method may also include determining whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature. In response to determining that the target ID verification event is a malicious ID verification event, the method may further include blocking at least one of the first user account and the target ID number.
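Read together, the four bullets above describe a receive, featurize, assess, block loop. The following minimal Python sketch illustrates that flow only; it is not the disclosed implementation, and every name in it (the `history_db` store, the `risk_model` object, the event keys) is a hypothetical placeholder.

```python
# Minimal sketch of the receive -> featurize -> assess -> block flow.
# All interfaces here (history_db, risk_model, event keys) are hypothetical.

def handle_verification_event(event, history_db, risk_model):
    """Assess one target ID verification event and block on a malicious verdict."""
    account = event["user_account"]      # the first user account
    id_number = event["id_number"]       # the target ID number

    # First feature: historical ID verification behavior of the first user account.
    first_feature = history_db.account_behavior(account)
    # Second feature: behavior of second user accounts that used the same ID number.
    second_feature = history_db.id_number_behavior(id_number)

    if risk_model.is_malicious(first_feature, second_feature):
        # Block at least one of the first user account and the target ID number.
        history_db.block(account, id_number)
        return "blocked"
    return "proceed_to_id_verification"
```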
  • the target ID verification event may include submission of an ID verification request for verifying the first user account using the target ID number.
  • the target ID verification event may include information indicating that the first user account is likely to submit an ID verification request for verifying the first user account using the target ID number.
  • the first feature associated with the first user account may include at least one of a count of different ID numbers the first user account has submitted for ID verification, a count of different names the first user account has submitted for ID verification, a count of different dates of birth the first user account has submitted for ID verification, a count of times of successful ID verification of the first user account, a count of times of failed ID verification of the first user account, or reasons of the failed ID verification of the first user account.
  • the first feature associated with the first user account may further include at least one of a real-time interaction feature of the target ID verification event, a geography location of the user terminal when the target ID verification event is received, historical geography locations where the first user account was signed up, an Internet protocol (IP) address of the user terminal, historical IP addresses where the first user account was signed up, or historical suspicious activity the first user account performed.
  • the second feature associated with the target ID number may include at least one of a count of the one or more second user accounts that have submitted the target ID number for ID verification, a count of different second user accounts of the one or more second user accounts that have successful ID verification using the target ID number, a count of different second user accounts of the one or more second user accounts that have failed ID verification using the target ID number, a count of different names submitted, along with the target ID number, by the one or more second user accounts for ID verification, a count of times that the target ID number was submitted by the one or more second user accounts, a count of times of successful verification of the target ID number, a count of times of failed verification of the target ID number, or reasons of the failed ID verification of the ID number.
  • the second feature associated with the target ID number may further include at least one of credit information of the target ID number, or criminal information of the target ID number.
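As a concrete, hypothetical illustration, the two feature groups enumerated above can be summarized from stored verification attempts as simple counts. The record layout below is an assumption for the sketch, not the patent's schema.

```python
# Hypothetical sketch: summarizing stored verification attempts into the two
# feature groups enumerated above. Each attempt record is assumed to look like
# {"account": ..., "id_number": ..., "name": ..., "dob": ...,
#  "success": bool, "fail_reason": ...}.

def account_features(attempts):
    """First feature: historical ID verification behavior of one user account."""
    return {
        "acct_distinct_id_numbers": len({a["id_number"] for a in attempts}),
        "acct_distinct_names": len({a["name"] for a in attempts}),
        "acct_distinct_dobs": len({a["dob"] for a in attempts}),
        "acct_success_count": sum(a["success"] for a in attempts),
        "acct_failure_count": sum(not a["success"] for a in attempts),
    }

def id_number_features(attempts):
    """Second feature: behavior of all second user accounts using the target ID number."""
    return {
        "id_distinct_accounts": len({a["account"] for a in attempts}),
        "id_distinct_names": len({a["name"] for a in attempts}),
        "id_submission_count": len(attempts),
        "id_success_count": sum(a["success"] for a in attempts),
        "id_failure_count": sum(not a["success"] for a in attempts),
    }
```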
  • the risk assessment model may be provided based on positive samples and negative samples.
  • the positive samples and the negative samples may include first positive samples and first negative samples that correspond to the first feature.
  • the positive samples and the negative samples may also include second positive samples and second negative samples that correspond to the second feature.
  • the risk assessment model may include a risk assessment policy.
  • whether the ID verification event is a malicious ID verification event may be determined by comparing the first feature and the second feature with the risk assessment policy.
  • the risk assessment policy may include one or more assessment features corresponding to the first feature and the second feature, and one or more thresholds corresponding to the one or more assessment features.
  • the one or more assessment features and the one or more thresholds may be provided by generating a trained machine learning model based on the positive samples and the negative samples.
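One plausible reading of such a policy is a set of (assessment feature, threshold) pairs. The sketch below, reusing the hypothetical feature names from the earlier sketch and entirely illustrative threshold values, flags an event when any assessment feature exceeds its threshold.

```python
# Hypothetical risk assessment policy: assessment features mapped to thresholds.
# Threshold values are illustrative only, not taken from the patent.
POLICY = {
    "acct_distinct_id_numbers": 3,   # first-feature threshold
    "acct_distinct_names": 3,        # first-feature threshold
    "id_distinct_accounts": 5,       # second-feature threshold
    "id_submission_count": 10,       # second-feature threshold
}

def violates_policy(first_feature, second_feature, policy=POLICY):
    """Treat the event as malicious if any assessment feature exceeds its threshold."""
    combined = {**first_feature, **second_feature}
    return any(combined.get(name, 0) > limit for name, limit in policy.items())
```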
  • the risk assessment model may include a trained machine learning model.
  • whether the ID verification event is a malicious ID verification event may be determined by using the trained machine learning model to analyze the first feature and the second feature.
  • the trained machine learning model may include a decision structure for a malicious ID verification pattern trained based on the positive samples and the negative samples. The decision structure may specify that the target ID verification event, if matching the malicious ID verification pattern, is indicative of a malicious ID verification event.
  • a system for identity risk assessment may be provided.
  • the system may include one or more network interfaces and logic circuits coupled to the one or more network interfaces.
  • the one or more network interfaces may be configured to communicate with user terminals registered with an online transportation service platform.
  • during operation of the logic circuits, the system may verify a target identity (ID) number logging on a customer application executing on a first user terminal by performing a verification process including the following operations.
  • the system may receive a target identity (ID) verification event of a first user account from the first user terminal via the one or more network interfaces.
  • the target ID verification event may include the target ID number.
  • FIG. 5 is a flowchart illustrating an exemplary process for identity risk assessment according to some embodiments of the present disclosure.
  • FIG. 6A is a flowchart illustrating an exemplary process for obtaining positive samples according to some embodiments of the present disclosure.
  • FIG. 6B is a flowchart illustrating an exemplary process for obtaining negative samples according to some embodiments of the present disclosure.
  • the flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowcharts need not be implemented in order. Conversely, the operations may be implemented in inverted order, or simultaneously. Moreover, one or more other operations may be added to the flowcharts, and one or more operations may be removed from the flowcharts.
  • the systems and methods in the present disclosure may be applied to any application scenario in which identity risk assessment is required.
  • the system or method of the present disclosure may be applied to different online service platforms.
  • the system or method of the present disclosure may be applied to different transportation systems including land, ocean, aerospace, or the like, or any combination thereof.
  • the vehicle of the transportation systems may include a taxi, a private car, a hitch, a bus, a train, a bullet train, a high speed rail, a subway, a vessel, an aircraft, a spaceship, a hot-air balloon, a driverless vehicle, a bicycle, a tricycle, a motorcycle, or the like, or any combination thereof.
  • the system or method of the present disclosure may be applied to taxi hailing, chauffeur services, delivery service, carpool, bus service, take-out service, driver hiring, vehicle hiring, bicycle sharing service, train service, subway service, shuttle services, location service, or the like.
  • the system or method of the present disclosure may be applied to shopping service, learning service, fitness service, financial service, social service, or the like.
  • the application scenarios of the system or method of the present disclosure may include a web page, a plug-in of a browser, a client terminal, a custom system, an internal analysis system, an artificial intelligence robot, or the like, or any combination thereof.
  • An aspect of the present disclosure relates to systems and methods for identity risk assessment.
  • the systems may receive a target identity (ID) verification event of a first user account from a user terminal.
  • the target ID verification event may relate to an ID verification request using a target ID number.
  • the systems may obtain online behavior features of the first user account and a target ID number based on the target ID verification event.
  • the systems may further determine whether the target ID verification event is a malicious ID verification event based on the online behavior features.
  • the systems may block the target ID number and/or the first user account.
  • the systems may perform ID verification on the target ID number.
  • the systems and methods for identity risk assessment may perform identity risk assessment before the ID verification, such that the identity risk assessment is performed as early as possible.
  • the systems and methods for identity risk assessment may perform the identity risk assessment based on the online behavior features of the first user account and the target ID number, instead of official identity information that is with information delay, thereby making the identity risk assessment more accurate.
  • FIG. 1 is a schematic diagram of an exemplary identity risk assessment system according to some embodiments.
  • the identity risk assessment system 100 may include a server 110, a network 120, a user terminal 140, a storage device 150, and a positioning system 160.
  • the server 110 may be a single server or a server group.
  • the server group may be centralized, or distributed (e.g., server 110 may be a distributed system) .
  • the server 110 may be local or remote.
  • the server 110 may access information and/or data stored in the user terminal 140, and/or the storage device 150 via the network 120.
  • the server 110 may be directly connected to the user terminal 140, and/or the storage device 150 to access stored information and/or data.
  • the server 110 may be implemented on a cloud platform.
  • the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
  • the server 110 may be implemented on an online transportation service platform and be configured to allocate transportation orders of one or more service requester terminals to one or more service provider terminals.
  • when a target identity (ID) number logs on a customer application executing on a user terminal to interact with the online transportation service platform, the server 110 may perform an ID verification process.
  • the server 110 may be implemented on a computing device 200 having one or more components illustrated in FIG. 2 in the present disclosure.
  • the server 110 may include a processing engine 112.
  • the processing engine 112 may process information and/or data to perform one or more functions described in the present disclosure. For example, the processing engine 112 may receive a target identity (ID) verification event of a first user account from a user terminal. The processing engine 112 may obtain a first feature associated with the first user account and a second feature associated with the target ID number based on the target ID verification event. The processing engine 112 may also determine whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature. In response to determining that the target ID verification event is a malicious ID verification event, the processing engine 112 may further block at least one of the first user account and the target ID number.
  • the processing engine 112 may include one or more processing engines (e.g., single-core processing engine (s) or multi-core processor (s) ) .
  • the processing engine 112 may include one or more hardware processors, such as a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field-programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
  • the network 120 may facilitate the exchange of information and/or data.
  • one or more components in the identity risk assessment system 100 (e.g., the server 110, the user terminal 140, the storage device 150, and the positioning system 160) may exchange information and/or data via the network 120.
  • the processing engine 112 may receive a target identity (ID) verification event of a first user account from the user terminal 140 via the network 120.
  • the network 120 may be any type of wired or wireless network, or a combination thereof.
  • the wearable device may include a bracelet, footgear, glasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination thereof.
  • the mobile equipment may include a mobile phone, a personal digital assistance (PDA) , a gaming device, a navigation device, a point of sale (POS) device, a laptop, a desktop, or the like, or any combination thereof.
  • the virtual reality device and/or the augmented reality device may include a virtual reality helmet, a virtual reality glass, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof.
  • the virtual reality device and/or the augmented reality device may include a Google Glass™, a RiftCon™, a Fragments™, a Gear VR™, etc.
  • the user terminal 140 may be a device with positioning technology for locating the position of the user terminal 140.
  • the user terminal 140 may send positioning information to the server 110.
  • the user terminal 140 may include one or more service requester terminals registered with an online transportation service platform and one or more service provider terminals registered with the online transportation service platform. Each of the one or more service requester terminals may be used by a service requester to send a transportation order via the online transportation service platform. Each of the one or more service provider terminals may be used by a service provider to provide an online transportation service.
  • the storage device 150 may store data and/or instructions.
  • the storage device 150 may store data obtained from the user terminal 140 and/or the processing engine 112.
  • the storage device 150 may store a first feature associated with the first user account and a second feature associated with the target ID number obtained from the user terminal 140.
  • the first feature associated with the first user account may include a historical ID verification behavior feature of the first user account.
  • the second feature associated with the target ID number may include a historical ID verification behavior feature of one or more second user accounts using the target ID number.
  • the storage device 150 may store data and/or instructions that the server 110 may execute or use to perform exemplary methods described in the present disclosure.
  • the storage device 150 may store instructions that the processing engine 112 may execute or use to determine whether the target ID verification event is a malicious ID verification event.
  • the storage device 150 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
  • Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc.
  • Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
  • Exemplary volatile read-and-write memory may include a random access memory (RAM) .
  • Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc.
  • Exemplary ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically-erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
  • the storage device 150 may be implemented on a cloud platform.
  • the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
  • the positioning system 160 may determine information associated with an object, for example, the user terminal 140. For example, the positioning system 160 may determine a location of the user terminal 140 in real time.
  • the positioning system 160 may be a global positioning system (GPS) , a global navigation satellite system (GLONASS) , a compass navigation system (COMPASS) , a BeiDou navigation satellite system, a Galileo positioning system, a quasi-zenith satellite system (QZSS) , etc.
  • the information may include a location, an elevation, a velocity, or an acceleration of the object, an accumulative mileage number, or a current time.
  • the location may be in the form of coordinates, such as, latitude coordinate and longitude coordinate, etc.
  • the positioning system 160 may include one or more satellites, for example, a satellite 160-1, a satellite 160-2, and a satellite 160-3.
  • the satellites 160-1 through 160-3 may determine the information mentioned above independently or jointly.
  • the satellite positioning system 160 may send the information mentioned above to the network 120, or the user terminal 140 via wireless connections.
  • FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of a computing device on which the processing engine 112 may be implemented according to some embodiments of the present disclosure.
  • the computing device 200 may include a processor 210, a storage 220, an input/output (I/O) 230, and a communication port 240.
  • the processor 210 may execute computer instructions (e.g., program code) and perform functions of the processing engine 112 in accordance with techniques described herein. For example, the processor 210 may verify a target identity (ID) number logging on an application (also referred to as a customer application) executing on a first user terminal (e.g., the user terminal 140) by performing a verification process.
  • the processor 210 may include interface circuits 210-a and processing circuits 210-b therein.
  • the interface circuits may be configured to receive electronic signals from a bus (not shown in FIG. 2) , wherein the electronic signals encode structured data and/or instructions for the processing circuits to process.
  • the processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus.
  • the computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions described herein.
  • the processor 210 may process a first feature associated with the first user account and a second feature associated with the target ID number obtained from the user terminal 140, the storage device 150, and/or any other component of the identity risk assessment system 100.
  • the processor 210 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combinations thereof.
  • the processor of the computing device 200 may also include multiple processors; thus, operations and/or method steps that are described in the present disclosure as being performed by one processor may also be jointly or separately performed by the multiple processors.
  • for example, if the processor of the computing device 200 executes both step A and step B, it should be understood that step A and step B may also be performed by two or more different processors jointly or separately in the computing device 200 (e.g., a first processor executes step A and a second processor executes step B, or the first and second processors jointly execute steps A and B).
  • the storage 220 may store data/information obtained from the user terminal 140, the storage device 150, and/or any other component of the identity risk assessment system 100.
  • the storage 220 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM) , or the like, or any combination thereof.
  • the mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc.
  • the removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc.
  • the volatile read-and-write memory may include a random access memory (RAM) .
  • the RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc.
  • the ROM may include a mask ROM (MROM) , a programmable ROM (PROM) , an erasable programmable ROM (EPROM) , an electrically erasable programmable ROM (EEPROM) , a compact disk ROM (CD-ROM) , and a digital versatile disk ROM, etc.
  • the storage 220 may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure.
  • the storage 220 may store a program for the processing engine 112 for determining whether the target ID verification event is a malicious ID verification event.
  • the I/O 230 may input and/or output signals, data, information, etc.
  • the I/O 230 may enable a user interaction with the processing engine 112.
  • a user of the identity risk assessment system 100 may input a predetermined parameter through the I/O 230.
  • the I/O 230 may include an input device and an output device. Examples of the input device may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Examples of the output device may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof.
  • Examples of the display device may include a liquid crystal display (LCD) , a light-emitting diode (LED) -based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT) , a touch screen, or the like, or a combination thereof.
  • the communication port 240 may be connected to a network (e.g., the network 120) to facilitate data communications.
  • the communication port 240 may establish connections between the processing engine 112 and the user terminal 140, the positioning system 160, or the storage device 150.
  • the communication port 240 may be configured to communicate with the user terminal 140, such as user terminals registered with an online transportation service platform.
  • the connection may be a wired connection, a wireless connection, any other communication connection that can enable data transmission and/or reception, and/or any combination of these connections.
  • the wired connection may include, for example, an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof.
  • the wireless connection may include, for example, a Bluetooth™ link, a Wi-Fi™ link, a WiMax™ link, a WLAN link, a ZigBee link, a mobile network link (e.g., 3G, 4G, 5G, etc.), or the like, or a combination thereof.
  • the communication port 240 may be and/or include a standardized communication port, such as RS232, RS485, etc.
  • the one or more assessment features and the one or more thresholds may be provided using a classification algorithm to analyze positive samples and negative samples. Details regarding the positive samples and the negative samples may be found elsewhere in the present disclosure (e.g., the description in connection with FIG. 6A and FIG. 6B) .
  • the classification algorithm may be any existing suitable classification algorithm, for example, a dynamic programming algorithm or a greedy algorithm, which is not limited in the present disclosure.
  • the one or more assessment features and the one or more thresholds may be provided by generating a first trained machine learning model based on the positive samples and the negative samples.
  • the first trained machine learning model may be any existing suitable machine learning model, for example, a Random Forest model, which is not limited in the present disclosure.
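Since a Random Forest is named as one suitable model, a minimal sketch using scikit-learn (a library choice assumed here; the patent does not specify one) might train on annotated samples as follows. The feature rows and labels are toy data for illustration.

```python
# Sketch with scikit-learn (an assumed library choice). Toy rows are
# [acct_distinct_id_numbers, acct_distinct_names,
#  id_distinct_accounts, id_submission_count].
import numpy as np
from sklearn.ensemble import RandomForestClassifier

X = np.array([
    [8, 6, 12, 40],   # positive (malicious) sample
    [5, 4,  9, 25],   # positive (malicious) sample
    [1, 1,  1,  1],   # negative (safe) sample
    [1, 1,  2,  3],   # negative (safe) sample
])
y = np.array([1, 1, 0, 0])   # 1 = malicious, 0 = safe

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict([[6, 5, 10, 30]]))   # e.g. array([1]) -> malicious
```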
  • the risk assessment policy may be determined based on an evaluation function (e.g., an ROI function) .
  • a plurality of candidate policies may be generated based on a portion of the positive samples and the negative samples. Each of the plurality of candidate policies may be tested based on the other portion of the positive samples and the negative samples to determine an accuracy rate, as the evaluation function value, of identifying malicious ID verification events using the candidate policy.
  • the processing engine 112 may determine the candidate policy with the highest evaluation function value as the risk assessment policy configured to determine whether the target ID verification event is a malicious ID verification event.
  • the operation of determining the risk assessment policy based on an evaluation function may be integrated into the classification algorithm or the first trained machine learning model.
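A sketch of that selection step, assuming plain accuracy as the evaluation function and reusing the hypothetical `violates_policy` helper from the earlier policy sketch:

```python
# Hypothetical selection of the risk assessment policy by evaluation function
# value. `samples` is a list of (first_feature, second_feature, is_malicious).

def accuracy(policy, samples):
    """Accuracy rate of a candidate policy on the held-out portion of the samples."""
    hits = sum(
        violates_policy(f1, f2, policy) == is_malicious
        for f1, f2, is_malicious in samples
    )
    return hits / len(samples)

def select_policy(candidate_policies, held_out_samples):
    """Keep the candidate policy with the highest evaluation function value."""
    return max(candidate_policies, key=lambda p: accuracy(p, held_out_samples))
```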
  • the malicious ID pattern may specify that the target ID number, if matching the malicious ID pattern, is indicative of a malicious ID number.
  • the malicious user account pattern may specify that the first user account, if matching the malicious user account pattern, is indicative of a malicious user account.
  • the second trained machine learning model may include a decision structure for a safe ID verification pattern trained based on the positive samples and the negative samples. The decision structure may specify that the target ID verification event, if matching the safe ID verification pattern, is indicative of a safe ID verification event.
  • the safe ID verification pattern may include a safe ID pattern and/or a safe user account pattern. The safe ID pattern may specify that the target ID number, if matching the safe ID pattern, is indicative of a safe ID number.
  • the safe user account pattern may specify that the first user account, if matching the safe user account pattern, is indicative of a safe user account.
  • the malicious ID verification event pattern may include that the count of times that the target ID number was submitted by the one or more second user accounts is more than a first threshold, that the count of the one or more second user accounts that have submitted the target ID number for ID verification is more than a second threshold, that the count of different ID numbers of failed ID verifications (with a same failed reason) submitted by the first user account is more than a third threshold, that the count of different names of failed ID verifications (with a same failed reason) submitted by the first user account is more than a fourth threshold, that the count of different names the first user account has submitted for ID verification is more than a fifth threshold, that the count of different ID numbers the first user account has submitted for ID verification is more than a sixth threshold, that the count of different combinations of ID number and name the first user account has submitted for ID verification is more than a seventh threshold, or the like, or any combination thereof.
  • the malicious ID verification event pattern may include that a login or sign-up location (e.g., a geographic location and/or an IP address) of a user account is changed too frequently (e.g., a user account may be logged in at 9:00 a.m. from China, and then logged in at 9:30 a.m. from London, which may indicate that malware, distributed across multiple nations, has taken over the user account).
  • the malicious ID verification event pattern may include that real-time interaction with a computing device mimics non-human input behavior (e.g., keystrokes per minute above a human capability threshold) .
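The two behavioral patterns just described (an implausibly fast change of login location, and input faster than a human could type) could be checked roughly as below. The speed and keystroke thresholds are assumptions for illustration only.

```python
# Illustrative checks for the two patterns above; all thresholds are assumptions.
from math import asin, cos, radians, sin, sqrt

MAX_PLAUSIBLE_SPEED_KMH = 1000.0      # faster than airline travel -> suspicious
MAX_HUMAN_KEYSTROKES_PER_MIN = 1200   # above a plausible human typing rate

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two login locations, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def impossible_travel(login_a, login_b):
    """Flag two logins whose implied travel speed exceeds a plausible maximum.

    Each login is a dict {"time": seconds, "lat": ..., "lon": ...} (assumed layout).
    """
    hours = abs(login_b["time"] - login_a["time"]) / 3600.0
    km = haversine_km(login_a["lat"], login_a["lon"], login_b["lat"], login_b["lon"])
    return hours > 0 and km / hours > MAX_PLAUSIBLE_SPEED_KMH

def non_human_typing(keystrokes, minutes):
    """Flag real-time interaction faster than a human capability threshold."""
    return minutes > 0 and keystrokes / minutes > MAX_HUMAN_KEYSTROKES_PER_MIN
```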
  • the risk assessment model may be provided based on the positive samples and the negative samples.
  • the positive samples and the negative samples may include first positive samples and first negative samples that both correspond to the first feature.
  • the positive samples and the negative samples may also include second positive samples and second negative samples that both correspond to the second feature.
  • the positive samples and the negative samples may include training samples and test samples.
  • the training samples (e.g., each input of the training samples having a known output) may be used to train the preliminary model in a training stage.
  • the preliminary model may learn how to provide an output for new input data by generalizing the information it learns in the training stage from the training data.
  • test samples may be processed by the learned preliminary model to validate the results of learning.
  • the second trained machine learning model may be obtained.
  • the second trained machine learning model may be trained according to a supervised learning algorithm.
  • the processing engine 112 may obtain training samples and a preliminary model.
  • Each training sample may include a first sample feature of a sample user account of a sample ID verification event, a second sample feature of a sample ID number of the sample ID verification event, and a classification of the sample ID verification event (e.g., whether the sample ID verification event is a malicious ID verification event or a safe ID verification event) .
  • the classification of the sample ID verification event of each training sample may be used as a ground truth identification result.
  • the preliminary model to be trained may include one or more model parameters, such as the number (or count) of layers, the number (or count) of nodes, a loss function, or the like, or any combination thereof. Before training, the preliminary model may have one or more initial parameter values of the model parameter (s) .
  • the training of the preliminary model may include one or more iterations to iteratively update the model parameters of the preliminary model based on the training sample (s) until a termination condition is satisfied in a certain iteration.
  • exemplary termination conditions may be that the value of a loss function obtained in the certain iteration is less than a threshold value, that a certain count of iterations has been performed, that the loss function converges such that the difference of the values of the loss function obtained in a previous iteration and the current iteration is within a threshold value, etc.
  • the loss function may be used to measure a discrepancy between a predicted identification result generated by the preliminary model in an iteration and the ground truth identification result.
  • the processing engine 112 may further update the preliminary model to be used in a next iteration according to, for example, a backpropagation algorithm. If the termination condition is satisfied in the current iteration, the processing engine 112 may designate the preliminary model in the current iteration as the risk assessment model.
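The iterative training just described (update the parameters each iteration until the loss drops below a threshold, a maximum iteration count is reached, or the loss converges) is sketched below with a plain logistic regression standing in for the preliminary model. The model choice and hyperparameters are assumptions; the patent does not prescribe them.

```python
# Minimal NumPy sketch of the iterative training described above: update model
# parameters by gradient descent until a termination condition is satisfied.
import numpy as np

def train(X, y, lr=0.1, max_iters=10_000, loss_tol=1e-3, converge_tol=1e-8):
    """Logistic regression as a stand-in preliminary model (an assumption)."""
    w = np.zeros(X.shape[1])              # initial parameter values
    b = 0.0
    prev_loss = np.inf
    for _ in range(max_iters):            # termination: a certain count of iterations
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))     # predicted identification results
        # Cross-entropy loss: discrepancy between predictions and ground truth.
        loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        if loss < loss_tol:               # termination: loss below a threshold
            break
        if abs(prev_loss - loss) < converge_tol:   # termination: loss converged
            break
        grad_w = X.T @ (p - y) / len(y)   # gradient of the loss w.r.t. the weights
        grad_b = np.mean(p - y)
        w -= lr * grad_w                  # parameter update (the backpropagation step)
        b -= lr * grad_b
        prev_loss = loss
    return w, b
```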
  • the third-party device may generate the trained risk assessment model in advance and store the trained risk assessment model locally or in the storage medium (e.g., the storage device 150, the storage 220 of the processing engine 112) of the identity risk assessment system 100.
  • the processing engine 112 may obtain the trained risk assessment model from the storage medium of the identity risk assessment system 100 or the third-party device.
  • the third-party device may generate the trained risk assessment model online and transmit the trained risk assessment model to the processing engine 112.
  • the process 500 may proceed to operation 508.
  • the process 500 may proceed to operation 510.
  • the processing engine 112 may block the first user account and/or the target ID number. For example, the processing engine 112 may prevent the first user account from continuing to submit an ID verification request, prevent any other user account from using the target ID number to submit an ID verification request, forbid the first user account to access the identity risk assessment system 100, allow the first user account to online browse the contents of the identity risk assessment system 100 but forbid the first user account to use the service of the identity risk assessment system 100 (e.g., sending service requests) , demote an evaluation of the first user account in the identity risk assessment system 100, etc.
  • the processing engine 112 may determine whether the name, date of birth, and/or profile information match the target ID number. For example, the processing engine 112 may identify profile information corresponding to the target ID number in the third-party device, and compare the identified profile information with the profile information in the target ID verification event. In some embodiments, in response to determining that the target ID number is valid, and/or the name, date of birth, and/or profile information match the target ID number, the processing engine 112 may determine that the ID verification is successful. In response to determining that the target ID number is not valid, and/or the name, date of birth, and/or profile information do not match the target ID number, the processing engine 112 may determine that the ID verification has failed.
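A hypothetical sketch of that comparison step, where `lookup_official_record` stands in for the query to the third-party device and is not a real API:

```python
# Hypothetical sketch of the third-party comparison; `lookup_official_record`
# is an assumed callable that returns the official record dict or None.

def verify_identity(event, lookup_official_record):
    """Return True if the submitted fields match the official record."""
    record = lookup_official_record(event["id_number"])
    if record is None:                    # the target ID number is not valid
        return False
    return (
        event.get("name") == record.get("name")
        and event.get("date_of_birth") == record.get("date_of_birth")
    )
```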
  • the processing engine 112 may receive a service request inputted via an application logged on with the verified target ID number from a user terminal.
  • the processing engine 112 may further provide data to the application executing on the user terminal to generate a presentation on a display of the user terminal.
  • the presentation may include information associated with at least one service provider that satisfies the service request.
  • the processing engine 112 may receive a taxi-hailing request inputted via a taxi-hailing application logged on with a verified target ID number from a user terminal.
  • the processing engine 112 may provide data to the taxi-hailing application executing on the user terminal to generate information associated with at least one driver that satisfies the taxi-hailing request and display the information on the user terminal.
  • the identity risk assessment system 100 may receive a vast number of target ID verification events in real-time. By applying the risk assessment model online, the identity risk assessment system 100 can identify malicious ID verification events from the vast number of target ID verification events in real-time and automatically. For a single target ID verification event, through the identity risk assessment system 100, a vast number of online behavior features of historical ID verification events can be quickly collected and analyzed (e.g., summarized) to obtain, in real-time and automatically, a first feature associated with the first user account and a second feature associated with the target ID number.
  • the carpooling service also allows a service request initiated by a service requester (e.g., a passenger) to be distributed in real-time and automatically to a vast number of individual service providers (e.g., taxi drivers) a distance away from the service requester and allows a plurality of service providers to respond to the service request simultaneously and in real-time. Therefore, through the Internet, the online to offline service systems may provide a much more efficient transaction platform for the service requesters and the service providers who may never have met in a traditional pre-Internet transportation service system.
  • the process 600-1 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 600-1 are illustrated in FIG. 6A and described below is not intended to be limiting. In some embodiments, one or more operations of the process 600-1 may be performed to achieve at least part of operation 506 as described in connection with FIG. 5.
  • the processing engine 112 may identify historical malicious ID verification events.
  • a historical ID verification event may refer to an ID verification event that was previously received by the processing engine 112 from a sample user account and includes a sample ID number.
  • a historical malicious ID verification event may refer to a historical ID verification event of which the sample user account and/or the sample ID number was determined as malicious.
  • the processing engine 112 may retrieve the historical malicious ID verification events from a storage device (e.g., the storage device 150, the storage 220, the storage 390, or an external source) .
  • the processing engine 112 may construct a sample malicious ID verification pattern corresponding to the first feature and the second feature.
  • the sample malicious ID verification pattern may include online behavior features (e.g., corresponding to the first feature and the second feature) of the sample user account and/or the sample ID number of the historical malicious ID verification event.
  • the processing engine 112 may annotate the sample malicious ID verification patterns as malicious.
  • process 600-1 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
  • process 600-1 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed above.
  • the processing engine 112 may identify historical safe ID verification events.
  • a historical safe ID verification event may refer to a historical ID verification event of which the sample user account and/or the sample ID number was determined as safe.
  • the processing engine 112 may retrieve the historical safe ID verification events from a storage device (e.g., the storage device 150, the storage 220, the storage 390, or an external source) .
  • the processing engine 112 may construct a sample safe ID verification pattern corresponding to the first feature and the second feature.
  • the sample safe ID verification pattern may include online behavior features (e.g., corresponding to the first feature and the second feature) of the sample user account and/or the sample ID number of the historical safe ID verification event.
  • the processing engine 112 may obtain the negative samples by including the annotated sample safe ID verification patterns.
  • process 600-2 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
  • process 600-2 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed above.
  • process 600 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure.
  • process 600 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed above.
  • aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, etc.), or in a combination of software and hardware that may all generally be referred to herein as a "unit," "module," or "system." Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
  • a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof.
  • a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
  • Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN) , or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider) or in a cloud computing environment or offered as a service such as a Software as a Service (SaaS) .

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A method for identity risk assessment may include receiving a target identity (ID) verification event of a first user account from a user terminal, the target ID verification event including a target ID number. The method may also include obtaining a first feature associated with the first user account and a second feature associated with the target ID number based on the target ID verification event. The method may also include determining whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature. In response to determining that the target ID verification event is a malicious ID verification event, the method may further include blocking at least one of the first user account and the target ID number.

Description

SYSTEMS AND METHODS FOR IDENTITY RISK ASSESSMENT

TECHNICAL FIELD
The present disclosure generally relates to identity risk assessment, and in particular, relates to systems and methods for detecting malicious users based on their fill-and-submit behaviors.
BACKGROUND
Usually, an on-line service platform requires a user to provide his/her identity (ID) number to be verified for safety and anti-fraud purposes. After the user provides his/her ID number, identity verification may be performed based on the provided ID number.
SUMMARY
According to an aspect of the present disclosure, a method for identity risk assessment may be provided. The method may include receiving a target identity (ID) verification event of a first user account from a user terminal. The target ID verification event may include a target ID number. The method may also include obtaining a first feature associated with the first user account and a second feature associated with the target ID number based on the target ID verification event. The first feature associated with the first user account may include a historical ID verification behavior feature of the first user account. The second feature associated with the target ID number may include a historical ID verification behavior feature of one or more second user accounts using the target ID number. The method may also include determining whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature. In response to determining that the target ID verification event is a malicious ID verification event, the method may further include blocking at least one of the first user account and the target ID number.
In some embodiments, the target ID verification event may include submission of an ID verification request for verifying the first user account using the target ID number.
In some embodiments, the target ID verification event may include information indicating that the first user account is likely to submit an ID verification request for verifying the first user account using the target ID number.
In some embodiments, the first feature associated with the first user account may include at least one of a count of different ID numbers the first user account has submitted for ID verification, a count of different names the first user account has submitted for ID verification, a count of different dates of birth the first user account has submitted for ID verification, a count of times of successful ID verification of the first user account, a count of times of failed ID verification of the first user account, or reasons of the failed ID verification of the first user account.
In some embodiments, the first feature associated with the first user account may further include at least one of a real-time interaction feature of the target ID verification event, a geography location of the user terminal when the target ID verification event is received, historical geography locations where the first user account was signed up, an Internet protocol (IP) address of the user terminal, historical IP addresses where the first user account was signed up, or historical suspicious activity the first user account performed.
In some embodiments, the second feature associated with the target ID number may include at least one of a count of the one or more second user accounts that have submitted the target ID number for ID verification, a count of different second user accounts of the one or more second user accounts that have successful ID verification using the target ID number, a count of different second user accounts of the one or more second user accounts that have failed ID verification using the target ID number, a count of different names submitted, along with the target ID number, by the one or more second user accounts for ID verification, a count of times that the target ID number was submitted by the one or more second user accounts, a count of times of successful verification of the target ID number, a count of times of failed verification of the target ID number, or reasons of the failed ID verification of the ID number.
In some embodiments, the second feature associated with the target ID number may further include at least one of credit information of the target ID number, or criminal information of the target ID number.
In some embodiments, the risk assessment model may be provided based on positive samples and negative samples. The positive samples and the negative samples may include first positive samples and first negative samples that correspond to the first feature. The positive samples and the negative samples may also include second positive samples and second negative samples that correspond to the second feature.
In some embodiments, the positive samples and the negative samples may be provided by performing one or more of the following operations. Historical malicious ID verification events and historical safe ID verification events may be identified. For each of the historical malicious ID verification events, a sample malicious ID verification pattern corresponding to the first feature and the second feature may be constructed. For each of the historical safe ID verification events, a sample safe ID verification pattern corresponding to the first feature and the second feature may be constructed. The sample malicious ID verification patterns may be annotated as malicious. The sample safe ID verification patterns may be annotated as safe. The positive samples may be obtained by including the annotated sample malicious ID verification patterns. The negative samples may be obtained by including the annotated sample safe ID verification patterns.
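A hypothetical sketch of this sample-construction procedure, reusing the feature helpers sketched in the Definitions section and an assumed `history` event store (neither is part of the disclosure):

```python
# Hypothetical sketch of constructing and annotating training samples from
# historical events. `account_features`/`id_number_features` are the feature
# helpers sketched earlier; `history` is an assumed event store.

def build_samples(history):
    positives, negatives = [], []
    for event in history.malicious_events():
        pattern = (account_features(history.attempts_by_account(event["account"])),
                   id_number_features(history.attempts_by_id(event["id_number"])))
        positives.append((pattern, "malicious"))   # annotate as malicious
    for event in history.safe_events():
        pattern = (account_features(history.attempts_by_account(event["account"])),
                   id_number_features(history.attempts_by_id(event["id_number"])))
        negatives.append((pattern, "safe"))        # annotate as safe
    return positives, negatives
```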
In some embodiments, the risk assessment model may include a risk assessment policy. Whether the ID verification event is a malicious ID verification event may be determined by comparing the first feature and the second feature with the risk assessment policy.
In some embodiments, the risk assessment policy may include one or more assessment features corresponding to the first feature and the second feature, and one or more thresholds corresponding to the one or more assessment features.
In some embodiments, the one or more assessment features and the one or more thresholds may be provided using a classification algorithm to analyze the positive samples and the negative samples.
In some embodiments, the one or more assessment features and the one or more thresholds may be provided by generating a trained machine learning model based on the positive samples and the negative samples.
In some embodiments, the risk assessment model may include a trained machine learning model. Whether the ID verification event is a malicious ID verification event may be determined by using the trained machine learning model to analyze the first feature and the second feature.
In some embodiments, the trained machine learning model may include a decision structure for a malicious ID verification pattern trained based on the positive samples and the negative samples. The decision structure may specify that the target ID verification event, if matching the malicious ID verification pattern, is indicative of a malicious ID verification event.
In some embodiments, a determination that the target ID verification event is a malicious ID verification event may include a determination that the first user account is a malicious user account or a determination that the target ID number is a malicious ID.
In some embodiments, in response to determining that the target ID verification event is a malicious ID verification event, the method may further include transmitting a notification indicating that the target ID verification event is determined as a malicious ID verification event to the user terminal.
In some embodiments, in response to determining that the target ID verification event is a safe ID verification event, the method may further include performing ID verification on the target ID number by comparing the target ID number with information from a third-party device.
According to another aspect of the present disclosure, a system for identity risk assessment may be provided. The system may include one or more service requester terminals registered with an online transportation service platform, one or more service provider terminals registered with the online transportation service platform, and a server implementing the online transportation service platform. Each of the one or more service requester terminals may be used by a service requester to send a transportation order via the online transportation service platform. Each of the one or more service provider terminals may be used by a service provider to provide an online transportation service. The server may be configured to allocate the transportation orders of the one or more service requester terminals to the one or more service provider terminals. When a target identity (ID) number logs on a customer application executing on a user terminal to interact with the online transportation service platform, the server may be configured to cause the system to perform a verification process including the following operations. The system may receive a target ID verification event of a first user account from the user terminal. The target ID verification event may include a target ID number. The system may also obtain a first feature associated with the first user account and a second feature associated with the target ID number based on the target ID verification event. The first feature associated with the first user account may include a historical ID verification behavior feature of the first user account, and the second feature associated with the target ID number may include a historical ID verification behavior feature of one or more second user accounts using the target ID number. The system may also determine whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature. In response to determining that the target ID verification event is a malicious ID verification event, the system may further block at least one of the first user account and the target ID number.
According to yet another aspect of the present disclosure, a system for identity risk assessment may be provided. The system may include one or more network interfaces and logic circuits coupled to the one or more network interfaces. The one or more network interfaces may be configured to communicate with user terminals registered with an online transportation service platform. During operation of the logic circuits, the system may verify a target identity (ID) number logging on a customer application executing on a first user terminal by performing a verification process including the following operations. The system may receive a target ID verification event of a first user account from the first user terminal via the one or more network interfaces. The target ID verification event may include the target ID number. The system may obtain a first feature associated with the first user account and a second feature associated with the target ID number from the online transportation service platform and based on the target ID verification event. The first feature associated with the first user account may include a historical ID verification behavior feature of the first user account, and the second feature associated with the target ID number may include a historical ID verification behavior feature of one or more second user accounts using the target ID number. The system may also determine whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature. In response to determining that the target ID verification event is a malicious ID verification event, the system may further block at least one of the first user account and the target ID number. In response to determining that the target ID verification event is not a malicious ID verification event, the system may further verify the target ID number. The system may further receive a service request inputted via the customer application logged on with the verified target ID number from the first user terminal. The system may further provide data to the customer application executing on the first user terminal to generate a presentation on a display of the first user terminal. The presentation may include information associated with at least one service provider that satisfies the service request.
Additional features will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following and the accompanying drawings or may be learned by production or operation of the examples. The features of the present disclosure may be realized and attained by practice or use of various aspects of the methodologies, instrumentalities, and combinations set forth in the detailed examples discussed below.
BRIEF DESCRIPTION OF THE DRAWINGS
The present disclosure is further described in terms of exemplary embodiments. These exemplary embodiments are described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar structures throughout the several views of the drawings, and wherein:
FIG. 1 is a schematic diagram illustrating an exemplary identity risk assessment system according to some embodiments of the present disclosure;
FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary computing device according to some embodiments of the present disclosure;
FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of an exemplary mobile device according to some embodiments of the present disclosure;
FIG. 4 is a block diagram illustrating an exemplary processing engine according to some embodiments of the present disclosure;
FIG. 5 is a flowchart illustrating an exemplary process for identity risk  assessment according to some embodiments of the present disclosure;
FIG. 6A is a flowchart illustrating an exemplary process for obtaining positive samples according to some embodiments of the present disclosure; and
FIG. 6B is a flowchart illustrating an exemplary process for obtaining negative samples according to some embodiments of the present disclosure.
DETAILED DESCRIPTION
The following description is presented to enable any person skilled in the art to make and use the present disclosure, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the present disclosure is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. As used herein, the singular forms “a,” “an,” and “the” may be intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprise,” “comprises,” and/or “comprising,” “include,” “includes,” and/or “including,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features, and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structure and the combination of parts and economies of manufacture, may become more apparent upon consideration of the following description with reference to the  accompanying drawings, all of which form a part of this disclosure. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the present disclosure. It is understood that the drawings are not to scale.
The flowcharts used in the present disclosure illustrate operations that systems implement according to some embodiments of the present disclosure. It is to be expressly understood that the operations of the flowcharts may be implemented out of order. Conversely, the operations may be implemented in inverted order, or simultaneously. Moreover, one or more other operations may be added to the flowcharts. One or more operations may be removed from the flowcharts.
Moreover, the systems and methods in the present disclosure may be applied to any application scenario in which identity risk assessment is required. For example, the system or method of the present disclosure may be applied to different online service platforms. For instance, the system or method of the present disclosure may be applied to different transportation systems including land, ocean, aerospace, or the like, or any combination thereof. The vehicle of the transportation systems may include a taxi, a private car, a hitch, a bus, a train, a bullet train, a high-speed rail, a subway, a vessel, an aircraft, a spaceship, a hot-air balloon, a driverless vehicle, a bicycle, a tricycle, a motorcycle, or the like, or any combination thereof. The system or method of the present disclosure may be applied to taxi hailing, chauffeur services, delivery service, carpool, bus service, take-out service, driver hiring, vehicle hiring, bicycle sharing service, train service, subway service, shuttle services, location service, or the like. As another example, the system or method of the present disclosure may be applied to shopping service, learning service, fitness service, financial service, social service, or the like. The application scenarios of the system or method of the present disclosure may include a web page, a plug-in of a browser, a client terminal, a custom system, an internal analysis system, an artificial intelligence robot, or the like, or any combination thereof.
An aspect of the present disclosure relates to systems and methods for identity risk assessment. The systems may receive a target identity (ID) verification event of a first user account from a user terminal. The target ID verification event may relate to an ID verification request using a target ID number. The systems may obtain online behavior features of the first user account and a target ID number based on the target ID verification event. The systems may further determine whether the target ID verification event is a malicious ID verification event based on the online behavior features. In response to determining that the target ID verification event is a malicious ID verification event, the systems may block the target ID number and/or the first user account. In response to determining that the target ID verification event is a safe ID verification event, the systems may perform ID verification on the target ID number.
The systems and methods for identity risk assessment may perform identity risk assessment before the ID verification, such that the identity risk assessment is performed as early as possible. The systems and methods for identity risk assessment may perform the identity risk assessment based on the online behavior features of the first user account and the target ID number, instead of official identity information, which is subject to information delay, thereby making the identity risk assessment more accurate.
FIG. 1 is a schematic diagram of an exemplary identity risk assessment system according to some embodiments. The identity risk assessment system 100 may include a server 110, a network 120, a user terminal 140, a storage device 150, and a positioning system 160.
In some embodiments, the server 110 may be a single server or a server group. The server group may be centralized, or distributed (e.g., server 110 may be a distributed system) . In some embodiments, the server 110 may be local or remote. For example, the server 110 may access information and/or data stored in the user terminal 140, and/or the storage device 150 via the network 120. As  another example, the server 110 may be directly connected to the user terminal 140, and/or the storage device 150 to access stored information and/or data. In some embodiments, the server 110 may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof. In some embodiments, the server 110 may be implemented on an online transportation service platform and be configured to allocate transportation orders of one or more service requester terminals to one or more service provider terminals. When a target identity (ID) number logs on a customer application executing on a user terminal to interact with the online service transportation service platform, the server 110 may perform an ID verification process. In some embodiments, the server 110 may be implemented on a computing device 200 having one or more components illustrated in FIG. 2 in the present disclosure.
In some embodiments, the server 110 may include a processing engine 112. The processing engine 112 may process information and/or data to perform one or more functions described in the present disclosure. For example, the processing engine 112 may receive a target identity (ID) verification event of a first user account from a user terminal. The processing engine 112 may obtain a first feature associated with the first user account and a second feature associated with the target ID number based on the target ID verification event. The processing engine 112 may also determine whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature. In response to determining that the target ID verification event is a malicious ID verification event, the processing engine 112 may further block at least one of the first user account and the target ID number. In some embodiments, the processing engine 112 may include one or more processing engines (e.g., single-core processing engine (s) or multi-core processor (s) ) . Merely by way of example,  the processing engine 112 may include one or more hardware processors, such as a central processing unit (CPU) , an application-specific integrated circuit (ASIC) , an application-specific instruction-set processor (ASIP) , a graphics processing unit (GPU) , a physics processing unit (PPU) , a digital signal processor (DSP) , a field-programmable gate array (FPGA) , a programmable logic device (PLD) , a controller, a microcontroller unit, a reduced instruction-set computer (RISC) , a microprocessor, or the like, or any combination thereof.
The network 120 may facilitate the exchange of information and/or data. In some embodiments, one or more components in the identity risk assessment system 100 (e.g., the server 110, the user terminal 140, the storage device 150, and the positioning system 160) may send information and/or data to other component(s) in the identity risk assessment system 100 via the network 120. For example, the processing engine 112 may receive a target identity (ID) verification event of a first user account from the user terminal 140 via the network 120. In some embodiments, the network 120 may be any type of wired or wireless network, or a combination thereof. Merely by way of example, the network 120 may include a cable network, a wireline network, an optical fiber network, a telecommunications network, an intranet, the Internet, a local area network (LAN), a wide area network (WAN), a wireless local area network (WLAN), a metropolitan area network (MAN), a public telephone switched network (PSTN), a Bluetooth TM network, a ZigBee network, a near field communication (NFC) network, or the like, or any combination thereof. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points such as base stations and/or internet exchange points 120-1, 120-2, …, through which one or more components of the identity risk assessment system 100 may be connected to the network 120 to exchange data and/or information.
In some embodiments, the user terminal 140 may include a mobile device  140-1, a tablet computer 140-2, a laptop computer 140-3, or the like, or any combination thereof. In some embodiments, the mobile device 140-1 may include a smart home device, a wearable device, a mobile equipment, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart lighting device, a control device of an intelligent electrical apparatus, a smart monitoring device, a smart television, a smart video camera, an interphone, or the like, or any combination thereof. In some embodiments, the wearable device may include a bracelet, footgear, glasses, a helmet, a watch, clothing, a backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the mobile equipment may include a mobile phone, a personal digital assistance (PDA) , a gaming device, a navigation device, a point of sale (POS) device, a laptop, a desktop, or the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, a virtual reality glass, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or the augmented reality device may include a Google Glass TM, a RiftCon TM, a Fragments TM, a Gear VR TM, etc. In some embodiments, the user terminal 140 may be a device with positioning technology for locating the position of the user terminal 140. In some embodiments, the user terminal 140 may send positioning information to the server 110.
In some embodiments, the user terminal 140 may include one or more service requester terminals registered with an online transportation service platform and one or more service provider terminals registered with the online transportation service platform. Each of the one or more service requester terminals may be used by a service requester to send a transportation order via the online transportation service platform. Each of the one or more service provider terminals may be used by a service provider to provide an online transportation service.
The storage device 150 may store data and/or instructions. In some embodiments, the storage device 150 may store data obtained from the user terminal 140 and/or the processing engine 112. For example, the storage device 150 may store a first feature associated with the first user account and a second feature associated with the target ID number obtained from the user terminal 140. The first feature associated with the first user account may include a historical ID verification behavior feature of the first user account. The second feature associated with the target ID number may include a historical ID verification behavior feature of one or more second user accounts using the target ID number. In some embodiments, the storage device 150 may store data and/or instructions that the server 110 may execute or use to perform exemplary methods described in the present disclosure. For example, the storage device 150 may store instructions that the processing engine 112 may execute or use to determine whether the target ID verification event is a malicious ID verification event. In some embodiments, the storage device 150 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. Exemplary mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. Exemplary removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. Exemplary volatile read-and-write memory may include a random access memory (RAM). Exemplary RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc. Exemplary ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically-erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), and a digital versatile disk ROM, etc. In some embodiments, the storage device 150 may be implemented on a cloud platform. Merely by way of example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, or the like, or any combination thereof.
In some embodiments, the storage device 150 may be connected to the network 120 to communicate with one or more components in the identity risk assessment system 100 (e.g., the server 110, the user terminal 140, etc. ) . One or more components in the identity risk assessment system 100 may access the data or instructions stored in the storage device 150 via the network 120. In some embodiments, the storage device 150 may be directly connected to or communicate with one or more components in the identity risk assessment system 100 (e.g., the server 110, the user terminal 140, etc. ) . In some embodiments, the storage device 150 may be part of the server 110.
The positioning system 160 may determine information associated with an object, for example, the user terminal 140. For example, the positioning system 160 may determine a location of the user terminal 140 in real time. In some embodiments, the positioning system 160 may be a global positioning system (GPS), a global navigation satellite system (GLONASS), a compass navigation system (COMPASS), a BeiDou navigation satellite system, a Galileo positioning system, a quasi-zenith satellite system (QZSS), etc. The information may include a location, an elevation, a velocity, or an acceleration of the object, an accumulative mileage number, or a current time. The location may be in the form of coordinates, such as a latitude coordinate and a longitude coordinate, etc. The positioning system 160 may include one or more satellites, for example, a satellite 160-1, a satellite 160-2, and a satellite 160-3. The satellites 160-1 through 160-3 may determine the information mentioned above independently or jointly. The positioning system 160 may send the information mentioned above to the network 120, or the user terminal 140 via wireless connections.
FIG. 2 is a schematic diagram illustrating exemplary hardware and/or software components of a computing device on which the processing engine 112  may be implemented according to some embodiments of the present disclosure. As illustrated in FIG. 2, the computing device 200 may include a processor 210, a storage 220, an input/output (I/O) 230, and a communication port 240.
The processor 210 (e.g., logic circuits coupled to the communication port 240) may execute computer instructions (e.g., program code) and perform functions of the processing engine 112 in accordance with techniques described herein. For example, the processor 210 may verify a target identity (ID) number logging on an application (also referred to as a customer application) executing on a first user terminal (e.g., the user terminal 140) by performing a verification process. For example, the processor 210 may include interface circuits 210-a and processing circuits 210-b therein. The interface circuits may be configured to receive electronic signals from a bus (not shown in FIG. 2) , wherein the electronic signals encode structured data and/or instructions for the processing circuits to process. The processing circuits may conduct logic calculations, and then determine a conclusion, a result, and/or an instruction encoded as electronic signals. Then the interface circuits may send out the electronic signals from the processing circuits via the bus.
The computer instructions may include, for example, routines, programs, objects, components, data structures, procedures, modules, and functions, which perform particular functions described herein. For example, the processor 210 may process a first feature associated with the first user account and a second feature associated with the target ID number obtained from the user terminal 140, the storage device 150, and/or any other component of the identity risk assessment system 100. In some embodiments, the processor 210 may include one or more hardware processors, such as a microcontroller, a microprocessor, a reduced instruction set computer (RISC), an application-specific integrated circuit (ASIC), an application-specific instruction-set processor (ASIP), a central processing unit (CPU), a graphics processing unit (GPU), a physics processing unit (PPU), a microcontroller unit, a digital signal processor (DSP), a field-programmable gate array (FPGA), an advanced RISC machine (ARM), a programmable logic device (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combinations thereof.
Merely for illustration, only one processor is described in the computing device 200. However, it should be noted that the computing device 200 in the present disclosure may also include multiple processors, thus operations and/or method steps that are performed by one processor as described in the present disclosure may also be jointly or separately performed by the multiple processors. For example, if in the present disclosure the processor of the computing device 200 executes both step A and step B, it should be understood that step A and step B may also be performed by two or more different processors jointly or separately in the computing device 200 (e.g., a first processor executes step A and a second processor executes step B, or the first and second processors jointly execute steps A and B) .
The storage 220 may store data/information obtained from the user terminal 140, the storage device 150, and/or any other component of the identity risk assessment system 100. In some embodiments, the storage 220 may include a mass storage, a removable storage, a volatile read-and-write memory, a read-only memory (ROM), or the like, or any combination thereof. For example, the mass storage may include a magnetic disk, an optical disk, a solid-state drive, etc. The removable storage may include a flash drive, a floppy disk, an optical disk, a memory card, a zip disk, a magnetic tape, etc. The volatile read-and-write memory may include a random access memory (RAM). The RAM may include a dynamic RAM (DRAM), a double data rate synchronous dynamic RAM (DDR SDRAM), a static RAM (SRAM), a thyristor RAM (T-RAM), and a zero-capacitor RAM (Z-RAM), etc. The ROM may include a mask ROM (MROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a compact disk ROM (CD-ROM), and a digital versatile disk ROM, etc. In some embodiments, the storage 220 may store one or more programs and/or instructions to perform exemplary methods described in the present disclosure. For example, the storage 220 may store a program for the processing engine 112 for determining whether the target ID verification event is a malicious ID verification event.
The I/O 230 may input and/or output signals, data, information, etc. In some embodiments, the I/O 230 may enable a user interaction with the processing engine 112. For example, a user of the identity risk assessment system 100 may input a predetermined parameter through the I/O 230. In some embodiments, the I/O 230 may include an input device and an output device. Examples of the input device may include a keyboard, a mouse, a touch screen, a microphone, or the like, or a combination thereof. Examples of the output device may include a display device, a loudspeaker, a printer, a projector, or the like, or a combination thereof. Examples of the display device may include a liquid crystal display (LCD) , a light-emitting diode (LED) -based display, a flat panel display, a curved screen, a television device, a cathode ray tube (CRT) , a touch screen, or the like, or a combination thereof.
The communication port 240 (e.g., one or more network interfaces) may be connected to a network (e.g., the network 120) to facilitate data communications. The communication port 240 may establish connections between the processing engine 112 and the user terminal 140, the positioning system 160, or the storage device 150. For example, the communication port 240 may be configured to communicate with the user terminal 140, such as user terminals registered with an online transportation service platform. The connection may be a wired connection, a wireless connection, any other communication connection that can enable data transmission and/or reception, and/or any combination of these connections. The wired connection may include, for example, an electrical cable, an optical cable, a telephone wire, or the like, or any combination thereof. The wireless connection  may include, for example, a Bluetooth TM link, a Wi-Fi TM link, a WiMax TM link, a WLAN link, a ZigBee link, a mobile network link (e.g., 3G, 4G, 5G, etc. ) , or the like, or a combination thereof. In some embodiments, the communication port 240 may be and/or include a standardized communication port, such as RS232, RS485, etc.
FIG. 3 is a schematic diagram illustrating exemplary hardware and/or software components of a mobile device on which the user terminal 140 may be implemented according to some embodiments of the present disclosure. As illustrated in FIG. 3, the mobile device 300 may include a communication platform 310, a display 320, a graphic processing unit (GPU) 330, a central processing unit (CPU) 340, an I/O 350, a memory 360, and a storage 390. In some embodiments, any other suitable component, including but not limited to a system bus or a controller (not shown) , may also be included in the mobile device 300. In some embodiments, a mobile operating system 370 (e.g., iOS TM, Android TM, Windows Phone TM, etc. ) and one or more applications 380 may be loaded into the memory 360 from the storage 390 in order to be executed by the CPU 340. The applications 380 may include a browser or any other suitable mobile apps for receiving and rendering information relating to image processing or other information from the processing engine 112. User interactions with the information stream may be achieved via the I/O 350 and provided to the processing engine 112 and/or other components of the identity risk assessment system 100 via the network 120.
To implement various modules, units, and their functionalities described in the present disclosure, computer hardware platforms may be used as the hardware platform (s) for one or more of the elements described herein. A computer with user interface elements may be used to implement a personal computer (PC) or any other type of work station or terminal device. A computer may also act as a server if appropriately programmed.
One of ordinary skill in the art would understand that when an element of the identity risk assessment system 100 performs an operation, the element may perform the operation through electrical signals and/or electromagnetic signals. For example, when the processing engine 112 processes a task, such as making a determination, or identifying information, the processing engine 112 may operate logic circuits in its processor to process such a task. When the processing engine 112 receives data (e.g., a user input) from the user terminal 140, a processor of the processing engine 112 may receive electrical signals including the data. The processor of the processing engine 112 may receive the electrical signals through an input port. If the user terminal 140 communicates with the processing engine 112 via a wired network, the input port may be physically connected to a cable. If the user terminal 140 communicates with the processing engine 112 via a wireless network, the input port of the processing engine 112 may be one or more antennas, which may convert the electrical signals to electromagnetic signals. Within an electronic device, such as the user terminal 140, and/or the server 110, when a processor thereof processes an instruction, sends out an instruction, and/or performs an action, the instruction and/or action is conducted via electrical signals. For example, when the processor retrieves or saves data from a storage medium (e.g., the storage device 150), it may send out electrical signals to a read/write device of the storage medium, which may read or write structured data in the storage medium. The structured data may be transmitted to the processor in the form of electrical signals via a bus of the electronic device. Here, an electrical signal may refer to one electrical signal, a series of electrical signals, and/or a plurality of discrete electrical signals.
FIG. 4 is a schematic block diagram illustrating an exemplary processing engine according to some embodiments of the present disclosure. As shown in FIG. 4, the processing engine 112 may include a receiving module 402, an acquisition module 404, and a determination module 406.
The receiving module 402 may be configured to receive data and/or information. For example, the receiving module 402 may receive a target identity (ID) verification event of a first user account from a user terminal (e.g., the user  terminal 140) . More descriptions regarding the receiving of the target ID verification event may be found elsewhere in the present disclosure. See, e.g., operation 502 in FIG. 5 and relevant descriptions thereof.
The acquisition module 404 may be configured to obtain data or information from other modules or units inside or outside the processing engine 112. For example, the acquisition module 404 may obtain a first feature associated with the first user account and a second feature associated with the target ID number based on the target ID verification event. In some embodiments, the first feature and the second feature may include an online behavior feature of the first user account and the target ID number. In some embodiments, the first feature associated with the first user account may include a historical ID verification behavior feature of the first user account. The second feature associated with the target ID number may include a historical ID verification behavior feature of one or more second user accounts using the target ID number. More descriptions regarding the obtaining of the first feature and the second feature may be found elsewhere in the present disclosure. See, e.g., operation 504 in FIG. 5 and relevant descriptions thereof.
The determination module 406 may be configured to determine whether the target ID verification event is a malicious ID verification event. More descriptions regarding the determination of whether the target ID verification event is a malicious ID verification event may be found elsewhere in the present disclosure. See, e.g., operation 506 in FIG. 5 and relevant descriptions thereof.
In some embodiments, in response to determining that the target ID verification event is a malicious ID verification event, the determination module 406 may block the first user account and/or the target ID number. For example, the processing engine 112 may prevent the first user account from continuing to submit an ID verification request, prevent any other user account from using the target ID number to submit an ID verification request, forbid the first user account from accessing the identity risk assessment system 100, allow the first user account to browse the contents of the identity risk assessment system 100 online but forbid the first user account from using the service of the identity risk assessment system 100 (e.g., sending service requests), demote an evaluation of the first user account in the identity risk assessment system 100, etc. More descriptions regarding blocking the first user account and/or the target ID number may be found elsewhere in the present disclosure. See, e.g., operation 508 in FIG. 5 and relevant descriptions thereof.
In some embodiments, in response to determining that the target ID verification event is not a malicious ID verification event, the determination module 406 may perform ID verification on the target ID number by comparing the target ID number with information from a third-party device. More descriptions regarding performing ID verification on the target ID number by comparing the target ID number with information from a third-party device may be found elsewhere in the present disclosure. See, e.g., operation 510 in FIG. 5 and relevant descriptions thereof.
It should be noted that the above description is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the processing engine 112 may further include a storage module (not shown in FIG. 4). The storage module may be configured to store data generated during any process performed by any component of the processing engine 112. As another example, each of the components of the processing engine 112 may include a storage device. Additionally, or alternatively, the components of the processing engine 112 may share a common storage device.
FIG. 5 is a flowchart illustrating an exemplary process for identity risk assessment according to some embodiments of the present disclosure. In some embodiments, the process 500 may be implemented in the identity risk assessment system 100 illustrated in FIG. 1. For example, the process 500 may be stored in a storage medium (e.g., the storage device 150, or the storage 220 of the computing device 200) as a form of instructions, and invoked and/or executed by the server 110 (e.g., the processing engine 112 of the server 110, the processor 210 of the computing device 200, or one or more modules in the processing engine 112 illustrated in FIG. 4). The operations of the illustrated process 500 presented below are intended to be illustrative. In some embodiments, the process 500 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order of the operations of the process 500 as illustrated in FIG. 5 and described below is not intended to be limiting.
In 502, the processing engine 112 (e.g., the receiving module 402) may receive a target identity (ID) verification event of a first user account from a user terminal (e.g., the user terminal 140) .
In some embodiments, the user terminal 140 may establish a communication (e.g., wireless communication) with a server (e.g., the server 110), through an application (also referred to as a customer application, e.g., the application 380 in FIG. 3) installed in the user terminal 140 or a webpage in a browser via a network (e.g., the network 120). More descriptions for the user terminal 140 may be found elsewhere in the present disclosure (e.g., FIG. 1 and the descriptions thereof). The application may be associated with the identity risk assessment system 100. For example, the application may be a taxi-hailing application associated with the identity risk assessment system 100.
In some embodiments, a new user may submit, through the user terminal 140 to the processing engine 112, registration information, such as a user name, a mobile phone number, a password, etc., on the registration page of the application or the webpage to register, and then the processing engine 112 may generate a user account (e.g., the first user account) for the new user based on the registration information. After completing the registration, the user account may be required to submit an ID verification request (e.g., providing ID information) for ID verification for safety and anti-fraud purposes. The ID information may include an ID number (e.g., the target ID number) that refers to information that is a unique identifier for a single person. For example, the ID number may include an ID card number, a passport number, or the like, or any combination thereof. In some embodiments, the ID information may further include a name (e.g., a real name), a date of birth, profile information, or the like, or any combination thereof.
In some embodiments, when a user account (e.g., the first user account) initiates a sensitive issue (e.g., changing a password, receiving an exclusive issue, etc. ) , the user account may be required to submit an ID verification request (e.g., providing ID information) for ID verification for safety and anti-fraud purposes.
In some embodiments, after the first user account is asked to submit an ID verification request, the ID information may be input through the user terminal 140. A target ID verification event may be initiated, under the first user account based on the input ID information, in the user terminal 140. For example, the target ID verification event may include a target ID number. As another example, the target ID verification event may further include a name (e.g., a real name), a date of birth, profile information, or the like, or any combination thereof. The target ID verification event may be transmitted to the processing engine 112 through the user terminal 140 via the network 120.
In some embodiments, the target ID verification event may include submission of an ID verification request for verifying the first user account using the target ID number. Merely by way of example, a user may input the ID information on the ID verification interface of the application installed in the user terminal 140. After completing inputting the ID information, the user may send out an ID verification request including the ID information to the server 110 by pressing a button (e.g., a "submit" button) on the ID verification interface of the application  installed in the user terminal 140. Upon receiving the ID verification request, the server 110 may determine that the ID verification request is formally sent out.
In some embodiments, the target ID verification event may include information indicating that the first user account is likely to submit an ID verification request for verifying the first user account using the target ID number before the ID verification request is actually submitted. Merely by way of example, the application installed in the user terminal 140 may direct the user terminal 140 to monitor, continuously or periodically, input information from a user and transmit the input information to the identity risk assessment system 100 via the network 120. Consequently, the user terminal 140 may inform the identity risk assessment system 100 about the user’s input information in real-time or substantially real-time. As a result, when the user starts to input the target ID number on the ID verification interface of the application installed in the user terminal 140, the identity risk assessment system 100 may receive enough information to determine whether the user is likely to submit an ID verification request, i.e., an intention of the user. For example, when the user inputs all or part of the target ID number, and before sending out the ID verification request to the identity risk assessment system 100, the identity risk assessment system 100 may have already received the target ID number, and determine that the user is likely to submit an ID verification request.
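Merely for illustration, the following Python sketch shows one server-side heuristic for inferring such an intention from partially input ID information. The 18-character ID length and the one-half completeness ratio are hypothetical assumptions for this sketch.

def likely_to_submit(partial_input: str, expected_id_length: int = 18) -> bool:
    # Treat the interaction as an imminent ID verification request once
    # most of the target ID number has been typed.
    typed = [c for c in partial_input if c.isalnum()]
    return len(typed) >= expected_id_length // 2

print(likely_to_submit("11010519491231"))  # True: 14 of 18 characters typed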
In 504, the processing engine 112 (e.g., the acquisition module 404) may obtain, based on the target ID verification event, a first feature associated with the first user account and a second feature associated with the target ID number. In some embodiments, the first feature and the second feature may include an online behavior feature of the first user account and the target ID number. In some embodiments, the first feature associated with the first user account may include a historical ID verification behavior feature of the first user account. The second feature associated with the target ID number may include a historical ID verification behavior feature of one or more second user accounts using the target ID number.
In some embodiments, the first feature associated with the first user account may include a count (or number) of different ID numbers the first user account has submitted for ID verification, a count (or number) of different names the first user account has submitted for ID verification, a count (or number) of different dates of birth the first user account has submitted for ID verification, a count (or number) of times of successful ID verification of the first user account, a count (or number) of times of failed ID verification of the first user account, reasons of the failed ID verification of the first user account, or the like, or any combination thereof. The reasons of the failed ID verification may include that an ID number does not match a date of birth, profile information, and/or a name submitted by the first user account, that the submitted ID number is invalid, that an ID verification event is determined as malicious, etc.
In some embodiments, the first feature associated with the first user account may include a real-time interaction feature of the target ID verification event, a geographic location of the user terminal 140 when the target ID verification event is received, historical geographic locations where the first user account was signed up and/or logged in, an Internet protocol (IP) address of the user terminal 140, historical IP addresses where the first user account was signed up and/or logged in, historical suspicious activity the first user account performed, credit information of the first user account, or the like, or any combination thereof. The real-time interaction feature of the target ID verification event may include, when the target ID verification event is being input, an input speed, a typo, a count (or number) of times of re-inputting operations, a count (or number) of times of modification operations, a time interval between modification operations, or the like, or any combination thereof. The historical suspicious activity the first user account performed may include visiting a suspect website or application, or abusing the service provided by the identity risk assessment system 100 (e.g., false consumption, malicious evaluation, malicious cancellation of orders, etc.). For example, an activity of visiting a website that other malicious user accounts have visited may be considered suspect. As another example, an activity of visiting an unsafe website may be considered suspect. The credit information of the first user account may include evaluation of the first user account in the identity risk assessment system 100.
In some embodiments, the second feature associated with the target ID number may include a count (or number) of the one or more second user accounts that have submitted the target ID number for ID verification, a count (or number) of different second user accounts of the one or more second user accounts that have successful ID verification using the target ID number, a count (or number) of different second user accounts of the one or more second user accounts that have failed ID verification using the target ID number, a count (or number) of different names submitted, along with the target ID number, by the one or more second user accounts for ID verification, a count (or number) of times that the target ID number was submitted by the one or more second user accounts, a count (or number) of times of successful verification of the target ID number, a count (or number) of times of failed verification of the target ID number, or reasons of the failed ID verification of the target ID number.
In some embodiments, the second feature associated with the target ID number may include credit information of the target ID number and/or criminal information of the target ID number. In some embodiments, the processing engine 112 may obtain the credit information of the target ID number from the identity risk assessment system 100 or a third-party device. Merely by way of example, the processing engine 112 may obtain the credit information of the target ID number from other online service platforms and/or an official platform (e.g., a credit information platform). In some embodiments, the criminal information may be obtained from a third-party device (e.g., a public security platform).
In some embodiments, the processing engine 112 may obtain a plurality of historical ID verification events related to the first user account and/or the target ID  number. In some embodiments, at least one of the plurality of historical ID verification events may be initiated by the first user account for ID verification using the target ID number or other ID numbers. In some embodiments, at least one of the plurality of historical ID verification events may be initiated by other user accounts for ID verification using the target ID number.
In some embodiments, the processing engine 112 may extract features, focusing on online behavior features of the first user account and the target ID number, from the plurality of historical ID verification events and convert the extracted features into structured data to obtain the first feature associated with the first user account and the second feature associated with the target ID number. In some embodiments, the structured data may be stored in a storage device (e.g., the storage device 150, the storage 220, the storage 390, or an external source). The processing engine 112 may retrieve the structured data from the storage device to obtain the first feature associated with the first user account and the second feature associated with the target ID number.
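Merely for illustration, the following Python sketch shows one way the extracted features might be converted into structured data, here a fixed-order numeric vector. The feature names mirror the first and second features described above but are hypothetical assumptions for this sketch.

FEATURE_ORDER = [
    "distinct_ids_submitted",        # first feature: account behavior
    "distinct_names_submitted",
    "failed_verifications",
    "accounts_using_id",             # second feature: ID-number behavior
    "names_submitted_with_id",
    "failed_verifications_of_id",
]

def to_vector(first_feature: dict, second_feature: dict) -> list:
    merged = {**first_feature, **second_feature}
    # Missing values default to 0 so every vector has the same layout.
    return [float(merged.get(name, 0)) for name in FEATURE_ORDER]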
In 506, the processing engine 112 (e.g., the determination module 406) may determine whether the target ID verification event is a malicious ID verification event. In some embodiments, the processing engine 112 may determine whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature. In some embodiments, a determination that the target ID verification event is a malicious ID verification event may include that the first user account is a malicious user account and/or that the target ID number is a malicious ID.
In some embodiments, the risk assessment model may include a risk assessment policy. The processing engine 112 may determine whether the target ID verification event is a malicious ID verification event by comparing the first feature and the second feature with the risk assessment policy. For example, if the processing engine 112 determines that the first feature and/or the second feature  satisfies the risk assessment policy, the processing engine 112 may determine that the target ID verification event is a malicious ID verification event.
In some embodiments, the risk assessment policy may include that the count of times that the target ID number was submitted by the one or more second user accounts is more than a first threshold, that the count of the one or more second user accounts that have submitted the target ID number for ID verification is more than a second threshold, that the count of different ID numbers of failed ID verifications (with a same failed reason) submitted by the first user account is more than a third threshold, that the count of different names of failed ID verifications (with a same failed reason) submitted by the first user account is more than a fourth threshold, that the count of different names the first user account has submitted for ID verification is more than a fifth threshold, that the count of different ID numbers the first user account has submitted for ID verification is more than a sixth threshold, that the count of different combinations of ID number and name the first user account has submitted for ID verification is more than a seventh threshold, or the like, or any combination thereof.
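Merely for illustration, the following Python sketch shows a rule-based risk assessment policy of this kind: each assessment feature is paired with a threshold, and exceeding any threshold flags the event as malicious. The feature names and threshold values are hypothetical placeholders, not values recited by the present disclosure.

RISK_POLICY = {
    "times_id_submitted_by_others": 5,     # first threshold
    "accounts_using_id": 3,                # second threshold
    "distinct_ids_failed_same_reason": 4,  # third threshold
    "distinct_names_submitted": 3,         # fifth threshold
}

def satisfies_policy(features: dict, policy: dict = RISK_POLICY) -> bool:
    # The event is treated as malicious if any assessment feature
    # exceeds its corresponding threshold.
    return any(features.get(name, 0) > threshold
               for name, threshold in policy.items())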
In some embodiments, the one or more assessment features and the one or more thresholds may be set manually by a user, according to an experience value, or according to a default setting of the identity risk assessment system 100, or determined by the processing engine 112 according to an actual need.
In some embodiments, the one or more assessment features and the one or more thresholds may be provided using a classification algorithm to analyze positive samples and negative samples. Details regarding the positive samples and the negative samples may be found elsewhere in the present disclosure (e.g., the description in connection with FIG. 6A and FIG. 6B). The classification algorithm may be any existing suitable classification algorithm, for example, a dynamic programming algorithm or a greedy algorithm, which is not limited in the present disclosure.
In some embodiments, the one or more assessment features and the one or more thresholds may be provided by generating a first trained machine learning model based on the positive samples and the negative samples. The first trained machine learning model may be any existing suitable machine learning model, for example, a Random Forest model, which is not limited in the present disclosure.
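Merely for illustration, one way a trained model can yield assessment features and thresholds is to fit a shallow decision tree (shown here as a single-tree simplification of the Random Forest mentioned above) and read the learned split features and split thresholds, as in the following Python sketch using scikit-learn. The training data is synthetic and purely illustrative.

from sklearn.tree import DecisionTreeClassifier

X = [[6, 4], [7, 5], [1, 0], [0, 1]]  # rows: sample feature vectors
y = [1, 1, 0, 0]                      # 1 = malicious, 0 = safe

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
for node in range(tree.tree_.node_count):
    feature = tree.tree_.feature[node]
    if feature >= 0:  # internal split node; leaves are marked with -2
        print(f"assessment feature {feature}, threshold "
              f"{tree.tree_.threshold[node]:.2f}")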
In some embodiments, the risk assessment policy may be determined based on an evaluation function (e.g., an ROI function). In some embodiments, a plurality of candidate policies may be generated based on a portion of the positive samples and the negative samples. Each of the plurality of candidate policies may be tested based on the other portion of the positive samples and the negative samples to determine an accuracy rate, as the evaluation function value, of identifying malicious ID verification events using the candidate policy. The processing engine 112 may determine the candidate policy with the highest evaluation function value as the risk assessment policy configured to determine whether the target ID verification event is a malicious ID verification event. In some embodiments, the operation of determining the risk assessment policy based on an evaluation function may be integrated into the classification algorithm or the first trained machine learning model.
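Merely for illustration, the following Python sketch shows candidate policy selection by an evaluation function, here the accuracy rate on held-out samples. The candidate policies and held-out samples are hypothetical placeholders, and the rule check repeats the earlier policy sketch so the snippet stands alone.

def satisfies_policy(features: dict, policy: dict) -> bool:
    return any(features.get(k, 0) > v for k, v in policy.items())

def accuracy(policy: dict, samples: list) -> float:
    correct = sum(1 for features, label in samples
                  if int(satisfies_policy(features, policy)) == label)
    return correct / len(samples)

def select_policy(candidates: list, held_out: list) -> dict:
    # The candidate with the highest evaluation function value wins.
    return max(candidates, key=lambda policy: accuracy(policy, held_out))

candidates = [{"accounts_using_id": 3}, {"accounts_using_id": 5}]
held_out = [({"accounts_using_id": 6}, 1), ({"accounts_using_id": 4}, 0)]
print(select_policy(candidates, held_out))  # {'accounts_using_id': 5}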
In some embodiments, the risk assessment model may include a second trained machine learning model. The second trained machine learning model may be any existing suitable machine learning model, for example, a Random Forest model, which is not limited in the present disclosure. The processing engine 112 may determine whether the target ID verification event is a malicious ID verification event by using the second trained machine learning model to analyze the first feature and the second feature. The second trained machine learning model may include a decision structure for a malicious ID verification pattern trained based on  the positive samples and the negative samples. The decision structure may specify that the target ID verification event, if matching the malicious ID verification pattern, is indicative of a malicious ID verification event. In some embodiments, the malicious ID verification pattern may include a malicious ID pattern and/or a malicious user account pattern. The malicious ID pattern may specify that the target ID number, if matching the malicious ID pattern, is indicative of a malicious ID number. The malicious user account pattern may specify that the first user account, if matching the malicious user account pattern, is indicative of a malicious user account. In some embodiments, the second trained machine learning model may include a decision structure for a safe ID verification pattern trained based on the positive samples and the negative samples. The decision structure may specify that the target ID verification event, if matching the safe ID verification pattern, is indicative of a safe ID verification event. In some embodiments, the safe ID verification pattern may include a safe ID pattern and/or a safe user account pattern. The safe ID pattern may specify that the target ID number, if matching the safe ID pattern, is indicative of a safe ID number. The safe user account pattern may specify that the first user account, if matching the safe user account pattern, is indicative of a safe user account.
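Merely for illustration, the following Python sketch shows a second trained machine learning model of this kind: a Random Forest fit on positive and negative samples, whose learned decision structure flags feature vectors matching the malicious ID verification pattern. The training data is synthetic and purely illustrative.

from sklearn.ensemble import RandomForestClassifier

X_train = [[6, 4, 3], [7, 5, 4], [1, 0, 0], [0, 1, 0]]
y_train = [1, 1, 0, 0]  # 1 = malicious pattern, 0 = safe pattern

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Analyze the first feature and the second feature of a target event.
target_event_features = [[5, 4, 2]]
if model.predict(target_event_features)[0] == 1:
    print("target ID verification event matches the malicious pattern")
else:
    print("target ID verification event appears safe")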
For example, the malicious ID verification pattern may include that the count of times that the target ID number was submitted by the one or more second user accounts is more than a first threshold, that the count of the one or more second user accounts that have submitted the target ID number for ID verification is more than a second threshold, that the count of different ID numbers of failed ID verifications (with a same failed reason) submitted by the first user account is more than a third threshold, that the count of different names of failed ID verifications (with a same failed reason) submitted by the first user account is more than a fourth threshold, that the count of different names the first user account has submitted for ID verification is more than a fifth threshold, that the count of different ID numbers the first user account has submitted for ID verification is more than a sixth threshold, that the count of different combinations of ID number and name the first user account has submitted for ID verification is more than a seventh threshold, or the like, or any combination thereof.
As another example, the malicious ID verification pattern may include that a login or sign-up location (e.g., a geographic location and/or an IP address) of a user account is changed too frequently (e.g., a user account may be logged in at 9:00 a.m. from China and then logged in at 9:30 a.m. from London, which may indicate that malicious software distributed across multiple nations has taken over the user account). As still another example, the malicious ID verification pattern may include that real-time interaction with a computing device mimics non-human input behavior (e.g., keystrokes per minute above a human capability threshold).
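Merely for illustration, a malicious ID verification pattern combining several of the above conditions may be sketched as follows; the threshold values and field names are illustrative assumptions rather than values prescribed by the present disclosure.

```python
# Sketch: match a target event against a rule-based malicious ID
# verification pattern built from the threshold conditions above.
from dataclasses import dataclass

@dataclass
class EventFeatures:
    times_id_submitted_by_others: int  # compared with the first threshold
    accounts_submitting_id: int        # compared with the second threshold
    distinct_names_submitted: int      # compared with the fifth threshold
    login_km_per_hour: float           # travel speed implied by two logins
    keystrokes_per_minute: float       # real-time interaction feature

def matches_malicious_pattern(f: EventFeatures) -> bool:
    return (
        f.times_id_submitted_by_others > 20   # first threshold (assumed)
        or f.accounts_submitting_id > 10      # second threshold (assumed)
        or f.distinct_names_submitted > 5     # fifth threshold (assumed)
        or f.login_km_per_hour > 1000         # impossible-travel login change
        or f.keystrokes_per_minute > 600      # above human input capability
    )
```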
In some embodiments, the risk assessment model may be provided based on the positive samples and the negative samples. The positive samples and the negative samples may include first positive samples and first negative samples that both correspond to the first feature. The positive samples and the negative samples may also include second positive samples and second negative samples that both correspond to the second feature. In some embodiments, the positive samples and the negative samples may include training samples and test samples. In a training stage, the training samples (e.g., each input of the training samples having a known output) may be processed by a preliminary model so that the preliminary model may learn how to provide an output for new input data by generalizing from the training data. After learning is complete, test samples may be processed by the learned preliminary model to validate the results of learning. After passing the test, the second trained machine learning model may be obtained.
Merely by way of example, the second trained machine learning model may be obtained by training a preliminary model according to a supervised learning algorithm. The processing engine 112 may obtain training samples and a preliminary model. Each training sample may include a first sample feature of a sample user account of a sample ID verification event, a second sample feature of a sample ID number of the sample ID verification event, and a classification of the sample ID verification event (e.g., whether the sample ID verification event is a malicious ID verification event or a safe ID verification event). The classification of the sample ID verification event of each training sample may be used as a ground truth identification result. The preliminary model to be trained may include one or more model parameters, such as the number (or count) of layers, the number (or count) of nodes, a loss function, or the like, or any combination thereof. Before training, the preliminary model may have one or more initial parameter values of the model parameter(s).
The training of the preliminary model may include one or more iterations to iteratively update the model parameters of the preliminary model based on the training sample(s) until a termination condition is satisfied in a certain iteration. Exemplary termination conditions may be that the value of a loss function obtained in the certain iteration is less than a threshold value, that a certain count of iterations has been performed, or that the loss function converges such that the difference between the values of the loss function obtained in a previous iteration and the current iteration is within a threshold value, etc. The loss function may be used to measure a discrepancy between a predicted identification result generated by the preliminary model in an iteration and the ground truth identification result. For example, for each training sample, the first sample feature of the sample user account and the second sample feature of the sample ID number may be inputted into the preliminary model, and the preliminary model may output a predicted identification result of whether the sample ID verification event of the training sample is a malicious ID verification event or a safe ID verification event. The loss function may be used to measure a difference between the predicted identification result and the ground truth identification result of each training sample. Exemplary loss functions may include a focal loss function, a log loss function, a cross-entropy loss, a Dice ratio, or the like. If the termination condition is not satisfied in the current iteration, the processing engine 112 may further update the preliminary model to be used in a next iteration according to, for example, a backpropagation algorithm. If the termination condition is satisfied in the current iteration, the processing engine 112 may designate the preliminary model in the current iteration as the risk assessment model.
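Merely for illustration, the iterative training described above may be sketched as follows, using a logistic model updated by gradient descent on a log (cross-entropy) loss until one of the exemplary termination conditions is satisfied; the training data and the hyperparameter values are illustrative assumptions.

```python
# Sketch: iterative parameter updates with three termination conditions
# (loss below a threshold, convergence, or a maximum iteration count).
import numpy as np

X = np.array([[1.0, 0.0], [9.0, 6.0], [1.0, 12.0], [2.0, 1.0]])  # sample features
y = np.array([0.0, 1.0, 1.0, 0.0])  # ground truth identification results
w, b, lr = np.zeros(X.shape[1]), 0.0, 0.1  # initial parameter values
prev_loss = np.inf

for iteration in range(10_000):                      # max-iteration condition
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))           # predicted results
    # Log (cross-entropy) loss between predicted and ground truth results.
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    if loss < 1e-3 or abs(prev_loss - loss) < 1e-9:  # loss / convergence
        break
    grad = p - y                                     # error propagated back
    w -= lr * (X.T @ grad) / len(y)                  # parameter update
    b -= lr * grad.mean()
    prev_loss = loss
```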
In some embodiments, the positive samples and the negative samples may be updated periodically or non-periodically, for example, every three months or every half year. After updating the positive samples and the negative samples, the processing engine 112 may retrain the model and obtain an updated risk assessment model.
In some embodiments, the second trained machine learning model may also be generated by a third-party device communicating with the identity risk assessment system 100.
In some embodiments, the first trained machine learning model may be obtained based on the positive samples and the negative samples using a training process similar to that of the second trained machine learning model.
In some embodiments, the trained risk assessment model (e.g., the first trained machine learning model and/or the second trained machine learning model) may be generated online or offline. In some embodiments, the processing engine 112 may generate the trained risk assessment model in advance and store the trained risk assessment model in a storage medium (e.g., the storage device 150, the storage 220 of the processing engine 112). When assessing the target ID verification event, the processing engine 112 may obtain the trained risk assessment model from the storage medium. In some embodiments, when assessing the target ID verification event, the processing engine 112 may generate the trained risk assessment model online. In some embodiments, the third-party device may generate the trained risk assessment model in advance and store the trained risk assessment model locally or in the storage medium (e.g., the storage device 150, the storage 220 of the processing engine 112) of the identity risk assessment system 100. When assessing the target ID verification event, the processing engine 112 may obtain the trained risk assessment model from the storage medium of the identity risk assessment system 100 or the third-party device. In some embodiments, when assessing the target ID verification event, the third-party device may generate the trained risk assessment model online and transmit the trained risk assessment model to the processing engine 112.
In response to determining that the target ID verification event is a malicious ID verification event, the process 500 may proceed to operation 508.
In response to determining that the target ID verification event is not a malicious ID verification event, e.g., the target ID verification event is a safe ID verification event, the process 500 may proceed to operation 510.
In 508, the processing engine 112 (e.g., the determination module 406) may block the first user account and/or the target ID number. For example, the processing engine 112 may prevent the first user account from continuing to submit an ID verification request, prevent any other user account from using the target ID number to submit an ID verification request, forbid the first user account from accessing the identity risk assessment system 100, allow the first user account to browse the contents of the identity risk assessment system 100 online but forbid it from using the services of the identity risk assessment system 100 (e.g., sending service requests), demote an evaluation of the first user account in the identity risk assessment system 100, etc. In some embodiments, if the processing engine 112 determines that the first user account is a malicious user account, the processing engine 112 may block the first user account, or further block the target ID number. In some embodiments, if the processing engine 112 determines that the target ID number is a malicious ID, the processing engine 112 may block the target ID number, or further block the first user account.
In some embodiments, the processing engine 112 may transmit, to the user terminal 140, a notification indicating that the target ID verification event is determined as a malicious ID verification event. In some embodiments, the processing engine 112 may transmit, to the user terminal 140, a notification indicating that the first user account and/or the target ID number is determined as malicious. In some embodiments, the processing engine 112 may transmit, to the user terminal 140, a notification indicating that the first user account and/or the target ID number will be or has been blocked.
In 510, the processing engine 112 (e.g., the determination module 406) may perform ID verification on the target ID number by comparing the ID number with a third-party device. In some embodiments, the processing engine 112 may perform ID verification further on a name, a date of birth, and/or profile information included in the target ID verification event by comparing the name, date of birth, and/or profile information with the third-party device. In some embodiments, the processing engine 112 may retrieve data stored in the third-party device to determine whether the target ID number is valid by determining whether the target ID number is included in the third-party device. In some embodiments, in response to determining that the target ID number is valid, the processing engine 112 may determine whether the name, date of birth, and/or profile information match the target ID number. For example, the processing engine 112 may identify profile information corresponding to the target ID number in the third-party device, and compare the identified profile information with the profile information in the target ID verification event. In some embodiments, in response to determining that the target ID number is valid, and/or that the name, date of birth, and/or profile information match the target ID number, the processing engine 112 may determine that ID verification is successful. In response to determining that the target ID number is not valid, and/or that the name, date of birth, and/or profile information do not match the target ID number, the processing engine 112 may determine that ID verification has failed. In some embodiments, the processing engine 112 may receive a service request inputted via an application logged on with the verified target ID number from a user terminal. The processing engine 112 may further provide data to the application executing on the user terminal to generate a presentation on a display of the user terminal. The presentation may include information associated with at least one service provider that satisfies the service request. For example, the processing engine 112 may receive a taxi-hailing request inputted via a taxi-hailing application logged on with a verified target ID number from a user terminal. The processing engine 112 may provide data to the taxi-hailing application executing on the user terminal to generate information associated with at least one driver that satisfies the taxi-hailing request and display the information on the user terminal.
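Merely for illustration, the comparison with the third-party device in operation 510 may be sketched as follows; the record store and the field names are illustrative assumptions, and an actual third-party device would be queried over a network rather than read from a local dictionary.

```python
# Sketch: the target ID number must be included in the third-party
# records, and the submitted name, date of birth, and/or profile
# information must match the record for verification to succeed.
from typing import Dict, Optional

# Hypothetical stand-in for data stored in the third-party device.
THIRD_PARTY_RECORDS: Dict[str, Dict[str, str]] = {
    "110101199001010011": {"name": "Alice", "date_of_birth": "1990-01-01"},
}

def verify_id(id_number: str, submitted: Dict[str, str]) -> bool:
    record: Optional[Dict[str, str]] = THIRD_PARTY_RECORDS.get(id_number)
    if record is None:  # the target ID number is not valid
        return False
    # Verification succeeds only if every submitted field matches.
    return all(record.get(k) == v for k, v in submitted.items())

ok = verify_id("110101199001010011",
               {"name": "Alice", "date_of_birth": "1990-01-01"})
```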
In some embodiments, the identity risk assessment system 100 may receive a vast number of target ID verification events in real-time. By applying the risk assessment model online, the identity risk assessment system 100 can identify malicious ID verification events from the vast number of target ID verification events automatically and in real-time. For a single target ID verification event, through the identity risk assessment system 100, a vast number of online behavior features of historical ID verification events can be quickly collected and analyzed (e.g., summarized) to obtain, automatically and in real-time, a first feature associated with the first user account and a second feature associated with the target ID number.
The carpooling service also allows a service request initiated by a service requester (e.g., a passenger) to be distributed in real-time and automatically to a vast number of individual service providers (e.g., taxi drivers) located a distance away from the service requester, and allows a plurality of service providers to respond to the service request simultaneously and in real-time. Therefore, through the Internet, the online-to-offline service systems may provide a much more efficient transaction platform for the service requesters and the service providers, who might never have met in a traditional pre-Internet transportation service system.
It should be noted that the above description regarding the process 500 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. For example, the processing engine 112 may determine whether the target ID verification event is a safe ID verification event. The determination of whether the target ID verification event is a safe ID verification event may be similar to the determination of whether the target ID verification event is a malicious ID verification event illustrated in operation 506. In some embodiments, the process 500 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed above.
FIG. 6A is a flowchart illustrating an exemplary process for obtaining positive samples according to some embodiments of the present disclosure. In some embodiments, the process 600-1 may be implemented in the identity risk assessment system 100 illustrated in FIG. 1. For example, the process 600-1 may be stored in a storage medium (e.g., the storage device 150, or the storage 220 of the processing engine 112) as a form of instructions, and invoked and/or executed by the server 110 (e.g., the processing engine 112 of the server 110, the processor 220 of the processing engine 112, or one or more modules in the processing engine 112 illustrated in FIG. 4). The operations of the illustrated process 600-1 presented below are intended to be illustrative. In some embodiments, the process 600-1 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 600-1 are performed, as illustrated in FIG. 6A and described below, is not intended to be limiting. In some embodiments, one or more operations of the process 600-1 may be performed to achieve at least part of operation 506 as described in connection with FIG. 5.
In 602, the processing engine 112 (e.g., the determination module 406) may identify historical malicious ID verification events. In some embodiments, a historical ID verification event may refer to an ID verification event that was previously received by the processing engine 112 from a sample user account and includes a sample ID number. In some embodiments, a historical malicious ID verification event may refer to a historical ID verification event of which the sample user account and/or the sample ID number was determined as malicious. The processing engine 112 may retrieve the historical malicious ID verification events from a storage device (e.g., the storage device 150, the storage 220, the storage 390, or an external source) .
In 604, for each of the historical malicious ID verification events, the processing engine 112 (e.g., the determination module 406) may construct a sample malicious ID verification pattern corresponding to the first feature and the second feature. The sample malicious ID verification pattern may include online behavior features (e.g., corresponding to the first feature and the second feature) of the sample user account and/or the sample ID number of the historical malicious ID verification event.
In 606, the processing engine 112 (e.g., the determination module 406) may annotate the sample malicious ID verification patterns as malicious.
In 608, the processing engine 112 (e.g., the determination module 406) may obtain the positive samples by including the annotated sample malicious ID verification patterns.
In some embodiments, the process 600-1 may also be performed by a third-party device communicating with the identity risk assessment system 100.
It should be noted that the above description regarding the process 600-1 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the process 600-1 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed above.
FIG. 6B is a flowchart illustrating an exemplary process for obtaining negative samples according to some embodiments of the present disclosure. In some embodiments, the process 600-2 may be implemented in the identity risk assessment system 100 illustrated in FIG. 1. For example, the process 600-2 may be stored in a storage medium (e.g., the storage device 150, or the storage 220 of the processing engine 112) as a form of instructions, and invoked and/or executed by the server 110 (e.g., the processing engine 112 of the server 110, the processor 220 of the processing engine 112, or one or more modules in the processing engine 112 illustrated in FIG. 4). The operations of the illustrated process 600-2 presented below are intended to be illustrative. In some embodiments, the process 600-2 may be accomplished with one or more additional operations not described, and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 600-2 are performed, as illustrated in FIG. 6B and described below, is not intended to be limiting. In some embodiments, one or more operations of the process 600-2 may be performed to achieve at least part of operation 506 as described in connection with FIG. 5.
In 610, the processing engine 112 (e.g., the determination module 406) may identify historical safe ID verification events. In some embodiments, a historical safe ID verification event may refer to a historical ID verification event of which the sample user account and/or the sample ID number was determined as safe. The  processing engine 112 may retrieve the historical safe ID verification events from a storage device (e.g., the storage device 150, the storage 220, the storage 390, or an external source) .
In 612, for each of the historical safe ID verification events, the processing engine 112 (e.g., the determination module 406) may construct a sample safe ID verification pattern corresponding to the first feature and the second feature. The sample safe ID verification pattern may include online behavior features (e.g., corresponding to the first feature and the second feature) of the sample user account and/or the sample ID number of the historical safe ID verification event.
In 614, the processing engine 112 (e.g., the determination module 406) may annotate the sample safe ID verification patterns as safe.
In 616, the processing engine 112 (e.g., the determination module 406) may obtain the negative samples by including the annotated sample safe ID verification patterns.
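Merely for illustration, processes 600-1 and 600-2 may be sketched together as follows: annotated positive samples are built from historical malicious ID verification events, and annotated negative samples from historical safe ID verification events. The event fields are illustrative assumptions of the sketch.

```python
# Sketch: construct sample verification patterns corresponding to the
# first feature and the second feature, annotate them, and collect them
# as positive (malicious) and negative (safe) samples.
from typing import Dict, List, Tuple

# Hypothetical historical events; each carries online behavior features
# of the sample user account and the sample ID number.
historical_malicious_events = [
    {"account_behavior": {"n_ids_submitted": 9}, "id_behavior": {"n_accounts": 12}},
]
historical_safe_events = [
    {"account_behavior": {"n_ids_submitted": 1}, "id_behavior": {"n_accounts": 1}},
]

def build_samples(events: List[Dict], annotation: str) -> List[Tuple[Dict, str]]:
    samples = []
    for event in events:
        # Construct the sample verification pattern corresponding to the
        # first feature and the second feature of the event.
        pattern = {
            "account_feature": event["account_behavior"],
            "id_feature": event["id_behavior"],
        }
        samples.append((pattern, annotation))  # annotate malicious / safe
    return samples

positive_samples = build_samples(historical_malicious_events, "malicious")
negative_samples = build_samples(historical_safe_events, "safe")
```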
In some embodiments, the process 600-2 may also be performed by a third-party device communicating with the identity risk assessment system 100.
It should be noted that the above description regarding the process 600-2 is merely provided for the purposes of illustration, and not intended to limit the scope of the present disclosure. For persons having ordinary skills in the art, multiple variations and modifications may be made under the teachings of the present disclosure. However, those variations and modifications do not depart from the scope of the present disclosure. In some embodiments, the process 600-2 may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed above.
Having thus described the basic concepts, it may be rather apparent to those skilled in the art after reading this detailed disclosure that the foregoing detailed disclosure is intended to be presented by way of example only and is not limiting. Various alterations, improvements, and modifications may occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested by this disclosure, and are within the spirit and scope of the exemplary embodiments of this disclosure.
Moreover, certain terminology has been used to describe embodiments of the present disclosure. For example, the terms “one embodiment,” “an embodiment,” and/or “some embodiments” mean that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Therefore, it is emphasized and should be appreciated that two or more references to “an embodiment” or “one embodiment” or “an alternative embodiment” in various portions of this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the present disclosure.
Further, it will be appreciated by one skilled in the art that aspects of the present disclosure may be illustrated and described herein in any of a number of patentable classes or contexts, including any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof. Accordingly, aspects of the present disclosure may be implemented as entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of software and hardware implementations that may all generally be referred to herein as a “unit,” “module,” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including electro-magnetic, optical, or the like, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that may communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable signal medium may be transmitted using any appropriate medium, including wireless, wireline, optical fiber cable, RF, or the like, or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, Python, or the like, conventional procedural programming languages, such as the "C" programming language, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, ABAP, dynamic programming languages such as Python, Ruby and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider), or in a cloud computing environment, or offered as a service such as a Software as a Service (SaaS).
Furthermore, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes and methods to any order except as may be specified in the claims. Although the above disclosure discusses through various examples what is currently considered to be a variety of useful embodiments of the disclosure, it is to be understood that such detail is solely for that purpose, and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the disclosed embodiments. For example, although the implementation of various components described above may be embodied in a hardware device, it may also be implemented as a software-only solution, e.g., an installation on an existing server or mobile device.
Similarly, it should be appreciated that in the foregoing description of embodiments of the present disclosure, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various embodiments. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, claimed subject matter may lie in less than all features of a single foregoing disclosed embodiment.

Claims (20)

  1. A method for identity risk assessment, implemented on a computing device having at least one processor and at least one storage device, the method comprising:
    receiving a target identity (ID) verification event of a first user account from a user terminal, the target ID verification event including a target ID number;
    obtaining, based on the target ID verification event, a first feature associated with the first user account and a second feature associated with the target ID number, the first feature associated with the first user account including a historical ID verification behavior feature of the first user account, the second feature associated with the target ID number including a historical ID verification behavior feature of one or more second user accounts using the target ID number;
    determining whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature; and
    in response to determining that the target ID verification event is a malicious ID verification event, blocking at least one of the first user account and the target ID number.
  2. The method of claim 1, wherein the target ID verification event includes submission of an ID verification request for verifying the first user account using the target ID number.
  3. The method of claim 1, wherein the target ID verification event includes information indicating that the first user account is likely to submit an ID verification request for verifying the first user account using the target ID number.
  4. The method of claim 1, wherein the first feature associated with the first user account includes at least one of
    a count of different ID numbers the first user account has submitted for ID verification,
    a count of different names the first user account has submitted for ID verification,
    a count of different dates of birth the first user account has submitted for ID verification,
    a count of times of successful ID verification of the first user account,
    a count of times of failed ID verification of the first user account, or
    reasons of the failed ID verification of the first user account.
  5. The method of claim 4, wherein the first feature associated with the first user account further includes at least one of
    a real-time interaction feature of the target ID verification event,
    a geographic location of the user terminal when the target ID verification event is received,
    historical geographic locations where the first user account was signed up,
    an Internet protocol (IP) address of the user terminal,
    historical IP addresses where the first user account was signed up, or
    historical suspicious activity the first user account performed.
  6. The method of claim 1, wherein the second feature associated with the target ID number includes at least one of
    a count of the one or more second user accounts that have submitted the target ID number for ID verification,
    a count of different second user accounts of the one or more second user accounts that have successful ID verification using the target ID number,
    a count of different second user accounts of the one or more second user accounts that have failed ID verification using the target ID number,
    a count of different names submitted, along with the target ID number, by the one or more second user accounts for ID verification,
    a count of times that the target ID number was submitted by the one or more second user accounts,
    a count of times of successful verification of the target ID number,
    a count of times of failed verification of the target ID number, or
    reasons of the failed ID verification of the target ID number.
  7. The method of claim 6, wherein the second feature associated with the target ID number further includes at least one of
    credit information of the target ID number, or
    criminal information of the target ID number.
  8. The method of claim 1, wherein the risk assessment model is provided based on positive samples and negative samples, the positive samples and the negative samples including first positive samples and first negative samples that correspond to the first feature, the positive samples and the negative samples including second positive samples and second negative samples that correspond to the second feature.
  9. The method of claim 8, wherein the positive samples and the negative samples are provided by:
    identifying historical malicious ID verification events and historical safe ID verification events;
    for each of the historical malicious ID verification events, constructing a sample malicious ID verification pattern corresponding to the first feature and the second feature;
    for each of the historical safe ID verification events, constructing a sample safe ID verification pattern corresponding to the first feature and the second feature;
    annotating the sample malicious ID verification patterns as malicious;
    annotating the sample safe ID verification patterns as safe;
    obtaining the positive samples by including the annotated sample malicious ID verification patterns; and
    obtaining the negative samples by including the annotated sample safe ID verification patterns.
  10. The method of claim 9, wherein
    the risk assessment model includes a risk assessment policy; and
    a determination as to whether the ID verification event is a malicious ID verification event is determined by comparing the first feature and the second feature with the risk assessment policy.
  11. The method of claim 10, wherein the risk assessment policy includes one or more assessment features corresponding to the first feature and the second feature, and one or more thresholds corresponding to the one or more assessment features.
  12. The method of claim 11, wherein the one or more assessment features and the one or more thresholds are provided using a classification algorithm to analyze the positive samples and the negative samples.
  13. The method of claim 11, wherein the one or more assessment features and the one or more thresholds are provided by generating a trained machine learning model based on the positive samples and the negative samples.
  14. The method of claim 9, wherein
    the risk assessment model includes a trained machine learning model; and
    a determination as to whether the ID verification event is a malicious ID verification event is determined by using the trained machine learning model to analyze the first feature and the second feature.
  15. The method of claim 14, wherein the trained machine learning model includes a decision structure for a malicious ID verification pattern trained based on the positive samples and the negative samples, the decision structure specifying that the target ID verification event, if matching the malicious ID verification pattern, is indicative of a malicious ID verification event.
  16. The method of claim 1, wherein a determination that the target ID verification event is a malicious ID verification event includes a determination that the first user account is a malicious user account or a determination that the target ID number is a malicious ID.
  17. The method of claim 1, further comprising:
    in response to determining that the target ID verification event is a malicious ID verification event, transmitting, to the user terminal, a notification indicating that the target ID verification event is determined as a malicious ID verification event.
  18. The method of claim 1, further comprising:
    in response to determining that the target ID verification event is a safe ID verification event, performing ID verification on the target ID number by comparing the ID number with a third-party device.
  19. A system comprising:
    one or more service requester terminals registered with an online transportation service platform, each of the one or more service requester terminals being used by a service requester to send a transportation order via the online transportation service platform;
    one or more service provider terminals registered with the online transportation service platform, each of the one or more service provider terminals being used by a service provider to provide an online transportation service; and
    a server implementing the online transportation service platform and being configured to allocate the transportation orders of the one or more service requester terminals to the one or more service provider terminals, wherein when a target identity (ID) number logs on a customer application executing on a user terminal to interact with the online transportation service platform, the server performs a verification process including:
    receiving a target identity (ID) verification event of a first user account from the user terminal, the target ID verification event including the target ID number;
    obtaining, based on the target ID verification event, a first feature associated with the first user account and a second feature associated with the target ID number, the first feature associated with the first user account including a historical ID verification behavior feature of the first user account, the second feature associated with the target ID number including a historical ID verification behavior feature of one or more second user accounts using the target ID number;
    determining whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature; and
    in response to determining that the target ID verification event is a malicious ID verification event, blocking at least one of the first user account and the target ID number.
  20. A system, comprising:
    one or more network interfaces configured to communicate with user terminals registered with an online transportation service platform; and
    logic circuits coupled to the one or more network interfaces, wherein during operation the logic circuits:
    verify a target identity (ID) number logging on a customer application executing on a first user terminal by performing a verification process including:
    receiving, via the one or more network interfaces, a target identity (ID) verification event of a first user account from the first user terminal, the target ID verification event including the target ID number;
    obtaining, from the transportation service platform and based on the target ID verification event, a first feature associated with the first user account and a second feature associated with the target ID number, the first feature associated with the first user account including a historical ID verification behavior feature of the first user account, the second feature associated with the target ID number including a historical ID verification behavior feature of one or more second user accounts using the target ID number;
    determining whether the target ID verification event is a malicious ID verification event using a risk assessment model to analyze the first feature and the second feature; and
    in response to determining that the target ID verification event is a malicious ID verification event, blocking at least one of the first user account and the target ID number; or
    in response to determining that the target ID verification event is not a malicious ID verification event,
    verifying the target ID number;
    receiving, from the first user terminal, a service request inputted via the customer application logged on with the verified target ID number;
    providing data to the customer application executing on the first user terminal to generate a presentation on a display of the first user terminal, the presentation including information associated with at least one service provider that satisfies the service request.
PCT/CN2020/137874 2020-12-21 2020-12-21 Systems and methods for identity risk assessment WO2022133632A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/137874 WO2022133632A1 (en) 2020-12-21 2020-12-21 Systems and methods for identity risk assessment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/137874 WO2022133632A1 (en) 2020-12-21 2020-12-21 Systems and methods for identity risk assessment

Publications (1)

Publication Number Publication Date
WO2022133632A1 true WO2022133632A1 (en) 2022-06-30

Family

ID=82157039

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/137874 WO2022133632A1 (en) 2020-12-21 2020-12-21 Systems and methods for identity risk assessment

Country Status (1)

Country Link
WO (1) WO2022133632A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015109947A1 (en) * 2014-01-24 2015-07-30 Tencent Technology (Shenzhen) Company Limited Method and system for verifying an account operation
CN106549902A (en) * 2015-09-16 2017-03-29 阿里巴巴集团控股有限公司 A kind of recognition methods of suspicious user and equipment
CN107196889A (en) * 2016-03-14 2017-09-22 深圳市深信服电子科技有限公司 The detection method and device of corpse account
US20180077192A1 (en) * 2015-05-29 2018-03-15 Alibaba Group Holding Limited Account theft risk identification
CN109660513A (en) * 2018-11-13 2019-04-19 微梦创科网络科技(中国)有限公司 A kind of method and device based on Storm cluster identification problem account



Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20966213

Country of ref document: EP

Kind code of ref document: A1