CN111475587B - Risk identification method and system - Google Patents
- Publication number
- CN111475587B (application CN202010440549.8A)
- Authority
- CN
- China
- Prior art keywords
- data
- layer
- precision
- target
- layers
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/28—Databases characterised by their database models, e.g. relational or object models
- G06F16/284—Relational databases
- G06F16/285—Clustering or classification
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0635—Risk analysis of enterprise or organisation activities
Abstract
This specification provides a risk identification method and system in which a risk identification model comprises a plurality of data layers and a plurality of classifiers, the classifiers being connected to the respective data layers and representing a plurality of precision gears. The method and system input target data into the risk identification classification model and set a target precision gear based on the current target business data, so that the output of the target classifier corresponding to the target precision gear serves as the classification result of the target data. By realizing multi-precision data classification within a single model and adjusting the selected classifier according to the target business data, the method and system simplify the calculation process for simple samples and strengthen the calculation precision for complex samples, thereby saving calculation time and improving calculation efficiency.
Description
Technical Field
The present disclosure relates to the field of internet technologies, and in particular, to a method and system for multi-precision data classification.
Background
With the rapid development of artificial intelligence (AI) technology, methods that perform data calculation and data classification based on neural network models are increasingly applied across network platforms. However, different business scenarios impose different requirements on data calculation and classification. For example, a high-precision recognition request calls for a high-precision neural network model, while a low-precision recognition request should be served by a lower-precision model. As a result, a single platform often has to build multiple neural network models to classify data across its business scenarios.
To improve the computing efficiency of the computer, a risk identification method and system are needed that can rapidly classify data across different business scenarios.
Disclosure of Invention
The present specification provides a more efficient risk identification model, method and system.
The multi-precision data classification model is constructed based on a neural network model. Classifier outlets are arranged at a plurality of different depths of the model; each outlet represents a precision gear of data classification, and the deeper the outlet, the higher its precision gear. Target data is input into the multi-precision data classification model, and the classifier outlet of one precision gear is selected, based on the current service data, to produce the classification result of the target data. In this way a single model provides data classification at multiple precisions: the calculation process is simplified for simple samples and the calculation precision is strengthened for complex samples, saving calculation time and improving calculation efficiency.
In a first aspect, the present specification provides a multi-precision data classification model comprising a plurality of data layers and a plurality of classifiers. The plurality of data layers are connected in a preset manner and configured to process input data according to a predetermined scale. Each of the plurality of classifiers receives the output data of one of the plurality of data layers and is configured to classify the input data based on that output data, the plurality of classifiers corresponding to a plurality of precision gears. The multi-precision data classification model is configured to classify the input data according to a corresponding precision gear, and is constructed based on a neural network model.
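Claim language aside, the structure described in this first aspect — serially connected data layers, each feeding its own classifier exit that acts as one precision gear — can be sketched in a few lines. The class name, toy layers, and gear indices below are illustrative assumptions, not the patent's implementation:

```python
# Hypothetical sketch of the first-aspect structure: a chain of data
# layers with one classifier ("exit") per layer; each exit corresponds
# to one precision gear. Running to a deeper gear means more layers
# are computed before classification.

class MultiPrecisionModel:
    def __init__(self, layers, classifiers):
        assert len(layers) == len(classifiers)
        self.layers = layers            # data layers, connected in series
        self.classifiers = classifiers  # one classifier per precision gear

    def classify(self, x, gear):
        """Run only the layers up to the requested precision gear and
        return that gear's classifier output."""
        for layer in self.layers[:gear + 1]:
            x = layer(x)
        return self.classifiers[gear](x)

# Toy instantiation: each "layer" doubles the value, each "classifier"
# thresholds it. Deeper gears see more processing before deciding.
layers = [lambda v: v * 2, lambda v: v * 2, lambda v: v * 2]
classifiers = [lambda v: v > 4, lambda v: v > 8, lambda v: v > 16]
model = MultiPrecisionModel(layers, classifiers)
```

A lower gear stops after fewer layers, trading precision for speed, which is the tradeoff the aspect describes.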
In some embodiments, the multiple data layers are connected into a target network, the multiple data layers including a root node layer and multiple child node layers connected in series with the root node layer.
In some embodiments, each of the root node layer and the plurality of sub-node layers includes a transfer function, and each of the plurality of sub-node layers uses output data of a data layer of an adjacent previous layer as input data of a current sub-node layer, and inputs the input data of the current sub-node layer into the current sub-node layer to calculate to obtain the output data of the current sub-node layer.
In some embodiments, the plurality of data layers further includes a plurality of first splicing layers. Each first splicing layer connects two adjacent sub-node layers in series, splices the output data of the root node layer and of all sub-node layers preceding the current first splicing layer as the output data of the current first splicing layer, and inputs that output data into the adjacent sub-node layer of the next layer.
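A minimal sketch of the first-splice behaviour just described — concatenating the root node layer's output with the outputs of all preceding sub-node layers before feeding the next layer. Feature maps are modelled here as plain lists, and the names and values are hypothetical:

```python
# Illustrative first-splicing layer: concatenate the root layer's
# features with every earlier sub-node layer's features.

def first_splice(feature_lists):
    """Concatenate feature vectors from the root layer and all
    sub-node layers preceding this splice layer."""
    out = []
    for features in feature_lists:
        out.extend(features)
    return out

root_out = [0.1, 0.2]   # root node layer output
sub1_out = [0.3]        # first sub-node layer output
sub2_out = [0.4, 0.5]   # second sub-node layer output

# Input to the sub-node layer that follows the second splice layer:
spliced = first_splice([root_out, sub1_out, sub2_out])
```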
In some embodiments, each classifier of the plurality of classifiers receives output data of one of the plurality of first splice layers.
In some embodiments, the transfer function comprises: a hierarchical transfer function configured to convolve data input to the hierarchical transfer function with a first convolution kernel in a first step size.
In some embodiments, the predetermined scale comprises a plurality of scales.
In some embodiments, the output data of the adjacent previous data layer includes output data of multiple scales, each of the multiple sub-node layers further includes a second splicing layer configured to splice data of different scales, and the inputting the input data of the current sub-node layer into the current sub-node layer to calculate to obtain the output data of the current sub-node layer includes: inputting the output data of the multiple scales into the transfer function for calculation; and inputting the results of the multiple scales calculated by the transfer function into the second splicing layer for splicing to obtain the output data of the current sub-node layer.
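The per-scale path described in this embodiment — run each scale's data through the transfer function, then splice the per-scale results into the current sub-node layer's output — can be sketched as follows. The doubling "transfer function" is a stand-in for illustration, not the patent's actual function:

```python
# Hypothetical sub-node layer with a second splicing layer: the
# transfer function is applied to each scale's data separately, and
# the per-scale results are spliced into one output.

def transfer(features):
    """Placeholder hierarchical transfer function (assumed)."""
    return [f * 2.0 for f in features]

def sub_node_layer(multi_scale_inputs):
    """Apply the transfer function per scale, then splice the
    per-scale results (the second splicing layer's role)."""
    per_scale = [transfer(scale) for scale in multi_scale_inputs]
    spliced = []
    for result in per_scale:
        spliced.extend(result)
    return spliced

# Two scales of input data from the adjacent previous layer:
out = sub_node_layer([[1.0, 2.0], [3.0]])
```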
In some embodiments, the transfer function further comprises a scale transfer function for transfer of data between different scales, configured to convolve data input to the scale transfer function with a second convolution kernel in a second step size.
In a second aspect, the present specification provides a method of multi-precision data classification, comprising: loading a multi-precision data classification model described in the specification; acquiring target data and target service data, and setting target precision gears of target data classification based on the target service data, wherein the plurality of precision gears comprise the target precision gears; inputting the target data as input data into the multi-precision data classification model; and outputting a classification result of the target data through a target classifier of the multi-precision data classification model, wherein the target classifier represents the target precision gear, and the plurality of classifiers comprise the target classifier.
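The second-aspect flow — set a target precision gear based on the target service data, then take the corresponding target classifier's output as the classification result — might look like the sketch below. The traffic-based gear-selection rule is an assumption for illustration only; the patent does not specify how service data maps to a gear:

```python
# Hedged sketch of the second-aspect method. select_gear's rule
# (heavier traffic -> lower, faster gear) is hypothetical.

def select_gear(service_data, num_gears=3):
    """Map target service data to a precision gear: heavy traffic
    picks the fastest gear, light traffic the most precise one."""
    traffic = service_data.get("traffic", 0)
    if traffic > 1000:
        return 0                 # promotion peak: fastest, coarsest gear
    if traffic > 100:
        return num_gears // 2    # moderate load: middle gear
    return num_gears - 1         # quiet period: most precise gear

def classify(target_data, service_data, classifiers):
    """Loaded model's per-gear classifier exits stand in as plain
    callables; output the target classifier's result."""
    gear = select_gear(service_data, num_gears=len(classifiers))
    return classifiers[gear](target_data)

# Toy classifiers standing in for the model's per-gear exits.
classifiers = [lambda x: x > 0.5, lambda x: x > 0.7, lambda x: x > 0.9]
result = classify(0.8, {"traffic": 500}, classifiers)
```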
In a third aspect, the present specification provides a system for multi-precision data classification, comprising at least one storage medium storing a multi-precision data classification model as described herein, and at least one processor, the at least one storage medium comprising at least one instruction set for multi-precision data classification; the at least one processor is communicatively coupled to the at least one storage medium, wherein when the system is operating, the at least one processor reads the at least one instruction set and performs a method of multi-precision data classification according to an indication of the at least one instruction set, the method of multi-precision data classification comprising: loading the multi-precision data classification model; acquiring target data and target service data, and setting target precision gears of target data classification based on the target service data, wherein the plurality of precision gears comprise the target precision gears; inputting the target data as input data into the multi-precision data classification model; and outputting a classification result of the target data through a target classifier of the multi-precision data classification model, wherein the target classifier represents the target precision gear, and the plurality of classifiers comprise the target classifier.
According to the above technical solutions, the multi-precision data classification model comprises a plurality of data layers and a plurality of classifiers, the classifiers being connected to the respective data layers and representing a plurality of precision gears. The method and system input the target data into the multi-precision data classification model and set a target precision gear based on the current target service data, so that the output of the target classifier corresponding to that gear serves as the classification result of the target data. Multi-precision data classification is thus realized within a single model, with the model's calculation outlet set by the target service data: the calculation process is simplified for simple samples and the calculation precision is strengthened for complex samples, thereby saving calculation time and improving calculation efficiency.
Additional functions of the method and system for multi-precision data classification provided herein will be set forth in part in the description that follows, and in part will be apparent to those of ordinary skill in the art from that description. The inventive aspects of the methods, systems, and storage media for multi-precision data classification provided herein may be fully explained by practicing or using the methods, devices, and combinations described in the detailed examples below.
Drawings
To illustrate the technical solutions in the embodiments of this specification more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described here cover only some embodiments of this specification; a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 shows a schematic diagram of a system for multi-precision data classification provided according to an embodiment of the present description;
FIG. 2 shows a schematic diagram of a server for multi-precision data classification provided according to an embodiment of the present description;
FIG. 3 shows a schematic diagram of a multi-precision data classification model provided according to an embodiment of the present description;
FIG. 4A shows a schematic diagram of a multi-precision data classification model provided according to an embodiment of the present description;
FIG. 4B shows a schematic diagram of a multi-precision data classification model provided according to an embodiment of the present description;
FIG. 4C shows a schematic diagram of a multi-precision data classification model provided according to an embodiment of the present description;
FIG. 5 shows a data flow diagram of a multi-precision data classification model provided according to an embodiment of the present description; and
FIG. 6 shows a flowchart of a method of multi-precision data classification provided according to an embodiment of the present description.
Detailed Description
The following description is presented to enable one of ordinary skill in the art to make and use the invention, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the disclosure. Thus, the present description is not limited to the embodiments shown, but is to be accorded the widest scope consistent with the claims.
The terminology used herein is for the purpose of describing particular example embodiments only and is not intended to be limiting. For example, as used herein, the singular forms "a", "an" and "the" include plural referents unless the context clearly dictates otherwise. The terms "comprises," "comprising," "includes," and/or "including," when used in this specification, are taken to specify the presence of stated integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
These and other features of the present specification, as well as the operation and function of the related elements of structure, the combination of parts, and economies of manufacture, may be better understood in view of the following description with reference to the accompanying drawings, all of which form a part of this specification. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended to limit the scope of the specification. It should also be understood that the drawings are not drawn to scale.
The flowcharts used in this specification illustrate operations implemented by systems according to some embodiments in this specification. It should be clearly understood that the operations of the flow diagrams may be implemented out of order. Rather, operations may be performed in reverse order or concurrently. Further, one or more other operations may be added to the flowchart. One or more operations may be removed from the flowchart.
With the rapid development of artificial intelligence technology, data calculation and data classification based on neural network models are increasingly applied across network platforms; data classification in particular is widely used in internet technology. Data classification groups together data sharing some common attribute or feature and distinguishes data by the attributes or features of its category. For example, a shopping platform often needs a commodity classification model that classifies commodities by their descriptions, and some video platforms need a video classification model that classifies videos by their content. As another example, to improve its security and reliability, an internet financial platform generally establishes a risk control system for identifying and deciding on various risks (such as spam registration, check-in abuse, fraud, cash-out, false transactions, money laundering, credit fraud, transaction fraud, card theft, account theft, marketing fraud, and gambling) and for determining the risk level of an operation.
For example, in an internet financial platform, a risk control system is typically built on a neural network model. Such a model is often a "static" network: when it is deployed online to prevent and control risk, a suitable model threshold is selected on the model's performance evaluation index, according to the disturbance rate and case coverage expected by the business, and operations are risk-evaluated against that threshold. For example, a model threshold of 0.9 points may correspond to a disturbance rate of one in ten million and cover 80% of cases. In practice, if the model's output score is greater than or equal to 0.9, the operation is considered risky and must be controlled; if the score is less than 0.9, the operation is considered risk-free and is released. "Static" means that, once the disturbance rate and case coverage are fixed, the model's risk identification result is fixed as well: the model cannot perform different risk identification calculations for operations in different business scenarios. Yet different business scenarios routinely arise when an internet financial system actually operates. Taking Taobao™ as an example, the platform's daily operation spans many different modes, such as a daily mode, a Double Eleven promotion mode, an O2O mode, a special public-opinion mode, a social-responsibility mode, and so on. Each mode is further divided into several gears according to the traffic volume and risk pattern of different time periods within a day. Different operation modes often require different risk control modes and different gears.
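The "static" thresholding described above amounts to a single fixed cut-off on the model's output score (0.9 in the example): at or above the threshold the operation is treated as risky and controlled, below it the operation is released. A minimal sketch, with the function name and return labels chosen for illustration:

```python
# Static risk decision: one fixed threshold on the model's output
# score, regardless of business scenario. Scores at or above the
# threshold are controlled; scores below it are released.

RISK_THRESHOLD = 0.9

def static_risk_decision(model_score, threshold=RISK_THRESHOLD):
    """Return 'control' for risky operations, 'release' otherwise."""
    return "control" if model_score >= threshold else "release"
```

The limitation the passage criticizes is visible in the sketch: changing the disturbance-rate/coverage tradeoff requires replacing the threshold (or the whole model), not adapting per sample.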
Data classification precision, risk management intensity, and business targets (disturbance rate and case coverage) all differ between risk control modes and between gears. Switching modes and gears adjusts classification precision and control intensity, balancing risk against user experience. Different risk control modes and gears correspond to multiple different neural network models. For example, during the Double Eleven promotion the transaction volume surges greatly, and the neural network model must be adjusted to keep the service running normally, reduce the time spent on risk identification, and still intercept potential risks. One neural network model can serve only one mode or gear; when the business targets (disturbance rate and case coverage) change, the only recourse is to switch models in the background. Moreover, once the risk control mode and gear are set, the gear cannot change with the complexity of the sample. This not only occupies considerable space but also degrades the computing efficiency of the computer: for simple samples, time and effort may be wasted; for complex samples, the computational precision may be insufficient.
Precision here refers to the accuracy of the data calculation. In a neural network model, as the depth of the network increases, the model can extract higher-level, more abstract features of the input data. Such abstract features are very useful for data classification, so the deeper the network, the higher the precision of its classification. The higher the precision gear, the more accurate the calculation (i.e., the smaller the disturbance rate and the larger the fraction of cases that can be covered); the cost is that a higher precision gear means more complex data calculation, a larger computational load, and longer time consumption. The lower the precision gear, the simpler the calculation, the smaller the load and the shorter the time, but the lower the precision of the result.
This specification provides a multi-precision data classification model, method and system. The model is constructed based on a neural network, with classifier outlets arranged at a plurality of different depths; each outlet represents a precision gear of data classification, and the deeper the outlet, the higher its precision gear. The method and system input the target data into the multi-precision data classification model and select the classifier outlet of one precision gear, based on the current service data, to produce the classification result of the target data. Data classification at multiple precisions is thus achieved within one model, and the precision gear can be set according to the service data to obtain the corresponding classifier's result: the calculation process is simplified for samples at low precision gears and the calculation precision is strengthened for samples at high precision gears, saving calculation time and improving calculation efficiency. The multi-precision data classification model, method and system provided in this specification can be applied to many data classification scenarios, such as the commodity classification, video classification, and risk identification scenarios mentioned above. For ease of description, risk identification for an internet financial platform is used as the running example below.
In one aspect, the present description provides a system for multi-precision data classification (hereinafter referred to as system). In a second aspect, the present specification provides a multi-precision classification model. In a third aspect, the present specification describes a method of multi-precision data classification from a server side. Fig. 1 shows a schematic diagram of a system 100 for multi-precision data classification. System 100 may include server 200, client 300, network 120, and database 150.
The server 200 may store data or instructions for performing the method of multi-precision data classification described herein and may perform the data and/or instructions. Server 200 may also store a multi-precision data classification model as described herein.
As shown in fig. 1, the client 300 may be a smart device hosting a target application (target APP). The client 300 is communicatively connected to the server 200. The target user 110 is a user of the client 300. In some embodiments, the client 300 may be installed with one or more applications (APPs). An APP can provide the target user 110 with an interface and the ability to interact with the outside world via the network 120. APPs include, but are not limited to: chat APPs, shopping APPs, video APPs, financial APPs, and so on, for example Alipay™, Taobao™, JD™, and/or the APPs of financial service institutions such as banks and financial-product providers. The target APP is the client APP corresponding to the server 200 that provides the target service, such as the Taobao™ APP, the Alipay™ APP, or a bank's APP. For example, when the server 200 is a bank's system, the target APP is that bank's client APP; when the server 200 is the Alipay™ system, the target APP is the Alipay™ client APP; and when the server 200 is the Taobao™ system, the target APP is the Taobao™ client APP. In some embodiments, the client 300 may include a mobile device 300-1, a tablet computer 300-2, a notebook computer 300-3, a built-in device of a motor vehicle 300-4, or the like, or any combination thereof. In some embodiments, the mobile device 300-1 may comprise a smart home device, a smart mobile device, or the like, or any combination thereof. In some embodiments, the smart home device may include a smart television, a desktop computer, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, or the like, or any combination thereof.
In some embodiments, the built-in devices in the motor vehicle 300-4 may include an on-board computer, an on-board television, and the like. In some embodiments, the client 300 may be a device with positioning technology for locating the position of the client 300.
The network 120 may facilitate the exchange of information and/or data. As shown in fig. 1, the clients 300, servers 200, and databases 150 may be connected to the network 120 and communicate information and/or data with each other via the network 120. For example, the server 200 may obtain service requests and/or operational data from the clients 300 via the network 120. In some embodiments, the network 120 may be any type of wired or wireless network, or a combination thereof. For example, the network 120 may include a cable network, a wired network, a fiber optic network, a telecommunications network, an intranet, the internet, a Local Area Network (LAN), a Wide Area Network (WAN), a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth network, a ZigBee network, a Near Field Communication (NFC) network, or the like. In some embodiments, the network 120 may include one or more network access points. For example, the network 120 may include wired or wireless network access points, such as base stations and/or internet switching points 120-1, 120-2, through which one or more components of the client 300, server 200, and database 150 may connect to the network 120 to exchange data and/or information.
Taking the risk identification scenario as an example: the target user 110 performs a target operation behavior (e.g., a transaction) in the target APP (e.g., the Taobao™ client APP), and the target operation behavior is transmitted to the server 200 through the network 120. The server 200 executes the instructions of the method of multi-precision data classification stored in the server 200 and/or the database 150, performs the data classification calculation of the corresponding precision gear on the target operation behavior, based on the target data of the target user 110 and the service scenario at the time the behavior occurs, obtains a classification result, and performs risk management and control based on that result. For example, when the classification result is risky, the target operation behavior is interrupted; when the classification result is risk-free, the target operation behavior may continue.
Fig. 2 shows a schematic diagram of a server 200 for multi-precision data classification. The server 200 may perform the method of multi-precision data classification described in this specification. The method of multi-precision data classification is described elsewhere in this specification. For example, the method of multi-precision data classification is presented in the description of FIG. 6.
As shown in fig. 2, the server 200 includes at least one storage medium 230 and at least one processor 220. In some embodiments, server 200 may also include a communication port 250 and an internal communication bus 210. Also, server 200 may also include I/O component 260.
The I/O component 260 supports input/output between the server 200 and other components (e.g., the client 300).
The communication port 250 is used for data communication between the server 200 and the outside world. For example, the server 200 may connect to the network 120 via the communication port 250 to receive information about the target user 110's operational behavior on an APP (e.g., Alipay™ and/or Taobao™).
The at least one processor 220 is communicatively coupled to the at least one storage medium 230 via an internal communication bus 210. The at least one processor 220 is configured to execute the at least one instruction set. When the system 100 is running, the at least one processor 220 reads the at least one instruction set and performs the method of multi-precision data classification provided herein as indicated by that instruction set. The processor 220 may perform all of the steps involved in the method of multi-precision data classification. The processor 220 may be in the form of one or more processors. In some embodiments, the processor 220 may include one or more hardware processors, such as microcontrollers, microprocessors, Reduced Instruction Set Computers (RISC), Application-Specific Integrated Circuits (ASIC), Application-Specific Instruction-set Processors (ASIP), Central Processing Units (CPU), Graphics Processing Units (GPU), Physics Processing Units (PPU), microcontroller units, Digital Signal Processors (DSP), Field-Programmable Gate Arrays (FPGA), Advanced RISC Machines (ARM), Programmable Logic Devices (PLD), any circuit or processor capable of executing one or more functions, or the like, or any combination thereof. For illustrative purposes only, one processor 220 is depicted in the server 200 in this description. However, it should be noted that the server 200 may also include multiple processors, and thus the operations and/or method steps disclosed in this specification may be performed by one processor as described herein, or jointly by multiple processors.
For example, if the processor 220 of the server 200 performs steps a and B in this specification, it should be understood that steps a and B may also be performed by two different processors 220 in combination or separately (e.g., a first processor performs step a, a second processor performs step B, or the first and second processors perform steps a and B together).
Although the above structure describes the server 200, the structure is also applicable to the client 300.
The multi-precision data classification model may be a computational model that classifies the input data 320. The multi-precision data classification model may be constructed based on a neural network (NN) model, such as a convolutional neural network (CNN) model, a deep neural network (DNN) model, or the like. For ease of illustration, a convolutional neural network (CNN) model is described herein as an example. FIG. 3 shows a schematic structural diagram of a multi-precision data classification model 600 provided according to an embodiment of the present disclosure. As shown in FIG. 3, the multi-precision data classification model 600 may include multiple data layers 620 and a plurality of classifiers 660.
The multiple data layers 620 may be connected in a preset manner and configured to process the input data at a predetermined scale. In a convolutional neural network model, the multiple data layers 620 may be a plurality of convolutional layers and/or pooling layers. The preset manner may be a serial manner. The predetermined scale may be one scale or a plurality of different scales. The scale refers to the size of the data; different scales are obtained when the input data 320 is processed by convolution kernels of different sizes. For example, convolving 20×20 data with a 3×3 convolution kernel yields 18×18 data, while a 5×5 convolution kernel yields 16×16 data. Convolving the input data 320 with convolution kernels of different sizes can extract feature information of different scales. Multi-scale feature processing can therefore extract more comprehensive feature information: not only the global overall information, but also the local detailed information.
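The size arithmetic in the example above can be checked with a short NumPy sketch; `conv2d_valid` is a hypothetical helper, not part of the patent, performing the "valid" strided convolution that the data layers described here would apply:

```python
import numpy as np

def conv2d_valid(x, kernel, stride=1):
    """'Valid' 2-D convolution: slide the kernel over x with the given
    stride and sum the element-wise products at each position."""
    k = kernel.shape[0]
    windows = np.lib.stride_tricks.sliding_window_view(x, (k, k))[::stride, ::stride]
    return np.einsum("ijkl,kl->ij", windows, kernel)

x = np.random.rand(20, 20)            # 20x20 input data
a = conv2d_valid(x, np.ones((3, 3)))  # 3x3 kernel -> 18x18 feature map
b = conv2d_valid(x, np.ones((5, 5)))  # 5x5 kernel -> 16x16 feature map
print(a.shape, b.shape)               # (18, 18) (16, 16)
```

A stride greater than 1 (e.g., `conv2d_valid(x, np.ones((3, 3)), stride=2)`) shrinks the output further, which is how a transfer function can also move data to a coarser scale.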
The input data 320 refers to the data to be classified that is input into the multi-precision data classification model 600. The input data 320 differs across network platforms. For example, in the Taobao™ system, when it is desired to classify merchandise, the input data 320 may be a description of the merchandise, a photograph of the merchandise, and the like. The input data 320 may also be transaction data on the Taobao™ system; for example, during Christmas or the Double Eleven shopping festival, the Taobao™ system has a large amount of transaction data, which it may need to sort and/or analyze. For another example, in a video platform, the input data 320 may be video data when it is desired to categorize videos. For yet another example, in an Internet financial platform such as the Alipay™ system, when the multi-precision data classification model 600 is a risk recognition model, the input data 320 may be the target operation behavior data of the target user 110 at the current moment, the behavior feature data and attribute feature data of the target user 110 within a preset time window, and so on. The input data 320 may be a matrix or a vector, and may be one-dimensional or multi-dimensional.
The classifier 660 uses a function or model that maps the data entered into the classifier 660 to one of a set of given classes to yield a classification result. Each classifier 660 of the plurality of classifiers 660 is connected to one of the multiple data layers 620 and receives the output data of the data layer 620 connected thereto; meanwhile, each classifier 660 may be configured to classify the input data 320 based on the output data of the data layer 620 connected thereto and output the classification result. Because different classifiers 660 are connected to different data layers 620, the data they receive is calculated at different depths and therefore has different accuracies. The classifiers 660 connected to different data layers 620 thus correspond to different precision gears at which the multi-precision data classification model 600 classifies the input data 320, and the plurality of classifiers 660 correspond to a plurality of different precision gears. The multi-precision data classification model 600 may be configured to classify the input data 320 at the corresponding precision gear. In some embodiments, the output of one classifier 660 represents the classification result of one precision gear. The plurality of precision gears may be divided based on the service data. The service data is the data corresponding to a service scenario; different service scenarios have corresponding service data. For example, in a risk identification scenario, the service scenario may be a different operation mode, such as a daily mode, a Double Eleven mode, an O2O mode, a special public opinion mode, or a social responsibility mode; the service scenario may also be a different time window, such as the peak transaction time window from 0:00 to 10:00 at night on the day of Double Eleven.
The service data may be the data stream density of platform transaction behavior, the number of online users, a timestamp, and the like. The service data may be a range of data, and dividing precision gears based on the service data may mean dividing different precision gears according to different service data ranges. For example, when the data stream density of the transaction behavior is between A and B, the precision gear of risk identification is divided into the first precision gear; when the data stream density of the transaction behavior is between B and C, the precision gear of risk identification is divided into the second precision gear; and so on. Alternatively, when the timestamp is between 10:00 and 12:00, the precision gear of risk identification is divided into the first precision gear; when the timestamp is between 12:00 and 14:00, the precision gear of risk identification is divided into the second precision gear; and so on. Or, since the density of transaction data streams on the Taobao™ platform on the day of Double Eleven is obviously higher than on other days, the precision gear of risk identification on that day may be divided into the first precision gear; on the remaining days, when the density of the transaction data stream on the Taobao™ platform is normal, the precision gear of risk identification is divided into the second precision gear; and so on. Only a few examples of dividing precision gears according to service data are given in this specification; the remaining manners are not repeated one by one. As previously described, as the depth of the multi-precision data classification model 600 increases, the extracted features become more abstract, the data calculation becomes more accurate, and the corresponding precision gear becomes higher. Thus, the deeper the data layer 620 connected to a classifier 660 is in the multi-precision data classification model 600, the higher the precision gear corresponding to that classifier 660.
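The range-based division of precision gears described above can be sketched as a simple lookup; the density thresholds below are purely illustrative placeholders for the A/B/C boundaries, which the specification leaves unspecified:

```python
# Hypothetical density bands (low, high, gear); the patent leaves the
# concrete A/B/C values to the deployment.
DENSITY_BANDS = [(0, 1000, 1), (1000, 5000, 2), (5000, float("inf"), 3)]

def precision_gear(density):
    """Map the data stream density of transaction behavior to a
    precision gear, one gear per configured density range."""
    for low, high, gear in DENSITY_BANDS:
        if low <= density < high:
            return gear
    raise ValueError("density outside configured ranges")

print(precision_gear(500))   # -> 1
print(precision_gear(7200))  # -> 3
```

The same table-driven pattern covers the timestamp-based division: replace the density bands with time-of-day ranges.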
As shown in FIG. 3, when performing data classification on the input data 320, the precision gear of data classification corresponding to the current input data 320 is set based on the service data of the input data 320; the input data 320 is input into the multi-precision data classification model 600, and the data classification result of the classifier 660 corresponding to that precision gear is obtained and output as the output result. The multi-precision data classification model 600 is provided with a plurality of classifier 660 outlets and can perform classification calculation at different precision gears according to the service data corresponding to the input data 320. When the precision gear is set to a low-precision gear, the result of a shallow classifier 660 is output without deep calculation, which simplifies the calculation process and reduces the calculation amount and calculation time; when the precision gear is set to a high-precision gear, the calculation result has high precision, but the calculation amount is large and the time consumption is long. The multi-precision data classification model 600 provided in this specification realizes data classification calculation at multiple precision gears through one neural network model, thereby greatly reducing memory occupation and improving the calculation efficiency of the computer.
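The early-exit behavior described here can be sketched as follows; the layer and classifier callables are toy placeholders for the convolutional data layers 620 and classifiers 660, not the patent's implementation:

```python
import numpy as np

def run_to_gear(x, layers, classifiers, gear):
    """Run the serially connected data layers only as deep as the
    classifier for the requested precision gear (gear k exits after
    layer k), then classify at that depth."""
    for depth in range(gear):
        x = layers[depth](x)          # data-layer transfer (placeholder)
    return classifiers[gear - 1](x)   # classifier attached to that layer

# Toy stand-ins: each "layer" doubles the features, each "classifier"
# thresholds their mean.
layers = [lambda x: x * 2.0] * 4
classifiers = [lambda x: int(x.mean() > 1.0)] * 4

print(run_to_gear(np.ones(8), layers, classifiers, gear=1))  # -> 1
```

A low gear touches fewer layers, which is exactly where the compute savings come from; a high gear traverses the whole stack before classifying.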
FIG. 4A shows a schematic structural diagram of a multi-precision data classification model 600A provided according to an embodiment of the present description; FIG. 4B shows a schematic structural diagram of a multi-precision data classification model 600B provided according to an embodiment of the present description; FIG. 4C shows a schematic structural diagram of a multi-precision data classification model 600C provided according to an embodiment of the present description. The multi-precision data classification model 600 may have a one-dimensional or a two-dimensional neural network structure. As shown in FIGS. 4A, 4B, and 4C, the multi-precision data classification model 600A has a one-dimensional neural network structure, while the multi-precision data classification models 600B and 600C have two-dimensional neural network structures. The multi-precision data classification models 600A, 600B, and 600C may each include multiple data layers 620. The multiple data layers 620 may be directly or indirectly connected to each other, forming a network structure along two dimensions: the number of layers (depth) l and the scale s. As shown in FIG. 4A, when the predetermined scale is a single scale (s = 1), the multi-precision data classification model 600A has a one-dimensional network structure. As shown in FIGS. 4B and 4C, when the predetermined scale is a plurality of scales (s > 1), the multi-precision data classification models 600B and 600C have two-dimensional network structures. That is, the multi-precision data classification models 600B and 600C may include multiple data layers 620, each data layer 620 including data at multiple scales.
At the same scale, the data layers 620 are connected along the depth direction l; at the same depth, the data layers 620 are connected along the scale direction s. For convenience of presentation, the multi-precision data classification models 600B and 600C shown in FIGS. 4B and 4C are described taking s = 3 scales as an example. The multiple data layers 620 are connected to form a target network. Along the depth direction l, the multiple data layers 620 may include a root node layer 622 and a plurality of sub-node layers 624. In some embodiments, the multiple data layers 620 may also include a plurality of first splice layers 640, as shown in FIGS. 4A and 4B.
As shown in FIGS. 4A, 4B, and 4C, the multiple data layers 620 in the multi-precision data classification model 600B are connected along the two dimensions of depth l and scale s. Along the depth direction l, the multiple data layers 620 at the same scale are connected in series, with the root node layer 622 as layer 1 followed by a plurality of sub-node layers 624 connected in series in turn. For ease of description, the layers of the root node layer 622 and the plurality of sub-node layers 624 are numbered i = 1, 2, …, n: the root node layer 622 is marked as layer 1, and the plurality of sub-node layers 624 are marked as the layer-2 sub-node layer 624, the layer-3 sub-node layer 624, …, and the layer-n sub-node layer 624. The layer-2 sub-node layer 624 is directly connected to the root node layer 622; the remaining sub-node layers 624 are in turn connected in series and thus indirectly connected to the root node layer 622. For convenience of description, the output data of the layer-i sub-node layer 624 is marked x_i, as shown in FIG. 4A. When the predetermined scale is a plurality of scales, the output data x_i of the layer-i sub-node layer 624 includes data of a plurality of scales, and the output data of the j-th scale of the layer-i sub-node layer 624 is marked x_i^j, as shown in FIGS. 4B and 4C, where j = 1, 2, …, s.
As shown in FIGS. 4A, 4B, and 4C, each layer-i sub-node layer 624 of the plurality of sub-node layers 624 takes the output data of the adjacent previous data layer 620 as the input data of the current layer-i sub-node layer 624, inputs that data into the transfer function 680 of the current layer-i sub-node layer 624, and calculates the output data x_i of the current layer-i sub-node layer 624. When the predetermined scale is a plurality of scales, the output data of the adjacent previous data layer 620 includes output data of a plurality of scales; therefore, the input data of the current layer-i sub-node layer 624 includes data of a plurality of different scales.
As shown in FIG. 4A, the root node layer 622 and each of the plurality of sub-node layers 624 include a transfer function 680. The transfer function 680 may be used for transferring data between different levels, and may also be used for transferring data between different scales. The transfer function 680 may be configured to convolve the data input to the transfer function 680 with a predetermined convolution kernel at a predetermined step size. As shown in FIGS. 4B and 4C, when the predetermined scale is a plurality of scales, the transfer function 680 may include a hierarchical transfer function 682 and may further include a scale transfer function 684.
As shown in FIGS. 4B and 4C, the transfer function 680 may include a hierarchical transfer function 682 for transferring data between different levels, configured to convolve the data input to the hierarchical transfer function 682 with a first convolution kernel at a first step size. For ease of description, the hierarchical transfer function 682 is labeled f_{i→i+1}, representing the hierarchical transfer function 682 used when transferring from layer i toward layer i+1. When the predetermined scale is a plurality of scales, the transfer from layer i toward layer i+1 includes hierarchical transfer functions 682 of a plurality of scales; therefore, the hierarchical transfer function 682 of the j-th scale used when transferring from layer i toward layer i+1 is marked f_{i→i+1}^j. It should be noted that the first convolution kernel may include one or more convolution kernels. The first convolution kernels of the hierarchical transfer functions 682 employed in data transfer between different levels may be different or the same. Similarly, the first step sizes of the hierarchical transfer functions 682 used in transferring data between different levels may be different or the same.
As previously described, different scales can extract different feature information. Multiple scales can extract more comprehensive feature information, containing both global overall information and local detailed information. In order for the data input to the classifier 660 to contain more comprehensive and detailed feature information, it is often necessary to perform multi-scale processing on the input data 320 and fuse the feature data of different scales to extract more comprehensive and more detailed information. As shown in FIGS. 4B and 4C, 3 different scales are shown, and each data layer contains data of the 3 different scales. In order to fuse feature data of different scales, cross-scale data processing is required.
As shown in FIGS. 4B and 4C, when performing cross-scale data processing, the transfer function 680 may further include a scale transfer function 684 for transferring data between different scales, configured to convolve the data input to the scale transfer function with a second convolution kernel at a second step size. For ease of description, the scale transfer function 684 is labeled g_{i,j→i+1,j+1}, representing the scale transfer function 684 used when transferring from the j-th scale of layer i to the (j+1)-th scale of layer i+1. It should be noted that the second convolution kernel may include one or more convolution kernels. The second convolution kernels of the scale transfer functions 684 used in transferring data between scales at different levels may be different or the same. Similarly, the second step sizes of the scale transfer functions 684 used in transferring data between scales at different levels may be different or the same.
In some embodiments, the multiple data layers 620 may further include a plurality of first splice layers 640, as shown in FIGS. 4A and 4B. In some embodiments, the multiple data layers 620 do not include the first splice layers 640, as shown in FIG. 4C. Each first splice layer 640 of the plurality of first splice layers 640 is connected in series with two adjacent sub-node layers 624. For ease of description, the first splice layer 640 connected in series between the layer-i sub-node layer 624 and the layer-(i+1) sub-node layer 624 is defined as the layer-i first splice layer 640, and the output data of the layer-i first splice layer 640 is marked y_i, as shown in FIG. 4A. When the predetermined scale is a plurality of scales, the output data y_i of the layer-i first splice layer 640 includes data of a plurality of scales, and the output data of the j-th scale of the layer-i first splice layer 640 is marked y_i^j, as shown in FIG. 4B. As shown in FIGS. 4A and 4B, each layer-i first splice layer 640 of the plurality of first splice layers 640 connects two adjacent sub-node layers 624 (layer i and layer i+1), splices the output data (x_1, x_2, …, x_i) of the root node layer 622 (layer 1) and the sub-node layers 624 (layer 2 to layer i) as the output data y_i of the current layer-i first splice layer 640, and inputs it into the next (layer-(i+1)) sub-node layer 624. Specifically, the current layer-i first splice layer 640 may splice the output data x_i of the adjacent previous (layer-i) sub-node layer 624 and the output data y_{i−1} of the adjacent previous (layer-(i−1)) first splice layer 640 as the output data y_i of the current layer-i first splice layer 640, and input it into the next (layer-(i+1)) sub-node layer 624.

As shown in FIGS. 4A and 4B, the output data y_i of the layer-i first splice layer 640 of the plurality of first splice layers 640 can be expressed by the following formula, where [·] denotes the splicing (concatenation) operation:

y_i = [y_{i−1}, x_i] = [x_1, x_2, …, x_i]

When the predetermined scale is a plurality of scales, the output data y_i^j of the j-th scale in the output data of the layer-i first splice layer 640 can be expressed by the following formula:

y_i^j = [y_{i−1}^j, x_i^j] = [x_1^j, x_2^j, …, x_i^j]
It should be noted that the multiple data layers 620 may include a plurality of first splice layers 640, or may not include the first splice layers 640. When the multiple data layers 620 include a plurality of first splice layers 640, the plurality of first splice layers 640 and the plurality of sub-node layers 624 are alternately connected in series. Each layer-i sub-node layer 624 of the plurality of sub-node layers 624 takes the output data y_{i−1} of the adjacent previous (layer-(i−1)) first splice layer 640 as the input data of the current layer-i sub-node layer 624, inputs that data into the transfer function 680 of the current layer-i sub-node layer 624, and calculates the output data x_i of the current layer-i sub-node layer 624. When the multiple data layers 620 include a plurality of first splice layers 640, each classifier 660 of the plurality of classifiers 660 receives the output data y_i of one first splice layer 640 of the plurality of first splice layers 640. When the predetermined scale is a plurality of scales, the output data y_i of the first splice layer 640 includes output data of different scales (y_i^1, …, y_i^s); the classifier 660 receives the output data of the first splice layer 640 that contains the most information of different scales, which will be described in detail later.
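The accumulation performed by the first splice layers can be sketched with NumPy channel-wise concatenation (a minimal sketch; `first_splice` is a hypothetical helper, and feature maps are arrays of shape (channels, height, width)):

```python
import numpy as np

def first_splice(prev_splice, x_i):
    """First splice layer: concatenate, along the channel axis, the
    output of the previous splice layer with the output of the current
    sub-node layer, so that y_i = [x_1, ..., x_i]."""
    if prev_splice is None:          # y_1 = x_1 (root node layer output)
        return x_i
    return np.concatenate([prev_splice, x_i], axis=0)

# Three layer outputs with 2 channels each on a 4x4 feature map.
xs = [np.full((2, 4, 4), float(i)) for i in (1, 2, 3)]
y = None
for x in xs:
    y = first_splice(y, x)
print(y.shape)   # (6, 4, 4): channels of x_1, x_2, and x_3 stacked
```

Each splice layer thus hands every downstream sub-node layer (and every classifier attached to a splice layer) the full history of shallower outputs, which is what makes the dense shallow-to-deep connections possible.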
In the model training process, a shallow classifier 660 tends to drive the trained parameters toward values favorable to the classification at its own level, which is not necessarily optimal for the classification effect of a deeper classifier 660. Thus, a first splice layer 640 is interposed between the sub-node layers 624, and the first splice layer 640 splices the output data of all previous sub-node layers 624. In this way, each classifier 660 is densely connected to the shallower layers, so that during model training the shallow parameters benefit both the classification at their own level and the classification at the deeper levels, ensuring the final performance of the network. During back propagation, each classifier 660 can directly affect the shallow layers, so that the result of model training is updated in a direction that improves the effect of every classifier 660.
As described above, when the predetermined scale is a plurality of scales, the input data of the current layer-i sub-node layer 624 includes data of a plurality of different scales. In order to fuse feature data of different scales, each layer-i sub-node layer 624 of the plurality of sub-node layers 624 may include a second splice layer 630 configured to splice data of different scales. As shown in FIG. 4B, the j-th-scale input data of the current layer-i sub-node layer 624 may include, from the output data y_{i−1} of the adjacent previous (layer-(i−1)) first splice layer 640, the output data y_{i−1}^j of the j-th scale and the output data y_{i−1}^{j−1} of the (j−1)-th scale. The second splice layer 630 in the current layer-i sub-node layer 624 is used to splice the results that the transfer function 680 calculates from these output data of different scales. Specifically, inputting the input data (y_{i−1}^{j−1} and y_{i−1}^j) of the current sub-node layer into the current layer-i sub-node layer 624 to calculate the output data x_i^j of the current sub-node layer includes:

inputting the output data of the plurality of scales (y_{i−1}^{j−1} and y_{i−1}^j) in the output data y_{i−1} of the adjacent previous (layer-(i−1)) first splice layer 640 into the transfer function 680 for calculation; and

inputting the results of the plurality of scales calculated by the transfer function 680 into the second splice layer 630 for splicing, to obtain the j-th-scale output data x_i^j of the current layer-i sub-node layer 624.
The calculation of the transfer function 680 includes the calculation of the hierarchical transfer function 682 and the calculation of the scale transfer function 684: the hierarchical transfer function 682 is used to calculate on y_{i−1}^j, and the scale transfer function 684 is used to calculate on y_{i−1}^{j−1}. Specifically, the calculation of the output data x_i^j of the current layer-i sub-node layer 624 can be expressed as the following formula (for the first scale, j = 1, only the hierarchical term remains):

x_i^j = [f_{i−1→i}^j(y_{i−1}^j), g_{i−1,j−1→i,j}(y_{i−1}^{j−1})]
It can be seen that the output data of the j-th scale of layer i fuses the output data of the j-th scale and of the (j−1)-th scale of layer i−1, while the output data of the (j−1)-th scale of layer i−1 in turn fuses the output data of the (j−1)-th scale and of the (j−2)-th scale of layer i−2. By analogy, the output data of the j-th scale of layer i fuses output data from the 1st scale up to the j-th scale. Specifically, the embodiment shown in FIG. 4B is taken as an example for explanation. Table 1 lists the output data of each data layer 620 in the embodiment shown in FIG. 4B, where x_0 represents the input data 320.
It should be noted that the second splice layer 630 may splice data of a limited number of the plurality of scales, or may splice data of all of the plurality of scales.
In summary, in the embodiment shown in FIG. 4B, more abstract features are obtained by extracting features of a plurality of different scales from the input data 320, and the feature data of different scales are fused to obtain more comprehensive and more detailed information. The fused feature data is input into the classifier, so that the classifier can obtain a better, more comprehensive, and more accurate classification result.
As can be seen from Table 1, the output data received by a deep classifier 660 fuses not only the information of different scales but also all the information of the shallower levels, so the data input into the classifier 660 may contain more comprehensive and detailed feature information. In addition, the data of each level is connected to the data of the shallower levels, so that in the training process of the model, the parameters of each shallow level benefit the classification effect at its own level and, at the same time, benefit the classification effect at the deep levels.
As previously described, the multiple data layers 620 may not include the first splice layers 640. As shown in FIG. 4C, when the multiple data layers 620 do not include the first splice layers 640, each layer-i sub-node layer 624 of the plurality of sub-node layers 624 takes the output data x_{i−1} of the adjacent previous (layer-(i−1)) sub-node layer 624 as the input data of the current layer-i sub-node layer 624, inputs that data into the transfer function 680 of the current layer-i sub-node layer 624, and calculates the output data x_i of the current layer-i sub-node layer 624. When the multiple data layers 620 do not include the first splice layers 640, each classifier 660 of the plurality of classifiers 660 is connected to one of the plurality of sub-node layers 624 and receives the output data x_i of the sub-node layer 624 connected thereto. When the predetermined scale is a plurality of scales, the output data x_i of each layer-i sub-node layer 624 also includes output data of a plurality of different scales (x_i^1, …, x_i^s). Specifically, the output data of a plurality of scales (x_{i−1}^{j−1} and x_{i−1}^j) among the input data of the current layer-i sub-node layer 624 are input into the transfer function 680 for calculation; the results of the plurality of scales calculated by the transfer function 680 are input into the second splice layer 630 for splicing, obtaining the output data x_i^j of the current layer-i sub-node layer 624. The calculation of the transfer function 680 includes the calculation of the hierarchical transfer function 682 and the calculation of the scale transfer function 684: the hierarchical transfer function 682 is used to calculate on x_{i−1}^j, and the scale transfer function 684 is used to calculate on x_{i−1}^{j−1}. Specifically, the calculation of the output data x_i^j of the current layer-i sub-node layer 624 can be expressed as the following formula:

x_i^j = [f_{i−1→i}^j(x_{i−1}^j), g_{i−1,j−1→i,j}(x_{i−1}^{j−1})]
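The cross-scale computation at a sub-node layer can be sketched as follows, with identity standing in for the hierarchical transfer function 682 and stride-2 subsampling standing in for the scale transfer function 684 (both are strided convolutions in the model itself); feature maps are arrays of shape (channels, height, width), ordered from finest to coarsest scale:

```python
import numpy as np

def next_layer(prev):
    """One layer step: for each scale j, splice the hierarchically
    transferred same-scale data with the scale-transferred data from
    the finer scale j-1 (second splice layer = channel concatenation)."""
    out = []
    for j, x in enumerate(prev):
        parts = [x]                                  # stands in for f(x_{i-1}^j)
        if j > 0:
            parts.append(prev[j - 1][:, ::2, ::2])   # stands in for g(x_{i-1}^{j-1})
        out.append(np.concatenate(parts, axis=0))    # second splice layer
    return out

prev = [np.zeros((1, 16, 16)), np.zeros((1, 8, 8)), np.zeros((1, 4, 4))]
cur = next_layer(prev)
print([t.shape for t in cur])  # [(1, 16, 16), (2, 8, 8), (2, 4, 4)]
```

Note how every scale except the finest grows in channels: each coarser scale keeps absorbing information from the scales above it, which is the fusion the formula expresses.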
FIG. 5 shows a data flow diagram of the multi-precision data classification model 600B provided according to an embodiment of the present description. In FIG. 5, the output data of one sub-node layer 624 is taken as an example for further description, and the arrows represent the direction of data transfer. As can be seen in conjunction with Table 1, the data in the solid-line boxes represent data directly related to that output data; the data in the dashed-line boxes are directly related to the data in the solid-line boxes and therefore represent data indirectly related to that output data; the data without boxes represent unrelated data. As shown in FIG. 5, the output data can be directly or indirectly connected to data of different layers and different scales; thus, it contains the feature information of a plurality of data at lower levels and lower scales, that is, more comprehensive and more detailed features. It can be seen that deeper, larger-scale output data has more comprehensive and more detailed feature information, so the accuracy of data classification is higher, but the calculation amount is also larger.
It should be noted that, when performing feature extraction of different scales on the input data 320, the approach shown in FIG. 4B may be used: features are first extracted from the input data 320 at one scale, and different convolution kernels are then used to extract features of the other scales. Alternatively, a plurality of different convolution kernels may be applied directly to the input data 320 to obtain feature data of different scales.
When the splicing operation is performed on data of different scales, the data of different scales need to be padded and converted into data of the same specification and size. Typically, the small-scale data is padded with zeros to generate data of a fixed specification and size.
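This zero-padding step can be sketched with `np.pad` (a minimal sketch; padding on the bottom/right edges is an assumed convention, as the specification does not fix one):

```python
import numpy as np

def pad_to(x, h, w):
    """Zero-pad a small-scale feature map up to a fixed h x w size so
    that maps of different scales can be spliced together."""
    ph, pw = h - x.shape[0], w - x.shape[1]
    return np.pad(x, ((0, ph), (0, pw)), constant_values=0)

small = np.ones((8, 8))
padded = pad_to(small, 16, 16)
print(padded.shape, padded.sum())  # (16, 16) 64.0
```

The original 8×8 values survive in the top-left corner and the added region is all zeros, so downstream splicing sees uniformly sized inputs without any change to the real feature values.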
Fig. 6 shows a flow chart of a method P100 of multi-precision data classification. As described above, the server 200 may perform the method P100 of multi-precision data classification provided in the present specification. Specifically, the processor 220 in the server 200 may perform the method P100 of multi-precision data classification provided in the present specification. The method P100 comprises the following steps performed by the at least one processor 220:
S100: the multi-precision data classification model 600 is loaded.
S300: and acquiring target data and target service data, and setting target precision gears of the target data classification based on the target service data, wherein the plurality of precision gears comprise the target precision gears.
The target data is the input data 320 of the multi-precision data classification model 600. The target data differs in different scenarios. A risk identification scenario is taken as an example. In the risk identification scenario of the Taobao™ system, the target user 110 performs a payment transaction through the Taobao™ APP at the current moment; the target data may be the historical characteristic behaviors and attribute characteristic behaviors of the target user 110 on the Taobao™ APP within a preset time window before the current moment, such as transfer, offline payment, repayment, code scanning, login, binding, payment, card swiping, account opening, and the like. Acquiring the target data includes acquiring the historical characteristic behavior data and attribute characteristic behavior data of the target user 110 in the preset time window before the current moment, and generating the input data 320 according to the time sequence.
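Assembling the input data 320 from time-stamped behavior records can be sketched as follows; the record fields (`ts`, `type`, `amount`) and the window arithmetic are illustrative only:

```python
# Hypothetical behavior records for one user (Unix timestamps).
events = [
    {"ts": 1700000300, "type": "login",    "amount": 0.0},
    {"ts": 1700000100, "type": "transfer", "amount": 120.0},
    {"ts": 1690000000, "type": "payment",  "amount": 15.0},  # outside the window
]

def build_input(events, now, window):
    """Keep the events inside the preset time window before `now` and
    order them by timestamp, yielding a time-sequenced feature list."""
    kept = [e for e in events if now - window <= e["ts"] <= now]
    return [(e["type"], e["amount"]) for e in sorted(kept, key=lambda e: e["ts"])]

seq = build_input(events, now=1700000400, window=3600)
print(seq)  # [('transfer', 120.0), ('login', 0.0)]
```

The resulting time-ordered sequence would then be vectorized (e.g., one row per event) to form the matrix or vector that the model 600 accepts as input data 320.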
The target service data refers to the service data corresponding to the target data at the current moment. A risk identification scenario is taken as an example. In the risk identification scenario of the Taobao™ system, the target user 110 performs a payment transaction through the Taobao™ APP at the current moment; the target service data may be the data stream density of the platform's transaction behavior at the current moment, the number of users online at the current moment, the timestamp, and the like. As described above, the precision gears of data classification are divided based on the service data. When the multi-precision data classification model 600 is used to classify the target data, the precision gear corresponding to the target data at the current moment, that is, the target precision gear, needs to be set according to the target service data of the target data. The target precision gear may be set manually in the background or automatically by the server 200. Manual setting may mean that the background switches the precision gear according to different operation modes. Automatic setting may mean that the server 200 inputs the target service data into a precision gear setting model, which outputs the target precision gear. The precision gear setting model may be trained based on service data and precision gear classifications, and may be connected with the multi-precision data classification model 600.
S500: the target data is input as input data 320 to the multi-precision data classification model 600.
S700: the classification result of the target data is output by a target classifier of the multi-precision data classification model 600, where the plurality of classifiers 660 include the target classifier and the target classifier represents the target precision gear.
The target precision gear corresponding to the target data at the current moment is set according to the target service data of the target data, which determines the target classifier corresponding to that gear. The target data is input into the multi-precision data classification model 600, and the model outputs the result of the target classifier as the classification result of the target data. For example, in the risk identification scenario of the Taobao™ system, the target user 110 performs a transaction on Double Eleven, when transaction volume surges. The Taobao™ system therefore sets the precision gear of data classification to the second precision based on the target service data of the target user 110. The corresponding target classifier is connected to the third-layer first splicing layer 640, so the multi-precision data classification model 600 feeds the output of the third-layer first splicing layer 640 into the target classifier and takes the output of the target classifier as the classification result of the target data.
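The gear-dependent early exit described above can be sketched as follows, with toy stand-in layers and classifiers; in the actual model each classifier attaches to a first splicing layer, and the functions here are hypothetical:

```python
# Sketch: early-exit inference. The model runs only as deep as the
# layer tied to the target gear, then the classifier at that depth
# produces the result, so lower gears cost less computation.
def classify_with_gear(x, layers, classifiers, gear):
    """layers: list of layer functions, shallow to deep.
    classifiers: one classifier per gear; gear g uses the output
    of the first g layers."""
    h = x
    for layer in layers[:gear]:
        h = layer(h)
    return classifiers[gear - 1](h)

# Toy stand-ins for the trained network.
layers = [lambda h: h + 1, lambda h: h * 2, lambda h: h - 3]
classifiers = [lambda h: "risk" if h > 5 else "no_risk"] * 3

# Gear 2 runs two layers: (4 + 1) * 2 = 10 -> "risk"
print(classify_with_gear(4, layers, classifiers, 2))  # -> risk
# Gear 1 runs one layer: 4 + 1 = 5 -> "no_risk"
print(classify_with_gear(4, layers, classifiers, 1))  # -> no_risk
```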
With the advent of the IoT (Internet of Things) era, neural network models are increasingly deployed on various smart devices, such as smartphones, smart wearables, and mobile payment tools. The multi-precision data classification model 600, the method P100, and the system 100 provided in this specification can be applied not only to data classification scenarios but also on such devices. In that case, the service data may be the device's model, memory, battery level, and the like, and the target service data may be the device's memory, battery level, and model at the current moment. The device's system can define computation modes at different precision gears according to the service data, and adjust the precision gear according to the target service data, thereby saving computation time and improving computation efficiency. For example, when a smartphone's battery is low, its system can switch to a low-precision computation mode to reduce the amount of computation and the power consumption.
In summary, this specification provides a multi-precision data classification model 600, a method P100, and a system 100. The multi-precision data classification model 600 includes multiple data layers 620 and a plurality of classifiers 660; the classifiers 660 are connected to the data layers 620 respectively and represent a plurality of precision gears. The method P100 and the system 100 input the target data into the multi-precision data classification model 600 and set a target precision gear based on the current target service data, taking the output of the target classifier corresponding to that gear as the classification result of the target data. By selecting the model's computation exit based on the target service data, the method P100 and the system 100 simplify computation for simple samples and increase precision for complex samples, thereby saving computation time and improving computation efficiency.
The foregoing describes specific embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In view of the foregoing, it will be evident to a person skilled in the art that the foregoing detailed disclosure is presented by way of example only and is not limiting. Although not explicitly stated herein, those skilled in the art will appreciate that the present description is intended to encompass various reasonable adaptations, improvements, and modifications of the embodiments. Such alterations, improvements, and modifications are intended to be suggested by this specification and are within the spirit and scope of its exemplary embodiments.
Furthermore, certain terminology has been used to describe embodiments of the present specification. For example, "one embodiment," "an embodiment," and/or "some embodiments" mean that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present specification. Thus, it is emphasized and should be appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various portions of this specification do not necessarily all refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined as suitable in one or more embodiments of the present specification.
It should be appreciated that in the foregoing description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof in order to streamline the disclosure and aid the understanding of one feature. This does not mean, however, that the combination of these features is required; a person skilled in the art, upon reading this description, may well extract some of those features as separate embodiments. That is, an embodiment in this specification may also be understood as an integration of multiple secondary embodiments, each of which may be defined by less than all of the features of the single disclosed embodiment.
Each patent, patent application, publication of a patent application, and other material, such as articles, books, specifications, publications, documents, and the like, cited herein is hereby incorporated by reference in its entirety for all purposes, except for any prosecution file history associated with the same, any of the same that is inconsistent with or in conflict with the present document, and any of the same that may have a limiting effect on the broadest scope of the claims now or later associated with the present document. For example, if there is any inconsistency or conflict between the description, definition, and/or use of a term associated with any of the incorporated material and that associated with the present document, the description, definition, and/or use of the term in the present document shall prevail.
Finally, it is to be understood that the embodiments disclosed herein are illustrative of the principles of the embodiments of the present specification. Other modified embodiments are also within the scope of this specification. Accordingly, the embodiments disclosed herein are presented by way of example only and not limitation. Based on the embodiments in this specification, those skilled in the art may adopt alternative configurations to implement the application described herein. Therefore, the embodiments of the present specification are not limited to those precisely described above.
Claims (10)
1. A method of risk identification, comprising:
loading a risk identification model, the risk identification model comprising:
a plurality of data layers connected in a preset manner and configured to process input data according to a preset scale;
a plurality of classifiers, each of the plurality of classifiers receiving output data of one of the plurality of data layers and being configured to classify the input data based on the output data of that data layer, the plurality of classifiers corresponding to a plurality of precision gears divided based on business scenarios, wherein the risk identification model is configured to classify the input data at a corresponding precision gear, the risk identification model being constructed based on a neural network model;
acquiring a transaction behavior of a target user and a business scenario in which the transaction behavior occurs, and setting a target precision gear based on the business scenario in which the transaction behavior occurs, wherein the plurality of precision gears include the target precision gear;
inputting the transaction behavior as the input data into the risk identification model; and
outputting a classification result of the transaction behavior through a target classifier of the risk identification model, wherein the target classifier represents the target precision gear, the plurality of classifiers include the target classifier, and the classification result comprises risk or no risk.
2. The risk identification method of claim 1, wherein the plurality of data layers are connected into a target network, the plurality of data layers comprising:
a root node layer; and
a plurality of sub-node layers connected in series with one another and with the root node layer.
3. The risk identification method of claim 2, wherein the root node layer and each of the plurality of sub-node layers comprise a transfer function, and each of the plurality of sub-node layers takes the output data of the adjacent previous data layer as the input data of the current sub-node layer, the input data of the current sub-node layer being input into the current sub-node layer to calculate the output data of the current sub-node layer.
4. The risk identification method of claim 3, wherein the plurality of data layers further comprise:
a plurality of first splicing layers, each first splicing layer of the plurality of first splicing layers being connected in series between two adjacent sub-node layers, splicing the output data of the root node layer and of all sub-node layers preceding the current first splicing layer to serve as the output data of the current first splicing layer, and inputting that output data into the next adjacent sub-node layer.
5. The risk identification method of claim 4, wherein each classifier of the plurality of classifiers receives the output data of one of the plurality of first splicing layers.
6. A method of risk identification as claimed in claim 3, wherein the transfer function comprises:
a hierarchical transfer function configured to perform a convolution operation on data input into the hierarchical transfer function with a first convolution kernel at a first step size.
7. The method of risk identification of claim 6 wherein the predetermined scale comprises a plurality of scales.
8. The risk identification method of claim 7, wherein the output data of the adjacent previous data layer comprises output data of a plurality of scales, and each of the plurality of sub-node layers further comprises a second splicing layer configured to splice data of different scales,
wherein inputting the input data of the current sub-node layer into the current sub-node layer to calculate the output data of the current sub-node layer comprises:
inputting the output data of the multiple scales into the transfer function for calculation; and
inputting the results of the plurality of scales calculated by the transfer function into the second splicing layer for splicing, to obtain the output data of the current sub-node layer.
9. The method of risk identification of claim 8 wherein the transfer function further comprises:
a scale transfer function for transferring data between different scales, configured to perform a convolution operation on the data input into the scale transfer function with a second convolution kernel at a second step size.
10. A system of risk identification, comprising:
at least one storage medium comprising at least one set of instructions for risk identification; and
at least one processor communicatively coupled to the at least one storage medium,
wherein, when the system runs, the at least one processor reads the at least one set of instructions and, as directed by the at least one set of instructions, performs the risk identification method of any one of claims 1-9.
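For illustration, a minimal NumPy sketch of the structure recited in claims 1-5: a root node layer, serial sub-node layers, first splicing layers that concatenate the outputs of all preceding layers, and one classifier per splicing layer. All dimensions, weights, activation choices, and the two-class readout are assumptions, not details from the claims:

```python
# Sketch of claims 1-5: deeper gears splice more layer outputs and
# use a wider classifier; gear 1 exits earliest.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

def layer(w):
    # Transfer function of a data layer (ReLU is an assumption).
    return lambda h: np.maximum(w @ h, 0.0)

root = layer(rng.normal(size=(DIM, DIM)))                      # root node layer
sub_layers = [layer(rng.normal(size=(DIM, DIM))) for _ in range(3)]
# One linear classifier per splicing layer; its input width grows
# with depth because each splice concatenates more outputs.
classifiers = [rng.normal(size=(2, DIM * (k + 2))) for k in range(3)]

def forward(x, gear):
    outputs = [root(x)]
    for sub in sub_layers[:gear]:
        outputs.append(sub(outputs[-1]))       # serial sub-node layers
    spliced = np.concatenate(outputs)          # first splicing layer
    logits = classifiers[gear - 1] @ spliced   # target classifier
    return "risk" if logits[0] > logits[1] else "no risk"

print(forward(rng.normal(size=DIM), gear=2))
```

The concatenation of all preceding outputs resembles a DenseNet-style connection pattern; the claims only require that each splicing layer gather the root layer and all earlier sub-node layers.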
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010440549.8A CN111475587B (en) | 2020-05-22 | 2020-05-22 | Risk identification method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111475587A CN111475587A (en) | 2020-07-31 |
CN111475587B true CN111475587B (en) | 2023-06-09 |
Family
ID=71765235
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010440549.8A Active CN111475587B (en) | 2020-05-22 | 2020-05-22 | Risk identification method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111475587B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116383708B (en) * | 2023-05-25 | 2023-08-29 | 北京芯盾时代科技有限公司 | Transaction account identification method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108509978A (en) * | 2018-02-28 | 2018-09-07 | 中南大学 | The multi-class targets detection method and model of multi-stage characteristics fusion based on CNN |
CN110210313A (en) * | 2019-05-06 | 2019-09-06 | 河海大学 | United Hyperspectral Remote Sensing Imagery Classification method is composed based on multiple dimensioned PCA-3D-CNN sky |
CN110597816A (en) * | 2019-09-17 | 2019-12-20 | 深圳追一科技有限公司 | Data processing method, data processing device, computer equipment and computer readable storage medium |
CN110909775A (en) * | 2019-11-08 | 2020-03-24 | 支付宝(杭州)信息技术有限公司 | Data processing method and device and electronic equipment |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2381689A1 (en) * | 2002-04-12 | 2003-10-12 | Algorithmics International Corp. | System, method and framework for generating scenarios |
US10453015B2 (en) * | 2015-07-29 | 2019-10-22 | International Business Machines Corporation | Injury risk factor identification, prediction, and mitigation |
- 2020-05-22: CN application CN202010440549.8A granted as patent CN111475587B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN111475587A (en) | 2020-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11544550B2 (en) | Analyzing spatially-sparse data based on submanifold sparse convolutional neural networks | |
Fu et al. | Credit card fraud detection using convolutional neural networks | |
US20220335711A1 (en) | Method for generating pre-trained model, electronic device and storage medium | |
TW201942826A (en) | Payment mode recommendation method and device and equipment | |
US10496752B1 (en) | Consumer insights analysis using word embeddings | |
US10685183B1 (en) | Consumer insights analysis using word embeddings | |
US11182806B1 (en) | Consumer insights analysis by identifying a similarity in public sentiments for a pair of entities | |
US10558759B1 (en) | Consumer insights analysis using word embeddings | |
US10509863B1 (en) | Consumer insights analysis using word embeddings | |
US11113315B2 (en) | Search keyword generation | |
US10803248B1 (en) | Consumer insights analysis using word embeddings | |
CN110674188A (en) | Feature extraction method, device and equipment | |
CN108961032A (en) | Borrow or lend money processing method, device and server | |
US20210026891A1 (en) | Information processing method, related device, and computer storage medium | |
CN108804617A (en) | Field term abstracting method, device, terminal device and storage medium | |
CN112989085B (en) | Image processing method, device, computer equipment and storage medium | |
CN112214652A (en) | Message generation method, device and equipment | |
CN111475587B (en) | Risk identification method and system | |
CN113535912B (en) | Text association method and related equipment based on graph rolling network and attention mechanism | |
US11030539B1 (en) | Consumer insights analysis using word embeddings | |
CN114330476A (en) | Model training method for media content recognition and media content recognition method | |
CN116993513A (en) | Financial wind control model interpretation method and device and computer equipment | |
CN113988878B (en) | Graph database technology-based anti-fraud method and system | |
Levus et al. | Intelligent System for Arbitrage Situations Searching in the Cryptocurrency Market. | |
CN111475652B (en) | Data mining method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||