CN116976885A - Training method of object recognition and recognition model and related device - Google Patents

Training method of object recognition and recognition model and related device

Info

Publication number
CN116976885A
CN116976885A
Authority
CN
China
Prior art keywords
feature
feature set
attribute
recognition
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211236681.2A
Other languages
Chinese (zh)
Inventor
洪伟俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202211236681.2A
Publication of CN116976885A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/382 Payment protocols; Details thereof insuring higher security of transaction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/04 Payment circuits
    • G06Q20/06 Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme
    • G06Q20/065 Private payment circuits, e.g. involving electronic currency used among participants of a common payment scheme using e-cash
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/30 Payment architectures, schemes or protocols characterised by the use of specific devices or networks
    • G06Q20/32 Payment architectures, schemes or protocols characterised by the use of specific devices or networks using wireless devices
    • G06Q20/326 Payment applications installed on the mobile devices

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Accounting & Taxation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Finance (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Security & Cryptography (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of artificial intelligence, and in particular to an object recognition method, a training method for a recognition model, and a related device, applicable to scenes such as cloud technology, artificial intelligence, intelligent traffic, and assisted driving. The method acquires the original attribute features of an object to be identified; inputs those original attribute features into a trained recognition model and determines a target feature set of the object to be identified based on them; performs dimension reduction processing on each cross attribute feature contained in the target feature set to obtain the corresponding reduced cross attribute features; performs feature stitching on the reduced cross attribute features and the original attribute features of the object to be identified to obtain a target stitching feature; and performs nonlinear conversion on the target stitching feature based on a preset activation function to obtain the recognition result corresponding to the object to be identified, thereby improving recognition accuracy.

Description

Training method of object recognition and recognition model and related device
Technical Field
The application relates to the technical field of artificial intelligence, and in particular to an object recognition method, a training method for a recognition model, and a related device.
Background
Currently, with the development of online payment technology, illegal transactions are increasingly common; for example, an illegal actor may induce a target transaction object to conduct illegal transactions online. Therefore, to secure online payment, it is necessary to recognize whether an object to be identified is a target object before a transaction completes.
In the related art, object recognition is generally performed by mining decision rules, training a recognition model based on those decision rules, and then using the model to determine whether the object to be identified is a target object.
However, because target objects and normal objects often share relatively similar features, a recognition model trained only on simple decision rules is prone to misjudgment during recognition. For example, both merchants and target objects may handle large payment volumes, so a merchant may be determined to be a target object by the recognition model in the related art.
Therefore, with such a model training manner in the related art, the accuracy of the obtained recognition model is not high.
Disclosure of Invention
The embodiments of the application provide an object recognition method, a training method for a recognition model, and a related device, which are used to improve the accuracy of the trained recognition model.
The specific technical scheme provided by the embodiment of the application is as follows:
in one aspect, an embodiment of the present application provides an object recognition method, including:
acquiring original attribute characteristics of an object to be identified;
inputting the original attribute features of the object to be identified into a trained recognition model, and determining a target feature set of the object to be identified based on those original attribute features;
performing dimension reduction processing on each cross attribute feature contained in the target feature set of the object to be identified, to obtain the corresponding reduced cross attribute features;
performing feature stitching on the reduced cross attribute features of the object to be identified and the original attribute features of the object to be identified, to obtain a target stitching feature;
and performing nonlinear conversion on the target stitching feature based on a preset activation function, to obtain a recognition result corresponding to the object to be identified.
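The five steps above amount to a single forward pass at inference time. Below is a minimal sketch of one possible reading of that pass, assuming the dimension reduction is a learned linear projection per cross attribute feature and the preset activation function is a sigmoid; every name, shape, and the 0.5 decision threshold are illustrative assumptions, not details fixed by this disclosure:
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def recognize(original_feats, cross_feats, proj_mats, out_weights, out_bias):
    # dimension reduction: one learned projection per cross attribute feature
    reduced = [P @ f for P, f in zip(proj_mats, cross_feats)]
    # feature stitching: reduced cross features joined with the original features
    stitched = np.concatenate([np.asarray(original_feats)] + reduced)
    # nonlinear conversion via the preset activation function
    score = sigmoid(out_weights @ stitched + out_bias)
    return ("abnormal" if score >= 0.5 else "normal"), float(score)

rng = np.random.default_rng(0)
orig = rng.normal(size=8)                             # 8 first-order features
crosses = [rng.normal(size=16), rng.normal(size=16)]  # 2 cross attribute features
projs = [rng.normal(size=(4, 16)) for _ in crosses]   # 16 -> 4 dimension reduction
w, b = rng.normal(size=8 + 4 * 2), 0.0
print(recognize(orig, crosses, projs, w, b))
```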
In one aspect, an embodiment of the present application provides a training method for an identification model, including:
obtaining a set of object samples, each object sample comprising: each original attribute feature of the corresponding object and an identification tag obtained after identification of the object;
performing iterative training on the recognition model to be trained based on the object sample set; during one round of iterative training, the following operations are executed for one extracted object sample:
performing feature intersection on each original attribute feature contained in the object sample to obtain each intersection attribute feature, respectively generating a corresponding candidate feature set based on each intersection attribute feature, and determining a target feature set with gain meeting a preset gain condition from each generated candidate feature set, wherein each gain is obtained based on an identification result and an identification tag of the corresponding candidate feature set;
based on the target feature set and the original attribute features, a predicted recognition result corresponding to the object sample is obtained, and the corresponding recognition label is combined to perform parameter adjustment on the recognition model.
Optionally, feature intersection is performed on each original attribute feature contained in the object sample to obtain each intersection attribute feature, a corresponding candidate feature set is generated based on each intersection attribute feature, and a target feature set meeting a preset gain condition is determined from each generated candidate feature set, including:
performing pairwise feature intersection on the original attribute features contained in the object sample to obtain the corresponding intersection attribute features, and generating a corresponding candidate feature set based on each intersection attribute feature;
iteratively executing the following operations until no candidate feature set meets the preset gain condition, and outputting the finally determined target feature set:
determining a target feature set meeting the preset gain condition from each current candidate feature set;
performing feature intersection between the highest-order intersection attribute feature in the target feature set and each original attribute feature, respectively, to obtain corresponding new intersection attribute features;
based on each new cross attribute feature, a corresponding new candidate feature set is generated.
Optionally, determining, from each current candidate feature set, a target feature set that meets the preset gain condition includes:
respectively inputting each current candidate feature set into a recognition model to obtain a recognition result of the corresponding candidate feature set, and respectively obtaining the gain of the corresponding candidate feature set based on each recognition result and the recognition tag;
and taking the candidate feature set with the smallest gain as a target feature set based on the obtained gains.
Optionally, the current candidate feature sets are respectively input into the recognition model to obtain recognition results of the corresponding candidate feature sets, and gains of the corresponding candidate feature sets are respectively obtained based on the recognition results and the recognition labels, including:
evenly dividing the currently determined candidate feature sets into a preset number of feature groups;
for the preset plurality of feature groups, the following operations are respectively executed:
inputting each candidate feature set in one feature group into the recognition model, and performing parameter adjustment on the feature model parameters respectively associated with each cross attribute feature contained in the corresponding candidate feature set in the recognition model;
after the feature model parameters have been adjusted, obtaining the recognition results corresponding to the candidate feature sets in the feature group;
and determining gain values corresponding to candidate feature sets in the feature group based on the identification results and the identification labels.
Optionally, performing parameter adjustment on model parameters associated with each cross attribute feature contained in the corresponding candidate feature set in the identification model, where the parameter adjustment includes:
determining the cross attribute features reaching the current highest order from all the cross attribute features contained in the corresponding candidate feature sets;
and performing parameter adjustment on the feature model parameters associated with the cross attribute features reaching the current highest order.
Optionally, the obtaining a predicted recognition result corresponding to the object sample based on the determined target feature set and the original attribute features includes:
performing dimension reduction processing on each cross attribute feature contained in the determined target feature set to obtain corresponding dimension reduced cross attribute features;
performing feature stitching on the cross attribute features subjected to the dimension reduction and the original attribute features to obtain sample stitching features;
and performing nonlinear conversion on the sample stitching features based on a preset activation function, to obtain a predicted recognition result corresponding to the object sample.
Optionally, performing parameter adjustment on the identification model includes:
determining a loss value corresponding to the one object sample based on the predicted identification result and the identification tag;
and performing parameter adjustment, based on the loss value, on each recognition model parameter associated with the stitching feature in the recognition model.
In one aspect, an embodiment of the present application provides an object recognition apparatus, including:
the acquisition module is used for acquiring original attribute characteristics of the object to be identified;
the first determining module is used for inputting original attribute features of the object to be identified into a trained recognition model, and determining a target feature set of the object to be identified based on the original attribute features of the object to be identified;
the dimension reduction module is used for respectively carrying out dimension reduction processing on each cross attribute feature contained in the target feature set of the object to be identified to obtain corresponding dimension reduced cross attribute features;
the stitching module is used for performing feature stitching on the reduced cross attribute features of the object to be identified and the original attribute features of the object to be identified, to obtain a target stitching feature;
and the second determining module is used for performing nonlinear conversion on the target stitching feature based on a preset activation function, to obtain a recognition result corresponding to the object to be identified.
In one aspect, an embodiment of the present application provides a training device for identifying a model, including:
an acquisition module for acquiring a set of object samples, each object sample comprising: each original attribute feature of the corresponding object and an identification tag obtained after identification of the object;
the training module is used for carrying out iterative training on the recognition model to be trained based on the object sample set; in a round of iterative training process, the following operations are executed for one extracted object sample:
performing feature intersection on each original attribute feature contained in the object sample to obtain each intersection attribute feature, respectively generating a corresponding candidate feature set based on each intersection attribute feature, and determining a target feature set with gain meeting a preset gain condition from each generated candidate feature set, wherein each gain is obtained based on an identification result and an identification tag of the corresponding candidate feature set;
based on the target feature set and the original attribute features, a predicted recognition result corresponding to the object sample is obtained, and the corresponding recognition label is combined to perform parameter adjustment on the recognition model.
Optionally, feature intersection is performed on each original attribute feature contained in the object sample to obtain each intersection attribute feature, a corresponding candidate feature set is generated based on each intersection attribute feature, and when a target feature set meeting a preset gain condition is determined from each generated candidate feature set, the training module is further configured to:
performing pairwise feature intersection on the original attribute features contained in the object sample to obtain the corresponding intersection attribute features, and generating a corresponding candidate feature set based on each intersection attribute feature;
iteratively executing the following operations until no candidate feature set meets the preset gain condition, and outputting the finally determined target feature set:
determining a target feature set meeting the preset gain condition from each current candidate feature set;
performing feature intersection between the highest-order intersection attribute feature in the target feature set and each original attribute feature, respectively, to obtain corresponding new intersection attribute features;
based on each new cross attribute feature, a corresponding new candidate feature set is generated.
Optionally, when determining, from each current candidate feature set, a target feature set that meets the preset gain condition, the training module is further configured to:
respectively inputting each current candidate feature set into a recognition model to obtain a recognition result of the corresponding candidate feature set, and respectively obtaining the gain of the corresponding candidate feature set based on each recognition result and the recognition tag;
and taking the candidate feature set with the smallest gain as a target feature set based on the obtained gains.
Optionally, the current candidate feature sets are respectively input into the recognition model, recognition results of the corresponding candidate feature sets are obtained, and when gains of the corresponding candidate feature sets are obtained based on the recognition results and the recognition labels, the training module is further configured to:
evenly dividing the currently determined candidate feature sets into a preset number of feature groups;
for the preset plurality of feature groups, the following operations are respectively executed:
inputting each candidate feature set in one feature group into the recognition model, and performing parameter adjustment on the feature model parameters respectively associated with each cross attribute feature contained in the corresponding candidate feature set in the recognition model;
after the feature model parameters have been adjusted, obtaining the recognition results corresponding to the candidate feature sets in the feature group;
and determining gain values corresponding to candidate feature sets in the feature group based on the identification results and the identification labels.
Optionally, when parameter adjustment is performed on model parameters associated with each cross attribute feature included in the corresponding candidate feature set in the identification model, the training module is further configured to:
determining the cross attribute features reaching the current highest order from all the cross attribute features contained in the corresponding candidate feature sets;
and carrying out parameter adjustment on the characteristic model parameters associated with the cross attribute characteristics reaching the current highest order.
Optionally, when the predicted recognition result corresponding to the object sample is obtained based on the determined target feature set and the original attribute features, the training module is further configured to:
Performing dimension reduction processing on each cross attribute feature contained in the determined target feature set to obtain corresponding dimension reduced cross attribute features;
performing feature stitching on the cross attribute features subjected to the dimension reduction and the original attribute features to obtain sample stitching features;
and performing nonlinear conversion on the sample stitching features based on a preset activation function, to obtain a predicted recognition result corresponding to the object sample.
Optionally, when the parameter adjustment is performed on the identification model, the training module is further configured to:
determining a loss value corresponding to the one object sample based on the predicted identification result and the identification tag;
and performing parameter adjustment, based on the loss value, on each recognition model parameter associated with the stitching feature in the recognition model.
In one aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores program code that, when executed by the processor, causes the processor to perform any one of the above-mentioned object recognition methods or training methods of a recognition model.
In one aspect, embodiments of the present application provide a computer storage medium storing computer instructions that, when executed on a computer, cause the computer to perform the steps of any one of the object recognition methods or the training method of the recognition model described above.
In one aspect, embodiments of the present application provide a computer program product comprising computer instructions stored in a computer-readable storage medium; when the processor of the electronic device reads the computer instructions from the computer-readable storage medium, the processor executes the computer instructions, causing the electronic device to perform the steps of any one of the object recognition methods or training methods of the recognition model described above.
Due to the adoption of the technical scheme, the embodiment of the application has at least the following technical effects:
The original attribute features of an object to be identified are acquired and input into a trained recognition model; a target feature set of the object to be identified is determined based on those original attribute features; dimension reduction processing is performed on each cross attribute feature contained in the target feature set to obtain the corresponding reduced cross attribute features; the reduced cross attribute features are stitched with the original attribute features to obtain a target stitching feature; and nonlinear conversion is performed on the target stitching feature based on a preset activation function to obtain the recognition result corresponding to the object to be identified. In this way, a target feature set containing high-order cross attribute features is obtained by feature crossing of the original attribute features of the object to be identified, and introducing this target feature set into the recognition model improves the model's recognition accuracy and generalization capability.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic diagram of an application scenario in an embodiment of the present application;
FIG. 2A is a flowchart of a training method for identifying a model according to an embodiment of the present application;
FIG. 2B is an exemplary diagram of the original attribute features in an embodiment of the present application;
FIG. 2C is an exemplary diagram of generating a candidate feature set in an embodiment of the application;
FIG. 2D is an exemplary diagram of determining cross attribute characteristics in an embodiment of the present application;
FIG. 2E is an exemplary diagram of feature crossing in an embodiment of the application;
FIG. 2F is an exemplary diagram of generating a target feature set in an embodiment of the application;
FIG. 2G is a schematic diagram of a deep & wide network according to an embodiment of the present application;
FIG. 3 is a flowchart of an object recognition method according to an embodiment of the present application;
FIG. 4 is a first exemplary diagram of an object recognition method according to an embodiment of the present application;
FIG. 5 is a second exemplary diagram of an object recognition method according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of a training device for identifying a model according to an embodiment of the present application;
FIG. 7 is a schematic diagram of an object recognition device according to an embodiment of the present application;
fig. 8 is a schematic diagram of a hardware composition structure of an electronic device to which the embodiment of the present application is applied.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be capable of operation in sequences other than those illustrated or otherwise described.
Some terms in the embodiments of the present application are explained below to facilitate understanding by those skilled in the art.
Object sample set: the object sample set comprises a plurality of object samples, and each object sample corresponds to the original attribute characteristic of the corresponding object and an identification tag corresponding to the object sample.
The object sample may be, for example, a transaction object sample, an online payment object, and the like, which is not limited in the embodiment of the present application.
Original attribute features: the original attribute features are first-order features after feature extraction on the object sample, and may be object attribute features, fund flow features, transaction pair features, and the like, which are not limited in the embodiment of the present application.
The object attribute features may be age, gender, place of birth, social activity, whether the object has been complained about historically, whether the object has exhibited abnormal behavior, and the like.
The fund flow features may include fund flow characteristics and transaction behavior characteristics. The fund flow characteristics indicate that the target object's transaction behavior differs markedly from its historical transaction behavior, with many fast-in, fast-out transfers; the transaction behavior characteristics capture the deceived party's transaction behavior, such as a sudden surge in payments.
The transaction pair features characterize transactions involving immediate large payments between parties whose relationship is sparse, for example non-friends, or newly added friends located in different cities.
It should be noted that, in the embodiment of the present application, the user is prompted in some form (such as a prompt interface, a prompt short message, or an authorization code) before user information is collected to obtain the original attribute features; the user's consent is obtained, and the collected original attribute features are not stored and can be deleted at any time.
In addition, it should be noted that, in the embodiment of the present application, the acquisition and use of the related data are legal.
Cross-attribute feature: an attribute feature generated by performing feature crossing on original attribute features.
Feature crossing combines at least two original attribute features, e.g., crossing age with gender, so as to exploit the correlation between age and gender.
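As a toy illustration only (feature names and values hypothetical), a second-order cross of two categorical features can be built as the Cartesian combination of their possible values:
```python
from itertools import product

def cross(values_a, values_b):
    """Second-order feature cross: every pairwise combination of the two
    features' possible values becomes a new binary indicator feature."""
    return [f"{a}&{b}" for a, b in product(values_a, values_b)]

# e.g., crossing an age bucket with gender
age_buckets = ["<18", "18-30", ">30"]
genders = ["female", "male"]
print(cross(age_buckets, genders))
# ['<18&female', '<18&male', '18-30&female', '18-30&male', '>30&female', '>30&male']
```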
Gain: used to evaluate the effect of the recognition model; it may be, for example, a loss value, which is not limited in the embodiment of the present application.
Recognition result: the recognition result may be abnormal or normal. An abnormal result indicates that the object sample is an abnormal object exhibiting abnormal behavior; a normal result indicates that the object sample is a non-abnormal object without abnormal behavior.
Target object: the target object in the embodiment of the application can be an abnormal object.
deep & wide model: a framework combining deep learning and machine learning. The wide part is a machine learning model, typically logistic regression; the deep part is a deep learning network, such as a fully connected neural network.
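A rough sketch of such a combined framework follows, assuming (as is common for wide & deep models, but not fixed by this disclosure) that the two parts are summed before a sigmoid output; all dimensions and names are illustrative:
```python
import torch
import torch.nn as nn

class WideAndDeep(nn.Module):
    """Minimal deep & wide sketch: a linear (logistic-regression-style) wide
    part over sparse cross features, plus a small fully connected deep part
    over dense features; their outputs are summed before the sigmoid."""
    def __init__(self, wide_dim: int, deep_dim: int, hidden: int = 64):
        super().__init__()
        self.wide = nn.Linear(wide_dim, 1)      # wide part: logistic regression
        self.deep = nn.Sequential(              # deep part: fully connected net
            nn.Linear(deep_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, wide_x, deep_x):
        return torch.sigmoid(self.wide(wide_x) + self.deep(deep_x))

model = WideAndDeep(wide_dim=100, deep_dim=16)
prob = model(torch.zeros(1, 100), torch.zeros(1, 16))
print(prob)  # probability that the object is abnormal
```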
Social payment: payments between individual users, including but not limited to friend transfers, code-scanning transfers, and red packets.
Social unusual behavior: an illegal actor guiding a user to make payments within a social payment scenario.
Artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and extend human intelligence, sense the environment, acquire knowledge and use the knowledge to obtain optimal results. In other words, artificial intelligence is an integrated technology of computer science that attempts to understand the essence of intelligence and to produce a new intelligent machine that can react in a similar way to human intelligence. Artificial intelligence, i.e. research on design principles and implementation methods of various intelligent machines, enables the machines to have functions of sensing, reasoning and decision.
The artificial intelligence technology is a comprehensive subject, and relates to the technology with wide fields, namely the technology with a hardware level and the technology with a software level. Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
Machine Learning (ML) is a multi-domain interdisciplinary, involving multiple disciplines such as probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, etc. It is specially studied how a computer simulates or implements learning behavior of a human to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve own performance. Machine learning is the core of artificial intelligence, a fundamental approach to letting computers have intelligence, which is applied throughout various areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, confidence networks, reinforcement learning, transfer learning, induction learning, teaching learning, and the like.
With research and advances in artificial intelligence technology, AI is being studied and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, drones, robots, smart healthcare, and smart customer service. It is believed that, with the development of technology, artificial intelligence will be applied in ever more fields and deliver increasing value.
Currently, with the development of online payment technology, a large number of transactions must be responded to in real time every day. To improve the safety of online payment, it is therefore necessary to identify whether the object to be identified is a target object before a transaction completes.
In the related art, when object recognition is performed, whether an object to be recognized is a target object can be predicted through decision rules and a scoring card model.
The decision rule approach manually constructs first-order features, such as statistical features, through case analysis and expert experience, and then derives effective feature-rule combinations based on business experience or a decision tree model; if an object to be detected satisfies such a rule combination, it is considered at risk of being defrauded.
Moreover, the rules extracted from a decision tree are narrow: the tree learns greedily by maximizing gain, its root-node feature is unique, the learned features are homogeneous, and the feature thresholds overfit the training samples.
However, because target objects and normal objects often share relatively similar features, a recognition model trained only on simple decision rules is prone to misjudgment. For example, a student may find a free red-packet giveaway through an online channel and, induced by an illegal actor's tactics, make multiple large online payments, after which the target object receives the amounts and withdraws them; but merchants also make large withdrawals, so the related-art recognition model may determine a merchant to be a target object.
In the embodiment of the application, the original attribute features of an object to be identified are acquired and input into a trained recognition model; a target feature set of the object to be identified is determined based on those original attribute features; dimension reduction processing is performed on each cross attribute feature contained in the target feature set to obtain the corresponding reduced cross attribute features; the reduced cross attribute features are stitched with the original attribute features to obtain a target stitching feature; and nonlinear conversion is performed on the target stitching feature based on a preset activation function to obtain the recognition result corresponding to the object to be identified. By introducing an automatic feature-screening scheme and a deep learning framework, more effective high-order cross attribute features are mined from the data, so that recognition accuracy is ensured while the time complexity of the recognition model is reduced as far as possible.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and not for limitation of the present application, and that the embodiments of the present application and the features of the embodiments may be combined with each other without conflict.
Fig. 1 is a schematic diagram of an application scenario in an embodiment of the present application. The application scenario schematic includes a client 110 and a server 120. Communication between the client 110 and the server 120 may be through a communication network.
A target application with an online payment function is pre-installed in the client 110, although the target application's functions are not limited to online payment. The target application may be a pre-installed client application, a web application, an applet, and the like. The client 110 may include one or more processors, memory, I/O interfaces for interacting with the server 120, a display screen, and the like. Clients 110 include, but are not limited to, mobile phones, computers, intelligent voice interaction devices, smart appliances, vehicle-mounted terminals, aircraft, and the like.
The server 120 is a background server corresponding to the target application, and provides services for the target application. The server 120 may include one or more processors, memory, I/O interfaces to interact with the clients 110, and so forth. The server 120 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or may be a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (Content Delivery Network, CDN), basic cloud computing services such as big data and an artificial intelligence platform. The client 110 and the server 120 may be directly or indirectly connected through wired or wireless communication, and embodiments of the present application are not limited herein.
The training method of the recognition model in the embodiment of the present application may be performed on the server 120. When the server 120 performs object recognition, it acquires the original attribute features of the object to be recognized and inputs them into the trained recognition model; determines a target feature set of the object to be recognized based on those original attribute features; performs dimension reduction processing on each cross attribute feature contained in the target feature set to obtain the corresponding reduced cross attribute features; performs feature stitching on the reduced cross attribute features and the original attribute features to obtain a target stitching feature; and performs nonlinear conversion on the target stitching feature based on a preset activation function to obtain the recognition result corresponding to the object to be recognized.
The object recognition method in the embodiment of the application can be applied to online payment scenarios. Because an online payment scenario involves a large number of transactions that must be responded to in real time every day, identifying whether an object to be identified is a target object requires both recognition accuracy and recognition efficiency.
The embodiment of the application can be applied to various scenes, including but not limited to cloud technology, artificial intelligence, intelligent transportation, auxiliary driving and the like.
The scheme provided by the embodiment of the application relates to the technology of training an artificial intelligence recognition model and the like, and is specifically described by the following embodiments:
The following describes the training method of a recognition model according to an embodiment of the present application with reference to the accompanying drawings. The method may be applied to the server 120 shown in fig. 1. Referring to fig. 2A, which shows a flowchart of the training method of a recognition model according to an embodiment of the present application, the specific training flow is as follows:
s20: a sample set of objects is obtained.
Wherein each object sample comprises: each original attribute characteristic of the corresponding object and an identification tag obtained after the object is identified.
In the embodiment of the application, an object sample set is acquired. The set contains at least a plurality of object samples; each object sample carries the original attribute features of the corresponding object, together with an identification tag obtained after the object has been identified.
For example, referring to fig. 2B, an exemplary diagram of original attribute features in the embodiment of the present application, the original attribute features corresponding to an object sample are: age "18", gender "female", social activity "active", fund flow feature "fast-in, fast-out transactions", and transaction pair feature "non-friend in a different city, large transfer"; the identification tag corresponding to the object sample is abnormal.
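Purely for illustration, such an object sample might be represented as a plain record (all field names below are hypothetical, not taken from this disclosure):
```python
object_sample = {
    "features": {
        "age": "18",
        "gender": "female",
        "social_activity": "active",
        "fund_flow": "fast-in, fast-out transactions",
        "transaction_pair": "non-friend in a different city, large transfer",
    },
    "label": "abnormal",  # identification tag obtained after the object was identified
}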
Optionally, to improve the data integrity of the object sample set, data preprocessing may be performed on the object samples in the embodiment of the present application. Specifically, for each object sample contained in the object sample set, the following operations are performed: acquiring each original attribute feature corresponding to the object sample, then performing missing-value filling, data type conversion, and continuous-feature discretization on those features, so as to obtain the corresponding preprocessed original attribute features.
First, the missing-value filling process is described. Specifically, the original attribute type corresponding to each original attribute feature is determined; the missing target attribute types are determined by comparison against the standard attribute types; and the original attribute features corresponding to each target attribute type are assigned a default value, yielding original attribute features with their missing values filled.
Next, the data type conversion process is described. Specifically, the data type corresponding to each original attribute feature is determined and converted into a standard data type, yielding the corresponding standardized original attribute feature.
Finally, the continuous-feature discretization process is described. Because some of the original attribute features may be continuous, they need to be discretized. When discretizing, different binning granularities can yield very different results, and manually setting the optimal granularity for each object sample would be costly.
It should be noted that the multi-granularity binning method in the embodiment of the present application can integrate the binning process into the training of the recognition model, so that the recognition model automatically selects the binning best suited to each original attribute feature.
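A sketch of what multi-granularity binning could look like, assuming each continuous feature is discretized at several candidate granularities at once so that training can later keep whichever binning yields the best gain; the granularities and equal-frequency edges are assumptions, not details from this disclosure:
```python
import numpy as np

def multi_granularity_bins(values, bin_counts=(4, 8, 16)):
    """Discretize one continuous feature at several granularities at once,
    returning one bin-index column per granularity."""
    values = np.asarray(values, dtype=float)
    out = {}
    for k in bin_counts:
        # equal-frequency bin edges; np.unique guards against duplicate edges
        edges = np.unique(np.quantile(values, np.linspace(0, 1, k + 1)[1:-1]))
        out[f"bins_{k}"] = np.digitize(values, edges)
    return out

transfer_amounts = [3, 18, 7, 250, 42, 9, 1200, 65]
print(multi_granularity_bins(transfer_amounts))
```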
S21: and carrying out iterative training on the recognition model to be trained based on the object sample set.
Taking an example of any extracted object sample (hereinafter referred to as an object sample a) in a round of iterative training process, the process of training the recognition model to be trained in the embodiment of the application is described as follows:
S211: and performing feature intersection on each original attribute feature contained in the object sample a to obtain each intersection attribute feature, respectively generating a corresponding candidate feature set based on each intersection attribute feature, and determining a target feature set with the gain meeting a preset gain condition from each generated candidate feature set.
Wherein each gain is obtained based on the identification result and the identification tag of the corresponding candidate feature set.
In the embodiment of the application, feature intersection is first performed on each original attribute feature contained in object sample a to obtain the intersection attribute features. Candidate feature sets, each containing at least the corresponding intersection attribute feature, are then generated. The gain corresponding to each candidate feature set is determined based on its recognition result and the identification tag, and the target feature set whose gain meets the preset gain condition is determined from the candidate feature sets based on those gains.
It should be noted that when a candidate feature set containing at least the corresponding intersection attribute feature is generated, the generated candidate feature set contains not only intersection attribute features of different orders but also each original attribute feature; this is not limited here.
Optionally, in the embodiment of the present application, a possible implementation manner is provided for determining the target feature set based on each original attribute feature, and the process for determining the target feature set in the embodiment of the present application is described below, which specifically includes:
s2111: and respectively performing pairwise feature intersection on the original attribute features contained in the object sample a to obtain the corresponding intersection attribute features, and respectively generating corresponding candidate feature sets based on each intersection attribute feature.
In the embodiment of the present application, since object sample a contains multiple original attribute features, the following operations are performed for each original attribute feature: intersecting the original attribute feature pairwise with each other original attribute feature to obtain the corresponding intersection attribute features; then, for each intersection attribute feature, generating a candidate feature set that contains at least that intersection attribute feature.
For example, referring to fig. 2C, an exemplary diagram of generating candidate feature sets in an embodiment of the present application: assume the original attribute features contained in object sample a are A, B, C, and D. Feature intersection of A and B yields intersection attribute feature AB; A and C yield AC; A and D yield AD; B and C yield BC; B and D yield BD; and C and D yield CD. Candidate feature set i1, generated based on AB, contains A, B, C, D, and AB; i2, generated based on AC, contains A, B, C, D, and AC; i3, generated based on AD, contains A, B, C, D, and AD; i4, generated based on BC, contains A, B, C, D, and BC; i5, generated based on BD, contains A, B, C, D, and BD; and i6, generated based on CD, contains A, B, C, D, and CD.
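This first round of candidate generation can be written compactly; the sketch below mirrors the A/B/C/D example and the set names i1 to i6 from fig. 2C (all code names are illustrative):
```python
from itertools import combinations

original = ["A", "B", "C", "D"]

# pairwise feature crossing: AB, AC, AD, BC, BD, CD
crosses = ["".join(pair) for pair in combinations(original, 2)]

# each candidate feature set keeps all original features plus one cross feature
candidate_sets = {f"i{k}": original + [c] for k, c in enumerate(crosses, start=1)}
print(candidate_sets["i1"])  # ['A', 'B', 'C', 'D', 'AB']
print(candidate_sets["i6"])  # ['A', 'B', 'C', 'D', 'CD']
```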
S2112: the following operations are executed in an iterative mode until each candidate feature set does not meet the preset gain condition, and finally determined target feature sets are output: determining a target feature set meeting a preset gain condition from each current candidate feature set, carrying out feature intersection on the intersection attribute features with the highest orders in the target feature set and each original attribute feature respectively to obtain corresponding new intersection attribute features, and generating corresponding new candidate feature sets based on each new intersection attribute feature respectively.
The following describes the process of generating a corresponding new candidate feature set in the embodiment of the present application, taking an iterative process as an example:
s2112-1: and determining a target feature set meeting a preset gain condition from the current candidate feature sets.
In the embodiment of the application, the gains corresponding to the candidate feature sets are respectively determined, the candidate feature set with the gain meeting the preset gain condition is determined from the current candidate feature sets based on the determined gains, and the determined candidate feature set is used as the target feature set.
Optionally, in the embodiment of the present application, a possible implementation manner is provided for determining the target feature set with the gain meeting the preset gain condition, which is specifically described below:
A1: and respectively inputting the current candidate feature sets into the recognition model to obtain recognition results of the corresponding candidate feature sets, and respectively obtaining gains of the corresponding candidate feature sets based on the recognition results and the recognition labels.
In the embodiment of the application, after each current candidate feature set is obtained, the following operations are respectively executed for each candidate feature set: and inputting the candidate feature set into the recognition model, obtaining a recognition result of the candidate feature set, and determining the gain of the candidate feature set based on the obtained recognition result and the recognition label corresponding to the object sample.
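Since the disclosure elsewhere takes the candidate feature set with the smallest gain as the target feature set, the selection step itself reduces to a minimum over the computed gains (the values below are illustrative):
```python
def pick_target_set(gains):
    """Select the candidate feature set whose gain (here, a loss value)
    is smallest, i.e., the one meeting the preset gain condition."""
    return min(gains, key=gains.get)

gains = {"i1": 0.41, "i2": 0.37, "i3": 0.52}  # gain per candidate feature set
print(pick_target_set(gains))                 # -> 'i2'
```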
Optionally, in the embodiment of the present application, a possible implementation manner is further provided for obtaining the gain corresponding to the candidate feature set, and in the following embodiment of the present application, the process of determining the corresponding gain is described, which specifically includes:
a11: and evenly dividing the currently determined candidate feature sets into a preset number of feature groups.
In the embodiment of the application, the number of the preset feature groups is determined, and each candidate feature set which is currently determined is equally divided into a plurality of preset feature groups based on the number of the feature groups, so that the number of the candidate feature sets contained in each feature group is the same.
For example, assuming that the number of preset feature groups is 3 and the number of currently determined candidate feature sets is 9, the number of candidate feature sets included in each feature group is 3.
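A sketch of this even split follows; a round-robin split is assumed here, as the disclosure only requires that each feature group end up with the same number of candidate feature sets:
```python
def split_into_groups(candidate_sets, num_groups):
    """Evenly divide the current candidate feature sets into a preset
    number of feature groups."""
    groups = [[] for _ in range(num_groups)]
    for idx, cand in enumerate(candidate_sets):
        groups[idx % num_groups].append(cand)
    return groups

candidate_sets = [f"i{k}" for k in range(1, 10)]  # 9 candidate feature sets
print(split_into_groups(candidate_sets, 3))       # 3 feature groups of 3 each
```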
A12: for a preset plurality of feature groups, the following operations are respectively executed: inputting each candidate feature set in one feature set into the recognition model respectively, and carrying out parameter adjustment on feature model parameters respectively associated with each cross attribute feature contained in the corresponding candidate feature set in the recognition model; when the parameters of each feature model are adjusted, obtaining the identification results corresponding to each candidate feature set in one candidate group; based on the identification results and the identification tags, gain values corresponding to candidate feature sets in one feature set are determined.
In the embodiment of the present application, the gain values corresponding to the candidate feature sets in each of the preset number of feature groups are obtained respectively. Taking one feature group (hereinafter referred to as feature group b) as an example, the process of determining those gain values is described below:
a121: and respectively inputting each candidate feature set in the feature group b into the recognition model, and carrying out parameter adjustment on feature model parameters respectively associated with each cross attribute feature contained in the corresponding candidate feature set in the recognition model.
In the embodiment of the present application, the following operations are performed for each candidate feature set contained in the feature group: inputting the candidate feature set into the recognition model and outputting the corresponding recognition result; determining, based on the recognition result and the identification tag, the gain of the candidate feature set for the recognition model; and performing parameter adjustment, based on the determined gain, on the feature model parameters respectively associated with each cross attribute feature contained in the candidate feature set, until the determined gain is minimized, at which point the parameter adjustment of the feature model parameters is deemed complete and the parameter-adjusted recognition model is obtained.
Optionally, in the embodiment of the present application, in order to evaluate the effect of a candidate feature set, the candidate feature set generally has to be used as input to the recognition model for training, and the effect of the recognition model is then evaluated on the test set and the validation set so as to screen out the target feature set best suited to the recognition model. However, modeling, training, and evaluating every cross attribute feature set consumes a large amount of computational resources, and because some features appearing in different cross attribute feature sets are identical, this process also incurs a large amount of unnecessary repeated computation. Therefore, the embodiment of the present application adopts a field-wise logistic regression (logarithmic probability regression) algorithm for evaluation: during model training, the weights of the existing cross attribute features are fixed, and only the weights of the newly added cross attribute features are learned. This provides a possible implementation for parameter adjustment; in the following, the parameter adjustment process is described taking candidate feature set a as an example, and specifically includes:
A1211: and determining the cross attribute features reaching the current highest order from the cross attribute features contained in the candidate feature set a.
In the embodiment of the present application, the order of each cross attribute feature contained in candidate feature set a is determined, and the cross attribute feature with the highest order is selected from the cross attribute features based on the determined orders.
For example, referring to fig. 2D, for an exemplary diagram for determining cross attribute features in the embodiment of the present application, it is assumed that each cross attribute feature included in the candidate feature set a is A, B, C, D, AB, ABC, where the order of the cross attribute feature A, B, C, D is first order, the order of the cross attribute feature AB is second order, and the order of the cross attribute feature ABC is third order, so that the cross attribute feature corresponding to the highest order is determined to be ABC.
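As a minimal illustration of step A1211 (an assumption of this sketch: a cross attribute feature is encoded as the tuple of original attribute names it combines, so its order equals the tuple length):

    def highest_order_feature(feature_set):
        # The order of a cross attribute feature equals the number of
        # original attributes it combines, i.e. the tuple length.
        return max(feature_set, key=len)

    candidate_a = [("A",), ("B",), ("C",), ("D",), ("A", "B"), ("A", "B", "C")]
    print(highest_order_feature(candidate_a))   # ('A', 'B', 'C'), i.e. ABC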
A1212: parameter adjustment is performed on the feature model parameters associated with the cross attribute feature reaching the current highest order.
In the embodiment of the present application, after the cross attribute feature reaching the current highest order is determined, only the feature model parameters associated with that feature are adjusted when the recognition model is parameter-adjusted, which reduces resource consumption.
For example, assume that the original attribute features are {A, B, C, D} and the generated candidate feature set a is {A, B, C, D, AB}. When evaluating whether candidate feature set a brings a gain to the recognition model, A, B, C, and D have already been traversed and evaluated in earlier iterations, so their feature model parameter weights have already been learned; the cross attribute feature reaching the current highest order is AB. In the present iteration, therefore, only the feature model parameters associated with the cross attribute feature AB need to be adjusted, that is, only the weight of AB needs to be trained in order to determine whether candidate feature set a brings a gain to the recognition model.
When adjusting the feature model parameters associated with the cross attribute feature reaching the current highest order, candidate feature set a is input into the recognition model, the recognition result is output, and the feature model parameters associated with that feature are adjusted based on the recognition result and the recognition label, so that the recognition model learns the weight corresponding to the cross attribute feature reaching the current highest order.
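The following Python sketch illustrates this field-wise idea under stated assumptions: the recognition model is reduced to a logistic-regression-style scorer with one scalar weight per cross attribute feature, the lower-order weights (hypothetical values) are frozen, and only the weight of the new highest-order feature ABC is trained.

    import torch

    # Hypothetical weights already learned for lower-order features in
    # earlier iterations; they stay frozen in this iteration.
    learned = {"A": 0.4, "B": -0.2, "C": 0.1, "D": 0.0, "AB": 0.3}
    frozen = {k: torch.tensor(v) for k, v in learned.items()}
    new_weight = torch.zeros(1, requires_grad=True)   # weight for "ABC"

    # Only the new weight is handed to the optimizer.
    optimizer = torch.optim.SGD([new_weight], lr=0.1)

    def predict(x):
        # x maps a feature name to its indicator value for one object sample.
        logit = sum(frozen[k] * x[k] for k in frozen) + new_weight * x["ABC"]
        return torch.sigmoid(logit)

    x = {"A": 1.0, "B": 1.0, "C": 1.0, "D": 0.0, "AB": 1.0, "ABC": 1.0}
    label = torch.tensor([1.0])
    loss = torch.nn.functional.binary_cross_entropy(predict(x), label)
    loss.backward()
    optimizer.step()   # adjusts only the weight of the newest cross feature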
A122: and when the parameters of each feature model are adjusted, obtaining the identification results corresponding to each candidate feature set in the feature group b.
In the embodiment of the application, when the parameter adjustment of each feature model parameter is completed, each candidate feature set contained in the feature group b is sequentially input into the recognition model again, and the recognition result corresponding to each candidate feature set is obtained.
For example, assuming that feature group b contains the candidate feature sets a1, a2, and a3, each of a1, a2, and a3 is input into the recognition model in turn, and the recognition result corresponding to each candidate feature set is obtained.
A123: based on the recognition results and the recognition labels, the gain values corresponding to the candidate feature sets in the feature group are determined.
In the embodiment of the present application, after the recognition results corresponding to the candidate feature sets are obtained, the following operation is performed for each candidate feature set in the feature group: the gain value of the candidate feature set is determined based on its recognition result and the recognition label. Learning with successive mini-batch gradient descent in this way effectively reduces training time: insignificant cross attribute features are gradually eliminated as training proceeds, while more important features are given more batches of data, which increases evaluation accuracy.
A2: and taking the candidate feature set with the smallest gain as a target feature set based on the obtained gains.
In the embodiment of the present application, after the gain values are obtained, the candidate feature set corresponding to the smallest gain is selected from the candidate feature sets based on the obtained gains, and the selected candidate feature set is used as the target feature set.
S2112-2: feature intersection is performed between the cross attribute feature with the highest order in the target feature set and each original attribute feature, obtaining the corresponding new cross attribute features.
In the embodiment of the present application, the target feature set contains at least the cross attribute features; the order of each cross attribute feature is determined, the cross attribute feature with the highest order is selected based on the determined orders, and feature intersection is performed between that feature and each original attribute feature, obtaining the corresponding new cross attribute features.
For example, referring to fig. 2E, an example diagram of feature intersection in the embodiment of the present application: assume that the cross attribute features contained in the target feature set are AB and ABC, where AB is second order and ABC is third order, so the highest-order cross attribute feature in the target feature set is ABC. Feature intersection of ABC with the original attribute feature A yields the new cross attribute feature ABCA; with B, the new cross attribute feature ABCB; with C, the new cross attribute feature ABCC; and with D, the new cross attribute feature ABCD.
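A small sketch of this crossing step, under the same tuple encoding assumed in the earlier sketch; note that duplicates such as ABCA are kept, mirroring the example above:

    def cross_with_originals(highest, originals):
        # Cross the highest-order feature with every original attribute.
        return [tuple(highest) + (o,) for o in originals]

    new_crosses = cross_with_originals(("A", "B", "C"), ["A", "B", "C", "D"])
    # [('A','B','C','A'), ('A','B','C','B'), ('A','B','C','C'), ('A','B','C','D')]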
S2112-3: based on each new cross attribute feature, a corresponding new candidate feature set is generated.
In the embodiment of the present application, the following operation is performed for each new cross attribute feature: a corresponding new candidate feature set is generated based on the new cross attribute feature. The method for generating the target feature set in the embodiment of the present application greatly reduces the search space while still yielding a target feature set with a good effect; the time complexity is reduced to O((d²/2)·k), where d is the number of original attribute features and k is the order at which the search stops, with d, k << n, so the time complexity of the process is acceptable.
The process of determining the target feature set in the embodiment of the present application is described below with a specific example. Referring to fig. 2F, an example diagram of generating the target feature set in the embodiment of the present application: assume the original attribute features are A, B, C, and D. First, the second-order cross attribute features AB, AC, …, CD are generated and evaluated, and the cross attribute feature AB with the smallest gain is selected. The third-order cross attribute features ABC and ABD are then generated based on AB, and the cross attribute feature ABC with the smallest gain is selected. The fourth-order cross attribute feature ABCD is generated based on ABC, and so on, until no candidate at a given order brings any gain to the recognition model, at which point the search stops.
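Putting the steps together, the following condensed Python sketch mirrors the greedy search of fig. 2F. It is an illustration, not the definitive implementation: evaluate_gain is a hypothetical callback that trains only the newest feature's weight (as in A1212) and returns the candidate set's gain (lower is better, mirroring the "smallest gain" selection above), and the stopping rule is an assumed gain threshold.

    from itertools import combinations

    def greedy_feature_search(originals, evaluate_gain, gain_threshold):
        base = [(o,) for o in originals]
        # Order 2: all d*(d-1)/2 pairwise crosses of the original attributes.
        candidates = [base + [a + b] for a, b in combinations(base, 2)]
        best = None
        while candidates:
            gains = [evaluate_gain(c) for c in candidates]
            best_gain = min(gains)
            if best_gain > gain_threshold:   # no candidate brings gain: stop
                break
            best = candidates[gains.index(best_gain)]
            top = max(best, key=len)         # highest-order cross so far
            # Next order: cross the winner with every original attribute.
            candidates = [best + [top + o] for o in base]
        return best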
S212: based on the target feature set and each original attribute feature, a predicted recognition result corresponding to the object sample a is obtained, and the corresponding recognition label is combined to carry out parameter adjustment on the recognition model.
In the embodiment of the application, based on each cross attribute feature in the target feature set and each original attribute feature, a predicted recognition result corresponding to the object sample a is obtained, a loss value is determined based on the predicted recognition result and the recognition tag corresponding to the object sample a, and parameter adjustment is performed on the recognition model based on the loss value.
Optionally, in the embodiment of the present application, a possible implementation manner is provided for determining a predicted recognition result corresponding to an object sample, which specifically includes:
S212-1-1: dimension reduction processing is performed on each cross attribute feature contained in the determined target feature set, obtaining the corresponding dimension-reduced cross attribute features.
In the embodiment of the present application, the following operation is performed for each cross attribute feature contained in the determined target feature set: dimension reduction processing is performed on the cross attribute feature, obtaining the dimension-reduced cross attribute feature.
Specifically, in the embodiment of the present application, the recognition network in the recognition model may be implemented as a deep & wide network. For example, refer to fig. 2G, a schematic diagram of the deep & wide network in the embodiment of the present application: each cross attribute feature contained in the target feature set is used as input on the deep side, and a DNN layer performs dimension reduction processing on each cross attribute feature to obtain an embedding, that is, each dimension-reduced cross attribute feature.
S212-1-2: and performing feature stitching on the cross attribute features after the dimension reduction and the original attribute features to obtain sample stitching features.
In the embodiment of the application, the cross attribute characteristics after the dimension reduction and the original attribute characteristics are subjected to characteristic splicing to obtain sample splicing characteristics.
Specifically, in the embodiment of the present application, since the recognition network in the recognition model may be implemented as a deep & wide network, as shown in fig. 2G, each first-order original attribute feature is used as a wide feature, and feature stitching is performed on the wide features and the deep features.
S212-1-3: based on a preset activation function, nonlinear conversion is carried out on the sample splicing characteristics, and a prediction recognition result corresponding to the object sample is obtained.
In the embodiment of the present application, a target feature of a preset dimension is obtained through a single-layer neural network, and nonlinear conversion is performed on the sample stitching feature based on the preset activation function, obtaining the predicted recognition result corresponding to the one object sample.
The target feature of the preset dimension obtained through the single-layer neural network may, for example, be a 64-dimensional feature; this is not limited.
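A minimal PyTorch sketch of steps S212-1-1 to S212-1-3, with hypothetical sizes and a dense input representation: each cross attribute feature arrives as a vector that the deep side reduces to a small embedding, the wide side carries the first-order original attribute features, and a single 64-unit hidden layer (the preset dimension above) feeds the final activation.

    import torch
    import torch.nn as nn

    class WideAndDeep(nn.Module):
        def __init__(self, cross_dim, num_cross, wide_dim, emb_dim=8, hidden=64):
            super().__init__()
            self.reduce = nn.Linear(cross_dim, emb_dim)   # DNN dimension reduction
            self.hidden = nn.Linear(num_cross * emb_dim + wide_dim, hidden)
            self.out = nn.Linear(hidden, 1)

        def forward(self, cross_feats, wide_feats):
            # cross_feats: (batch, num_cross, cross_dim); wide_feats: (batch, wide_dim)
            emb = torch.relu(self.reduce(cross_feats))    # dimension-reduced crosses
            emb = emb.flatten(start_dim=1)
            joint = torch.cat([emb, wide_feats], dim=1)   # feature stitching
            target = torch.relu(self.hidden(joint))       # 64-dim target feature
            return torch.sigmoid(self.out(target))        # predicted recognition result

    model = WideAndDeep(cross_dim=100, num_cross=2, wide_dim=4)
    pred = model(torch.rand(1, 2, 100), torch.rand(1, 4))  # shape (1, 1)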
The following describes a process of parameter adjustment for an identification network in an identification model in an embodiment of the present application, which specifically includes:
S212-2-1: and determining a loss value corresponding to the object sample a based on the predicted identification result and the identification tag.
In the embodiment of the application, after the predicted recognition result is obtained, the loss value corresponding to the object sample a is determined based on the predicted recognition result and the recognition tag.
S212-2-2: and carrying out parameter adjustment on each recognition model parameter associated with the splicing characteristic in the recognition model based on the loss value.
In the embodiment of the present application, since this process adjusts the recognition model parameters corresponding to the recognition network, each recognition model parameter associated with the stitched feature in the recognition model is adjusted based on the loss value.
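Continuing the WideAndDeep sketch above, a single parameter-adjustment step for the recognition network might look as follows (an assumed binary recognition label; the model variable comes from the earlier sketch):

    import torch

    # Requires the `model` defined in the WideAndDeep sketch above.
    criterion = torch.nn.BCELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    label = torch.ones(1, 1)              # recognition label for object sample a
    prediction = model(torch.rand(1, 2, 100), torch.rand(1, 4))
    loss = criterion(prediction, label)   # loss value for object sample a
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                      # adjusts parameters tied to the stitched feature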
Further, in the embodiment of the present application, after training of the recognition model is completed, the KS value and the AUC may be used on the test set to evaluate the effect of the trained recognition model.
The KS value measures the discrimination of the recognition model; the higher the discrimination, the higher the detection accuracy of the recognition model. The KS value is calculated as follows:
KS = max{ |cum(bad_rate) - cum(good_rate)| }
Specifically, in the embodiment of the present application, the object samples contained in the test set are binned, and the numbers of positive and negative object samples in each bin are determined. cum(bad_rate) represents the proportion of the accumulated number of negative object samples up to each bin to the total number of negative object samples, and cum(good_rate) represents the proportion of the accumulated number of positive object samples to the total number of positive object samples. The absolute value of the difference between the accumulated negative-sample proportion and the accumulated positive-sample proportion is computed for each bin, yielding the KS curve; the KS value is the maximum of these absolute values.
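A numpy sketch of this KS computation, assuming model scores and binary labels (1 = positive object sample, 0 = negative object sample) for the test set:

    import numpy as np

    def ks_value(scores, labels, n_bins=10):
        order = np.argsort(scores)
        labels = np.asarray(labels)[order]
        bins = np.array_split(labels, n_bins)            # bin the object samples
        bad = np.array([np.sum(b == 0) for b in bins])   # negatives per bin
        good = np.array([np.sum(b == 1) for b in bins])  # positives per bin
        cum_bad_rate = np.cumsum(bad) / bad.sum()        # cum(bad_rate)
        cum_good_rate = np.cumsum(good) / good.sum()     # cum(good_rate)
        return np.max(np.abs(cum_bad_rate - cum_good_rate))

    print(ks_value(np.random.rand(1000), np.random.randint(0, 2, 1000)))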
The area under the curve (AUC) is the area enclosed by the ROC curve and the coordinate axes, and is used to evaluate the performance of the recognition model. The ROC curve takes the false positive rate (FPR) as the X axis and the true positive rate (TPR) as the Y axis; for each threshold, the corresponding (FPR, TPR) point is computed, yielding the ROC curve. The larger the AUC value, the better the model effect.
Wherein TPR characterizes the proportion of samples that are actually positive and are correctly judged as positive.
Wherein FPR characterizes the proportion, among all samples that are actually negative, of those erroneously judged as positive.
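Likewise, the ROC curve and AUC can be obtained with scikit-learn's standard routines; here y_score stands in for the recognition model's predicted probabilities on the test set and y_true for the recognition labels (both illustrative):

    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    y_true = np.random.randint(0, 2, 1000)     # recognition labels (illustrative)
    y_score = np.random.rand(1000)             # predicted probabilities (illustrative)
    fpr, tpr, thresholds = roc_curve(y_true, y_score)  # one (FPR, TPR) per threshold
    print(f"AUC = {roc_auc_score(y_true, y_score):.3f}")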
In the embodiment of the present application, on the premise that the recognition model can still be deployed online, increasing the richness of the features input into the recognition model and the complexity of the recognition model greatly improves the accuracy of transaction recognition and reduces transaction risk and loss of funds.
Based on the foregoing embodiments, the following describes a process of performing object recognition based on a recognition model in the embodiment of the present application, and referring to fig. 3, a flow chart of an object recognition method in the embodiment of the present application is shown, which specifically includes:
S30: obtaining each original attribute feature of the object to be identified.
In the embodiment of the application, the feature extraction is carried out on the object to be identified, and each original attribute feature of the object to be identified is obtained.
The object to be identified may be a merchant, a transaction party, a transacted party, a student, etc., which is not limited.
S31: inputting each original attribute characteristic of the object to be identified into the trained identification model, and determining a target characteristic set of the object to be identified based on each original attribute characteristic of the object to be identified.
In the embodiment of the present application, each original attribute feature of the object to be identified is input into the trained recognition model, and the target feature set corresponding to these original attribute features is determined based on the original attribute features and the correspondence between original attribute features and target feature sets learned in advance by the recognition model.
S32: respectively performing dimension reduction processing on each cross attribute feature contained in the target feature set of the object to be identified, to obtain the corresponding dimension-reduced cross attribute features.
In the embodiment of the application, because the target feature set contains each cross attribute feature, the cross attribute features are subjected to dimension reduction processing to obtain the corresponding dimension reduced cross attribute features.
Specifically, deep learning frameworks such as Word2Vec or AutoEncoder may be used to perform dimension reduction processing on each cross attribute feature, obtaining the corresponding dimension-reduced cross attribute features.
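As one illustration of the AutoEncoder option (hypothetical sizes), the encoder output would serve as the dimension-reduced cross attribute feature after reconstruction training:

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(100, 8), nn.ReLU())   # 100-dim -> 8-dim
    decoder = nn.Sequential(nn.Linear(8, 100))              # used only for training
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))

    x = torch.rand(32, 100)                  # a batch of cross attribute features
    loss = nn.functional.mse_loss(decoder(encoder(x)), x)   # reconstruction loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    reduced = encoder(x)                     # (32, 8) dimension-reduced features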
S33: and performing feature stitching on the cross attribute features of the objects to be identified after the dimension reduction and the original attribute features of the objects to be identified to obtain target stitching features.
In the embodiment of the application, the cross attribute characteristics of the objects to be identified after the dimension reduction and the original attribute characteristics of the objects to be identified are subjected to characteristic splicing, so that the target splicing characteristics are obtained.
S34: and performing nonlinear conversion on the target splicing characteristics based on a preset activation function to obtain a recognition result corresponding to the object to be recognized.
In the embodiment of the present application, a target feature of a preset dimension is obtained through a single-layer neural network, and nonlinear conversion is performed on the target stitching feature based on the preset activation function, obtaining the recognition result corresponding to the object to be recognized. When the recognition result is "abnormal object", the object to be recognized is an abnormal object, the current transaction is determined to carry abnormal risk, and instructions such as a reminder or an interception are sent to the client to prevent property loss of the client; when the recognition result is "normal object", the object to be recognized is a normal object, and a prompt message of "no abnormality detected" is sent to the client.
The target feature of the preset dimension obtained through the single-layer neural network may, for example, be a 64-dimensional feature; this is not limited.
According to the embodiment of the application, when the object to be detected is identified based on the identification model, the accuracy of identification can be improved.
Based on the above embodiments, referring to fig. 4, a first exemplary diagram of an object recognition method according to an embodiment of the present application specifically includes:
First, data extraction is performed: an object sample set is extracted from a database. Feature screening is then performed on each object sample contained in the object sample set to obtain the target feature set corresponding to each object sample, and the recognition network of the recognition model is trained and verified based on each target feature set and the corresponding original attribute information, where the recognition network may be a deep & wide model.
And then, deploying the trained recognition model into a distributed storage system, and deploying a corresponding recognition strategy in the distributed storage system.
Finally, when the object to be detected conducts a transaction, the object to be detected is recognized based on the recognition model and the recognition strategy; when the object to be detected is determined to be a target object, a reminder is issued or the current transaction of the object to be detected is intercepted, so that the object to be detected is made aware that the current transaction is risky and should be conducted with caution.
Based on the foregoing embodiments, a specific example is used to describe the object recognition method in the embodiment of the present application, and referring to fig. 5, a second exemplary diagram of the object recognition method in the embodiment of the present application specifically includes:
First, the original attribute features corresponding to the merchant are obtained, namely an age of "25", a birthplace of "City A", a social activity level of "inactive", a complaint history of "never complained about", a fund flow feature of "no fast-in, fast-out transactions", and a transaction counterparty feature of "friends, no large payments".
These original attribute features are input into the trained recognition model; a target feature set containing cross attribute features is obtained by feature intersection of the original attribute features, the merchant is recognized based on the target feature set and the original attribute features, and the recognition result corresponding to the merchant is determined to be "normal object".
Finally, since the recognition result corresponding to the merchant is "normal object", a prompt message of "no abnormality detected" is sent to the client corresponding to the merchant.
Based on the same inventive concept, an embodiment of the present application further provides a training apparatus for a recognition model. The principle by which the apparatus solves the problem is similar to that of the method of the foregoing embodiments, so the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Referring to fig. 6, a schematic structural diagram of a training apparatus for identifying a model according to an embodiment of the present application includes an obtaining module 600 and a training module 610.
An obtaining module 600, configured to obtain a set of object samples, each object sample including: each original attribute feature of the corresponding object and an identification tag obtained after the object is identified;
the training module 610 is configured to iteratively train the recognition model to be trained based on the object sample set; in a round of iterative training process, the following operations are executed for one extracted object sample:
performing feature intersection on each original attribute feature contained in one object sample to obtain each cross attribute feature, respectively generating a corresponding candidate feature set based on each cross attribute feature, and determining a target feature set with gains meeting a preset gain condition from each generated candidate feature set, wherein each gain is obtained based on an identification result and an identification tag of the corresponding candidate feature set;
based on the target feature set and each original attribute feature, a predicted recognition result corresponding to one object sample is obtained, and the corresponding recognition label is combined to carry out parameter adjustment on the recognition model.
Optionally, when performing feature intersection on each original attribute feature contained in one object sample to obtain each cross attribute feature, generating a corresponding candidate feature set based on each cross attribute feature, and determining, from each generated candidate feature set, a target feature set that meets a preset gain condition, the training module 610 is further configured to:
perform pairwise feature intersection on each original attribute feature contained in one object sample to obtain corresponding cross attribute features, and generate corresponding candidate feature sets based on the cross attribute features respectively;
the following operations are executed in an iterative mode until each candidate feature set does not meet the preset gain condition, and finally determined target feature sets are output:
determining a target feature set meeting a preset gain condition from the current candidate feature sets;
perform feature intersection between the cross attribute feature with the highest order in the target feature set and each original attribute feature respectively, to obtain corresponding new cross attribute features;
based on each new cross attribute feature, a corresponding new candidate feature set is generated.
Optionally, when determining the target feature set that meets the preset gain condition from the current candidate feature sets, the training module 610 is further configured to:
Respectively inputting each current candidate feature set into the recognition model to obtain a recognition result of the corresponding candidate feature set, and respectively obtaining the gain of the corresponding candidate feature set based on each recognition result and the recognition label;
and taking the candidate feature set with the smallest gain as a target feature set based on the obtained gains.
Optionally, when the current candidate feature sets are respectively input into the recognition model to obtain the recognition results of the corresponding candidate feature sets, and the gains of the corresponding candidate feature sets are respectively obtained based on the recognition results and the recognition labels, the training module 610 is further configured to:
equally divide each currently determined candidate feature set into a preset number of feature groups;
for each of the preset feature groups, respectively perform the following operations:
inputting each candidate feature set in one feature group into the recognition model, and performing parameter adjustment on feature model parameters respectively associated with each cross attribute feature contained in the corresponding candidate feature set in the recognition model;
when the parameter adjustment of each feature model parameter is completed, obtaining identification results corresponding to each candidate feature set in the feature group;
based on the identification results and the identification tags, determining gain values corresponding to each candidate feature set in the feature group.
Optionally, when performing parameter adjustment on the feature model parameters respectively associated with each cross attribute feature contained in the corresponding candidate feature set in the recognition model, the training module 610 is further configured to:
determining the cross attribute features reaching the current highest order from all the cross attribute features contained in the corresponding candidate feature sets;
and carrying out parameter adjustment on the characteristic model parameters associated with the cross attribute characteristics reaching the current highest order.
Optionally, when obtaining a predicted recognition result corresponding to an object sample based on the determined target feature set and each original attribute feature, the training module 610 is further configured to:
performing dimension reduction processing on each cross attribute feature contained in the determined target feature set to obtain corresponding dimension reduced cross attribute features;
performing feature stitching on the cross attribute features subjected to dimension reduction and the original attribute features to obtain sample stitching features;
based on a preset activation function, nonlinear conversion is carried out on the sample splicing characteristics, and a prediction recognition result corresponding to the object sample is obtained.
Optionally, when the parameters of the recognition model are adjusted, the training module 610 is further configured to:
determining a loss value corresponding to an object sample based on the predicted recognition result and the recognition tag;
And carrying out parameter adjustment on each recognition model parameter associated with the splicing characteristic in the recognition model based on the loss value.
Similarly, the principle by which the object recognition apparatus solves the problem is similar to that of the method of the foregoing embodiments, so the implementation of the apparatus may refer to the implementation of the method, and repeated description is omitted.
Referring to fig. 7, a schematic structural diagram of an object recognition apparatus according to an embodiment of the present application, which includes an obtaining module 700, a first determining module 710, a dimension reduction module 720, a stitching module 730, and a second determining module 740.
An obtaining module 700, configured to obtain each original attribute feature of the object to be identified;
a first determining module 710, configured to input each original attribute feature of the object to be identified into a trained identification model, and determine a target feature set of the object to be identified based on each original attribute feature of the object to be identified;
the dimension reduction module 720 is configured to perform dimension reduction processing on each cross attribute feature contained in the target feature set of the object to be identified, so as to obtain corresponding dimension reduced cross attribute features;
the stitching module 730 is configured to perform feature stitching on the cross attribute features of the objects to be identified after the dimension reduction and the original attribute features of the objects to be identified, so as to obtain target stitching features;
The second determining module 740 is configured to perform nonlinear conversion on the target stitching feature based on a preset activation function, so as to obtain a recognition result corresponding to the object to be recognized.
Those skilled in the art will appreciate that the various aspects of the application may be implemented as a system, method, or program product. Accordingly, aspects of the application may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit," "module," or "system."
In some possible embodiments, the object recognition device or training device of the recognition model according to the application may comprise at least a processor and a memory. Wherein the memory stores program code that, when executed by the processor, causes the processor to perform the steps in the object recognition method or training method of recognition models according to various exemplary embodiments of the application described in this specification. For example, the processor may perform the steps as shown in fig. 2A or fig. 3.
The embodiment of the application also provides electronic equipment based on the same conception as the embodiment of the method. In one embodiment, the electronic device may be a server 120 as shown in fig. 1, and in this embodiment, the electronic device may be configured as shown in fig. 8, including a memory 801, a communication module 803, and one or more processors 802.
A memory 801 for storing a computer program for execution by the processor 802. The memory 801 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, a program required for running an instant communication function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
The memory 801 may be a volatile memory, such as a random access memory (RAM); the memory 801 may also be a non-volatile memory, such as a read-only memory, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); or the memory 801 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 801 may also be a combination of the above memories.
The processor 802 may include one or more central processing units (central processing unit, CPU) or digital processing units, etc. A processor 802 for implementing the above-described object recognition method or training method of the recognition model when calling the computer program stored in the memory 801.
The communication module 803 is used for communicating with a terminal device and other servers.
The specific connection medium between the memory 801, the communication module 803, and the processor 802 is not limited in the embodiment of the present application. In fig. 8, the memory 801 and the processor 802 are connected by a bus 804, depicted as a bold line; the connections between the other components are merely illustrative and not limiting. The bus 804 may be classified into an address bus, a data bus, a control bus, and the like. For ease of description, only one bold line is depicted in fig. 8, but this does not mean that there is only one bus or only one type of bus.
The memory 801 stores therein a computer storage medium in which computer executable instructions for implementing the object recognition method or the training method of the recognition model according to the embodiment of the present application are stored. The processor 802 is configured to perform the object recognition method or training method of the recognition model described above, as shown in fig. 2A or 3.
In some possible embodiments, aspects of the object recognition method or training method of the recognition model provided by the present application may also be implemented in the form of a program product comprising program code for causing a computer device to perform the steps of the object recognition method or training method of the recognition model according to the various exemplary embodiments of the present application described herein above when the program product is run on a computer device, e.g. the computer device may perform the steps as shown in fig. 2A or fig. 3.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code and may run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with a command execution system, apparatus, or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with a command execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's equipment, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the elements described above may be embodied in one element in accordance with embodiments of the present application. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this is not required to either imply that the operations must be performed in that particular order or that all of the illustrated operations be performed to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step to perform, and/or one step decomposed into multiple steps to perform.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (18)

1. An object recognition method, comprising:
acquiring each original attribute feature of an object to be identified;
inputting each original attribute feature of the object to be identified into a trained recognition model, and determining a target feature set of the object to be identified based on each original attribute feature of the object to be identified;
performing dimension reduction processing on each cross attribute feature contained in the target feature set of the object to be identified respectively, to obtain corresponding dimension-reduced cross attribute features;
performing feature stitching on the dimension-reduced cross attribute features of the object to be identified and the original attribute features of the object to be identified, to obtain a target stitching feature;
and performing nonlinear conversion on the target stitching feature based on a preset activation function, to obtain a recognition result corresponding to the object to be recognized.
2. A method of training an identification model, comprising:
obtaining a set of object samples, each object sample comprising: each original attribute feature of the corresponding object and an identification tag obtained after identification of the object;
performing iterative training on the recognition model to be trained based on the object sample set; in a round of iterative training process, the following operations are executed for one extracted object sample:
performing feature intersection on each original attribute feature contained in the object sample to obtain each cross attribute feature, respectively generating a corresponding candidate feature set based on each cross attribute feature, and determining a target feature set with gain meeting a preset gain condition from each generated candidate feature set, wherein each gain is obtained based on an identification result and an identification tag of the corresponding candidate feature set;
based on the target feature set and the original attribute features, a predicted recognition result corresponding to the object sample is obtained, and the corresponding recognition label is combined to perform parameter adjustment on the recognition model.
3. The method of claim 2, wherein performing feature intersection on each original attribute feature contained in the one object sample to obtain each cross attribute feature, generating a corresponding candidate feature set based on each cross attribute feature, and determining a target feature set satisfying a preset gain condition from each generated candidate feature set, comprises:
performing pairwise feature intersection on each original attribute feature contained in the one object sample to obtain corresponding cross attribute features, and generating corresponding candidate feature sets based on the cross attribute features respectively;
the following operations are executed in an iterative mode until each candidate feature set does not meet the preset gain condition, and finally determined target feature sets are output:
determining a target feature set meeting the preset gain condition from each current candidate feature set;
performing feature intersection between the cross attribute feature with the highest order in the target feature set and each of the original attribute features respectively, to obtain corresponding new cross attribute features;
based on each new cross attribute feature, a corresponding new candidate feature set is generated.
4. The method of claim 3, wherein determining a target feature set that meets the preset gain condition from the current candidate feature sets comprises:
respectively inputting each current candidate feature set into a recognition model to obtain a recognition result of the corresponding candidate feature set, and respectively obtaining the gain of the corresponding candidate feature set based on each recognition result and the recognition tag;
And taking the candidate feature set with the smallest gain as a target feature set based on the obtained gains.
5. The method of claim 4, wherein inputting each current candidate feature set into the recognition model to obtain recognition results of the corresponding candidate feature set, and obtaining gains of the corresponding candidate feature set based on each recognition result and the recognition tag, respectively, comprises:
equally dividing each currently determined candidate feature set into a preset number of feature groups;
for each of the preset feature groups, respectively performing the following operations:
inputting each candidate feature set in one feature group into the recognition model, and performing parameter adjustment on feature model parameters respectively associated with each cross attribute feature contained in the corresponding candidate feature set in the recognition model;
when the parameter adjustment of each feature model parameter is completed, obtaining identification results corresponding to each candidate feature set in the feature group;
and determining gain values corresponding to each candidate feature set in the feature group based on the identification results and the identification tags.
6. The method of claim 5, wherein performing parameter adjustment on the feature model parameters respectively associated with each cross attribute feature contained in the corresponding candidate feature set in the recognition model comprises:
Determining the cross attribute features reaching the current highest order from all the cross attribute features contained in the corresponding candidate feature sets;
and carrying out parameter adjustment on the characteristic model parameters associated with the cross attribute characteristics reaching the current highest order.
7. The method according to any one of claims 2-6, wherein obtaining a predicted recognition result corresponding to the one object sample based on the determined target feature set and the original attribute features comprises:
performing dimension reduction processing on each cross attribute feature contained in the determined target feature set to obtain corresponding dimension reduced cross attribute features;
performing feature stitching on the cross attribute features subjected to the dimension reduction and the original attribute features to obtain sample stitching features;
and based on a preset activation function, performing nonlinear conversion on the sample splicing characteristics to obtain a predicted recognition result corresponding to the object sample.
8. The method of claim 7, wherein parameter tuning the recognition model comprises:
determining a loss value corresponding to the one object sample based on the predicted identification result and the identification tag;
And carrying out parameter adjustment on each recognition model parameter associated with the splicing characteristic in the recognition model based on the loss value.
9. An object recognition apparatus, comprising:
the acquisition module is used for acquiring each original attribute feature of the object to be identified;
the first determining module is used for inputting each original attribute feature of the object to be identified into a trained recognition model, and determining a target feature set of the object to be identified based on each original attribute feature of the object to be identified;
the dimension reduction module is used for respectively performing dimension reduction processing on each cross attribute feature contained in the target feature set of the object to be identified, to obtain corresponding dimension-reduced cross attribute features;
the stitching module is used for performing feature stitching on the dimension-reduced cross attribute features of the object to be identified and the original attribute features of the object to be identified, to obtain a target stitching feature;
and the second determining module is used for performing nonlinear conversion on the target stitching feature based on a preset activation function, to obtain a recognition result corresponding to the object to be recognized.
10. A training device for identifying a model, comprising:
An acquisition module for acquiring a set of object samples, each object sample comprising: each original attribute feature of the corresponding object and an identification tag obtained after identification of the object;
the training module is used for carrying out iterative training on the recognition model to be trained based on the object sample set; in a round of iterative training process, the following operations are executed for one extracted object sample:
performing feature intersection on each original attribute feature contained in the object sample to obtain each cross attribute feature, respectively generating a corresponding candidate feature set based on each cross attribute feature, and determining a target feature set with gain meeting a preset gain condition from each generated candidate feature set, wherein each gain is obtained based on an identification result and an identification tag of the corresponding candidate feature set;
based on the target feature set and the original attribute features, a predicted recognition result corresponding to the object sample is obtained, and the corresponding recognition label is combined to perform parameter adjustment on the recognition model.
11. The apparatus of claim 10, wherein when performing feature intersection on each original attribute feature contained in the one object sample to obtain each cross attribute feature, generating a corresponding candidate feature set based on each cross attribute feature, and determining, from each generated candidate feature set, a target feature set that meets a preset gain condition, the training module is further configured to:
perform pairwise feature intersection on each original attribute feature contained in the one object sample to obtain corresponding cross attribute features, and generate corresponding candidate feature sets based on the cross attribute features respectively;
the following operations are executed in an iterative mode until each candidate feature set does not meet the preset gain condition, and finally determined target feature sets are output:
determining a target feature set meeting the preset gain condition from each current candidate feature set;
perform feature intersection between the cross attribute feature with the highest order in the target feature set and each of the original attribute features respectively, to obtain corresponding new cross attribute features;
based on each new cross attribute feature, a corresponding new candidate feature set is generated.
12. The apparatus of claim 11, wherein when determining a target feature set that meets the preset gain condition from the current candidate feature sets, the training module is further configured to:
respectively inputting each current candidate feature set into a recognition model to obtain a recognition result of the corresponding candidate feature set, and respectively obtaining the gain of the corresponding candidate feature set based on each recognition result and the recognition tag;
And taking the candidate feature set with the smallest gain as a target feature set based on the obtained gains.
13. The apparatus of claim 12, wherein when inputting each current candidate feature set into the recognition model to obtain a recognition result of the corresponding candidate feature set, and obtaining the gain of the corresponding candidate feature set based on each recognition result and the recognition tag, respectively, the training module is further configured to:
equally divide each currently determined candidate feature set into a preset number of feature groups;
for each of the preset feature groups, respectively perform the following operations:
inputting each candidate feature set in one feature group into the recognition model, and performing parameter adjustment on feature model parameters respectively associated with each cross attribute feature contained in the corresponding candidate feature set in the recognition model;
when the parameter adjustment of each feature model parameter is completed, obtaining identification results corresponding to each candidate feature set in the feature group;
and determining gain values corresponding to each candidate feature set in the feature group based on the identification results and the identification tags.
14. The apparatus of claim 13, wherein when performing parameter adjustment on the feature model parameters respectively associated with each cross attribute feature contained in the corresponding candidate feature set in the recognition model, the training module is further configured to:
Determining the cross attribute features reaching the current highest order from all the cross attribute features contained in the corresponding candidate feature sets;
and carrying out parameter adjustment on the characteristic model parameters associated with the cross attribute characteristics reaching the current highest order.
15. The apparatus according to any one of claims 10 to 14, wherein, when the predicted recognition result corresponding to the one object sample is obtained based on the determined target feature set and the original attribute features, the training module is further configured to:
performing dimension reduction processing on each cross attribute feature contained in the determined target feature set to obtain corresponding dimension reduced cross attribute features;
performing feature stitching on the cross attribute features subjected to the dimension reduction and the original attribute features to obtain sample stitching features;
and based on a preset activation function, performing nonlinear conversion on the sample splicing characteristics to obtain a predicted recognition result corresponding to the object sample.
16. An electronic device comprising a processor and a memory, wherein the memory stores program code that, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1-8.
17. A computer readable storage medium, characterized in that it comprises a program code for causing an electronic device to perform the steps of the method according to any of claims 1-8, when said program code is run on the electronic device.
18. A computer program product comprising computer instructions stored in a computer readable storage medium; when the computer instructions are read from the computer-readable storage medium by a processor of an electronic device, the processor executes the computer instructions, causing the electronic device to perform the steps of the method of any one of claims 1-8.