CN113673620A - Method, system, device, medium and program product for model generation

Method, system, device, medium and program product for model generation

Info

Publication number
CN113673620A
Authority
CN
China
Prior art keywords
item
unknown
comparison result
category
vector
Prior art date
Legal status
Pending
Application number
CN202110997585.9A
Other languages
Chinese (zh)
Inventor
王雅楠
权爱荣
马晓楠
张华�
Current Assignee
Industrial and Commercial Bank of China Ltd ICBC
ICBC Technology Co Ltd
Original Assignee
Industrial and Commercial Bank of China Ltd ICBC
ICBC Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Industrial and Commercial Bank of China Ltd ICBC, ICBC Technology Co Ltd
Priority to CN202110997585.9A
Publication of CN113673620A



Classifications

    • G06F18/241 Pattern recognition; classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/044 Computing arrangements based on biological models; neural networks; recurrent networks, e.g. Hopfield networks
    • G06N3/047 Computing arrangements based on biological models; neural networks; probabilistic or stochastic networks
    • G06N3/08 Computing arrangements based on biological models; neural networks; learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a model generation method, which relates to the field of computer technology and can be applied in the field of finance. The method comprises the following steps: collecting the items operated by a user within a time period t; inputting the items into an LSTM deep neural network model, and outputting the embedded vectors corresponding to the items; extracting the embedded vectors, and constructing a prototype network with the embedded vectors; putting an unknown item X into the prototype network, and outputting a comparison result of each item and the unknown item X; and generating a recall list according to the comparison result. The model generation method is suitable for zero-sample and few-sample situations: new samples can be accurately recommended and finely classified, which effectively improves the accuracy of the recommendation results. Because the method is based on the mature LSTM deep neural network model, it is sufficiently flexible, requires no additional fitting parameters, and is simple and effective in recommendation scenarios for corporate (legal-person) users. The present application also provides a model generation system, device, medium, and program product.

Description

Method, system, device, medium and program product for model generation
Technical Field
The present application relates to the field of computer technology, can be applied in the field of finance, and in particular relates to a method, system, device, medium and program product for model generation.
Background
At present, in e-government scenarios, recommendation modules for corporate users are still at an early stage. In the usual case, machine learning modeling is carried out by analyzing the behavior data of a large number of users and screening behavior features, and the recommendation result is output by predicting the user's preferred behavior with a machine learning algorithm.
Disclosure of Invention
The present application is directed to solving at least one of the problems in the prior art.
For example, the application provides a model generation method that addresses the problems of too little training sample data, too few sample labels, and the unsatisfactory performance of traditional modeling approaches. The method can learn and model from a very small number of samples; that is, new samples can be accurately classified without training sample labels, which traditional machine learning models cannot achieve.
In view of the above problems, a first aspect of the present application provides a method of model generation, comprising the steps of:
collecting items operated by a user in a time period t;
inputting the item into an LSTM deep neural network model, and outputting an embedded vector corresponding to the item;
extracting the embedded vector, and constructing a prototype network by using the embedded vector;
putting an unknown item X into the prototype network, and outputting a comparison result of each item and the unknown item X;
and generating a recall list according to the comparison result.
The model generation method is suitable for zero-sample and few-sample situations: new samples can be accurately recommended and finely classified, which effectively improves the accuracy of the recommendation results. Because the method is based on the mature LSTM deep neural network model, it is sufficiently flexible, requires no additional fitting parameters, and is simple and effective in recommendation scenarios for corporate (legal-person) users.
Further, inputting the transaction into the LSTM deep neural network model, and outputting an embedded vector corresponding to the transaction, including:
performing one-hot coding on each item, wherein the item and the one-hot coding form a mapping relation;
inputting the one-hot code into an LSTM deep neural network model;
and obtaining the embedded vector of the one-hot coding corresponding item through the operation of an LSTM deep neural network model.
Further, obtaining an embedded vector of the transaction through an operation of the LSTM deep neural network model, including:
extracting a weight matrix of a hidden layer of the LSTM deep neural network model through embedding_lookup;
and multiplying the one-hot code by the weight matrix to obtain an embedded vector of the item corresponding to the one-hot code.
Further, extracting the embedded vector and constructing a prototype network with the embedded vector, comprising:
selecting m embedded vectors, and randomly dividing the m embedded vectors into K categories;
according to a first formula, a mean vector under each category is calculated.
Further, the first formula is:
$c_k = \frac{1}{|S_k|} \sum_{x_i \in S_k} x_i$

wherein $c_k$ is the mean vector, $|S_k|$ is the number of embedded vectors, and $x_i$ is an embedded vector.
Further, placing an unknown item X into the prototype network, and outputting a comparison result between each of the items and the unknown item X, including:
putting an unknown item X into the prototype network, and comparing the unknown item X with the mean vector of each category to obtain a first comparison result;
obtaining the category of the unknown item X according to the first comparison result;
comparing each item in the category with the unknown item X to obtain a second comparison result,
wherein the second comparison result is output as the comparison result.
Further, placing the unknown item X in the prototype network, and comparing the unknown item X with the mean vector of each category to obtain a first comparison result, including:
calculating the Euclidean distance between the feature vector of the unknown item X and the mean vector of each category;
acquiring a minimum Euclidean distance as a first result, wherein the first result is output as the first comparison result.
Further, obtaining the category to which the unknown item X belongs according to the first comparison result includes:
according to the n categories corresponding to the first result, checking the probability that the unknown item X belongs to the n categories by using a softmax function, and acquiring the maximum probability as a second result;
and the category corresponding to the second result is used as the category of the unknown item X.
Further, comparing each item in the category to the unknown item X to obtain a second comparison result, including:
and comparing each item in the category with the unknown item X by using at least one of Euclidean distance, Mahalanobis distance and cosine function to obtain a result set of comparison between each item in the category and the unknown item X.
Further, generating a recall list according to the comparison result, comprising:
and sorting the items in the result set from high to low according to the similarity, and selecting a plurality of items from high to low according to the sorting to generate a recall list.
Further, the items include: click, submit, search, and comment.
A second aspect of the present application provides a model generation system comprising: the collection module is used for collecting items operated by a user in a time period t; a first input module to: inputting the item into an LSTM deep neural network model, and outputting an embedded vector corresponding to the item; the extraction module is used for extracting the embedded vector and constructing a prototype network by using the embedded vector; a second input module to: putting unknown items X into the prototype network, and outputting a comparison result of each item and the unknown item X; and the generating module is used for generating a recall list according to the comparison result.
A third aspect of the present application provides an electronic device comprising: one or more processors; memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of model generation described above.
The fourth aspect of the present application also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the method of model generation described above.
A fifth aspect of the present application also provides a computer program product comprising a computer program which, when executed by a processor, implements the method of model generation described above.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following description of embodiments of the application with reference to the accompanying drawings in which:
FIG. 1 schematically illustrates an application scenario diagram of methods, systems, devices, media and program products of model generation according to embodiments of the application;
FIG. 2 schematically illustrates an operational flow diagram of a method of model generation according to an embodiment of the present application;
FIG. 3 schematically illustrates a flow chart of steps of a method of model generation according to an embodiment of the present application;
FIG. 4 is a flow chart schematically illustrating the steps of inputting a transaction into an LSTM deep neural network model and obtaining an embedded vector according to an embodiment of the present application;
FIG. 5 is a flow chart schematically illustrating the steps of obtaining an embedded vector according to an embodiment of the present application;
FIG. 6 is a flow chart schematically illustrating the steps of averaging vectors according to an embodiment of the present application;
FIG. 7 schematically illustrates a step flow diagram for comparing each item with an unknown X in an embodiment in accordance with the present application;
FIG. 8 is a flow chart schematically illustrating the steps of comparing an unknown X to each category according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating the determination of a mean vector for each class according to an embodiment of the present application;
FIG. 10 is a diagram schematically illustrating the determination of Euclidean distances between an unknown X and each category according to an embodiment of the present application;
FIG. 11 schematically illustrates a block diagram of a model generation system according to an embodiment of the present application; and
FIG. 12 schematically shows a block diagram of an electronic device adapted to implement a method of model generation according to an embodiment of the present application.
Detailed Description
Hereinafter, embodiments of the present application will be described with reference to the accompanying drawings. It is to be understood that such description is merely illustrative and not intended to limit the scope of the present application. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the application. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present application.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have a alone, B alone, C alone, a and B together, a and C together, B and C together, and/or A, B, C together, etc.).
At present, government affairs applications provide one-stop access: by logging into a single application, a user can handle items in different fields. Users include natural-person users and corporate (legal-person) users, and the items covered number in the thousands. For natural-person users, intelligent recommendation services already cover not only government affairs but also daily-life services, travel, recruitment, examinations and the like, and have achieved a certain effect. In conventional modeling, a large amount of behavior data must be analyzed and features screened for machine learning modeling, and the predicted preferred behavior is output as the recommendation result. However, because corporate users generate far less behavior data than natural-person users, the items they handle differ from those of natural-person users and their item categories tend to be concentrated, which brings great challenges to the recommendation work.
At present, research on recommendations for corporate users in e-government scenarios is still at an early stage. In the feature engineering stage, features are screened by the same methods used for modeling natural-person users, such as experience and data observation, to train a model. However, corporate-user behavior is extremely sparse and training sample data are too few, which brings a great challenge to the modeling work. As a result, item recommendation is neither accurate nor efficient.
The embodiment of the application provides a model generation method that addresses the problems of too little training sample data, very few sample labels, and the unsatisfactory performance of traditional modeling approaches. The method can learn and model from a very small number of samples; that is, new samples can be accurately classified without training sample labels, which traditional machine learning models cannot achieve.
It should be noted that the model generation method of the present application can be applied in the field of intelligent services, and in particular to module recommendation in the financial field and to service-class applications, for example government affairs applications; the items can also be extended to include daily-life services, travel, recruitment examinations and the like, and the present application does not limit the specific items.
Fig. 1 schematically illustrates an application scenario diagram of a corporate person interacting with a server when operating an application according to an embodiment of the present application.
As shown in fig. 1, network 104 is the medium used to provide communication links between terminal devices 101, 102, 103 and server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
Corporate users may use terminal devices 101, 102, 103 to interact with server 105 over network 104 to receive or send messages and the like. The terminal devices 101, 102, 103 may have installed thereon various communication client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only).
The terminal devices 101, 102, 103 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server that provides various services, such as a background management server (for example only) that provides support for applications used by legal users with the terminal devices 101, 102, 103. The background management server may analyze and perform other processing on the received data such as the corporate user request, and feed back a processing result (for example, a webpage, information, or data obtained or generated according to the corporate user request) to the terminal device.
It should be noted that the method for generating the model provided in the embodiment of the present application may be generally executed by the server 105. Accordingly, the model generation system provided in the embodiments of the present application may be generally disposed in the server 105. The method of model generation provided in the embodiments of the present application may also be performed by a server or a cluster of servers different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105. Accordingly, the model generation system provided in the embodiment of the present application may also be disposed in a server or a server cluster different from the server 105 and capable of communicating with the terminal devices 101, 102, 103 and/or the server 105.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
The model generation method of this embodiment will be described in detail below with reference to fig. 2 to 10, based on the scenario described in fig. 1.
Fig. 2 schematically shows a flow chart of a method of model generation according to an embodiment of the application.
As shown in fig. 2, the method of generating a model of this embodiment includes operations S210 to S250, and the method of generating a model may be performed by the model generation system in fig. 11.
Since the recommendation method is designed for few-sample conditions, an application program is taken as the example in what follows, and the situation of a corporate user using such an application conforms to the description given in the background art; in the specific description below, the user can therefore be understood as a corporate user. Of course, the method is not limited to a specific application program or a specific object: any situation in which samples are scarce, modeling is difficult in the traditional way, and the model generation method is applied falls within the protection scope of the present application.
In operation S210, events operated by the user during the time period t are collected.
Collect the items operated by the user while using the application within a time period t. For example: sort the items the user operates each time the application is logged into, and count all the items the user has operated in the application within 6 months.
In the present application, items refer to the user's behavior items, specifically the clicking, submitting, searching and commenting behaviors performed by the user in the application.
In operation S220, the items are input into the LSTM deep neural network model, and the embedded vectors corresponding to the items are output.
An LSTM (Long Short-Term Memory) network is a special kind of RNN (Recurrent Neural Network), designed mainly to solve the problems of vanishing and exploding gradients when training on long sequences. Simply put, an LSTM performs better than a plain RNN on longer sequences. The LSTM controls the transmission of state through gating, remembering what needs long-term memory and forgetting unimportant information, whereas an RNN can only accumulate memory by superposition; the LSTM is therefore better suited to the many tasks that require long-term memory.
The key to the LSTM is the cell state $C_t$, which stores the state information of the current LSTM cell and passes it on to the LSTM at the next time step. The current LSTM receives the cell state $C_{t-1}$ from the previous time step, which acts together with the current input signal $x_t$ to produce the cell state $C_t$ of the current LSTM.
The LSTM mainly comprises three different gate structures: the forgetting gate, the memory gate and the output gate. These three gates control how the LSTM retains and transmits information, which is ultimately reflected in the cell state $C_t$ and the output signal $h_t$. The forgetting gate consists of a sigmoid neural network layer and a bitwise (element-wise) multiplication; the memory gate consists of an input gate, a tanh neural network layer and a bitwise multiplication; and the output gate, together with the tanh function and a bitwise multiplication, delivers the cell state and the input signal to the output.
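For reference, a standard textbook formulation of these three gates (the original text does not reproduce the equations, so the notation below follows the common LSTM literature rather than the patent's own symbols) is:

$f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f)$
$i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i), \quad \tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C)$
$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t$
$o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o), \quad h_t = o_t \odot \tanh(C_t)$

Here $f_t$, $i_t$ and $o_t$ are the forgetting, memory (input) and output gates, $\sigma$ is the sigmoid function, and $\odot$ denotes the bitwise (element-wise) multiplication mentioned above.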
The items are input into the LSTM deep neural network model, i.e. each item is fed in as $x_t$, and through the operation of the LSTM deep neural network model the embedded vector corresponding to the item is finally output.
It can be understood that each item corresponds to a unique embedding vector; the embedding vector and the item form a mapping relation, so the embedding vector can serve as a representative symbol of the item. After the operation, the output is the embedding vector, and the item can be deduced in reverse from the embedding vector.
In operation S230, the embedded vector is extracted, and a prototype network is constructed with the embedded vector.
Extract the embedded vectors from the LSTM deep neural network model, randomly classify them and take the mean of each class; the mean of each category serves as the mapping of its data samples in the vector space, forming a prototype network.
In operation S240, the unknown item X is put into the prototype network, and a comparison result between each item and the unknown item X is output.
The unknown item X is an operation of a user whose intended action is not yet known; an item needs to be recommended to this user, which requires first determining which category the item belongs to and then confirming the specific item within that category.
After the prototype network is formed, the unknown item X is placed into the prototype network, the category in which the unknown item X is most likely to lie is found through comparison calculations, and then the several most likely results from that category are compared again; these several results can serve as the comparison result.
In operation S250, a recall list is generated according to the comparison result.
The several results are arranged to generate a recall list, and the recall list is pushed downstream from the server as the recommended items for the unknown item X.
An AB-Test comparison between the recommendation results of a model built with traditional basic features and those of the few-sample model generated in the present application shows that the accuracy of the model generated by this method is improved by nearly 26%; the scheme of the present application is therefore superior to the traditional basic-feature modeling approach, and the relevant evaluation indexes of the recommended items are effectively improved.
The procedure of the model generation method will be described in detail below, as shown in fig. 3 to 10.
Fig. 3 schematically shows a flow chart of a method of model generation according to an embodiment of the application, the flow comprising steps S310-S360.
According to an embodiment of the application, a method of model generation, comprises the steps of:
in step S310, events operated by the user in the time period t are collected, and the events include: click, submit, search, comment.
Using historical data in the application, collect the behavior items of each user who logs into the application within the time period t; the user's operation behaviors in the application may include clicking, submitting, searching, commenting and so on.
After all the items are collected, step S320 is executed.
In step S320, the items are input into the LSTM deep neural network model, and the embedded vectors corresponding to the items are output.
The embedded vector is an embedding vector. Embedding can raise the dimension of low-dimensional data, amplify certain features, and convert a token into a vector representation of fixed length, which is convenient for mathematical processing. In the present application, embedding converts an item into a vector representation of fixed length. Assuming there are four items "A", "B", "C" and "D", the operation of the LSTM deep neural network yields a vector corresponding to item A, a vector corresponding to item B, a vector corresponding to item C, and a vector corresponding to item D.
All items are converted into embedded vectors to facilitate calculation and comparison. According to the principle of operation, this can be divided into steps S321 to S323.
In step S321, one-hot encoding is performed on each item, where the item and the one-hot encoding form a mapping relationship.
The items are processed with a mathematical model so that all items are converted into a mathematical form of expression. One-hot coding is adopted, i.e. a one-hot matrix, which is a matrix in which each row contains a single element equal to 1 and all other elements equal to 0. A unique one-hot row is allocated to each item, each item is one-hot coded, and the item and its one-hot code form a mapping relation.
That is, when an item sequence is expressed mathematically, each item in the sequence is converted into its code, and the item corresponds to the one-hot row whose 1 sits at the item's position.
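A minimal sketch of this item-to-one-hot mapping is given below; the item names and vocabulary size are illustrative, not taken from the original text.

```python
import numpy as np

# Hypothetical vocabulary of items collected in the time period t.
items = ["click_A", "submit_B", "search_C", "comment_D"]
item_to_index = {item: idx for idx, item in enumerate(items)}

def one_hot(item: str) -> np.ndarray:
    """Return a [1, vocab_size] row whose only 1 sits at the item's index."""
    vec = np.zeros((1, len(items)))
    vec[0, item_to_index[item]] = 1.0
    return vec

print(one_hot("search_C"))  # [[0. 0. 1. 0.]]
```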
After each event is encoded, step S322 is executed.
In step S322, one-hot codes are input into the LSTM deep neural network model.
After putting into the LSTM deep neural network model, step S323 is performed.
In step S323, an embedded vector of the one-hot code correspondence item is obtained through the calculation of the LSTM deep neural network model.
The input one-hot code is multiplied by the W matrix to obtain the embedded vector of the item corresponding to the one-hot code. The W matrix can be understood as a weight matrix, and the embedding vector of the item is extracted from the hidden layer through embedding_lookup, as shown in fig. 5.
In step S3231, the weight matrix of the hidden layer of the LSTM deep neural network model is extracted by embedding_lookup, i.e. the W matrix is extracted.
In step S3232, the one-hot code is multiplied by the weight matrix to obtain the embedded vector of the item corresponding to the one-hot code.
embedding_lookup can be regarded as a fully connected layer whose weight matrix W has the shape [feature_size, embed_size], where feature_size is n and embed_size is m. In embedding_lookup(W, item), the item's one-hot vector has the shape [1, feature_size]; only the single position representing the item is 1 and all other positions are 0. Multiplying this vector by the W matrix yields a vector of shape [1, embed_size], which is exactly the row of m values in W located at the position where the digit 1 sits.
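A minimal numpy sketch of this equivalence is shown below, assuming an already-trained weight matrix W of shape [feature_size, embed_size]; the sizes and values are illustrative.

```python
import numpy as np

feature_size, embed_size = 4, 3                    # n items, m-dimensional embeddings
rng = np.random.default_rng(0)
W = rng.normal(size=(feature_size, embed_size))    # hidden-layer weight matrix

item_index = 2                                     # column where the one-hot digit 1 sits
one_hot_vec = np.zeros((1, feature_size))
one_hot_vec[0, item_index] = 1.0

# Multiplying the [1, feature_size] one-hot row by W gives a [1, embed_size] vector,
# which is exactly row `item_index` of W, i.e. what an embedding lookup returns.
embedded = one_hot_vec @ W
assert np.allclose(embedded[0], W[item_index])
```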
In step S330, the embedded vector is extracted, and a prototype network is constructed with the embedded vector.
The matrix of the multiplied outputs is an embedded vector with which a prototype network can be constructed, as shown in fig. 6.
In step S331, m embedded vectors are selected and randomly classified into K categories.
Randomly selecting m embedded vectors from all the generated embedded vectors, and randomly dividing the m embedded vectors into K categories.
For example: embedded vectors for A, B, C, D and E are generated, i.e. 5 embedded vectors corresponding to 5 items; m takes the value 4 and K takes the value 2. Four items are randomly selected from the 5 items as the basis for constructing the prototype network, namely the 4 embedded vectors of A, B, C and D, and A, B, C, D are randomly divided into 2 categories. There are 7 possible groupings into 2 categories, namely A and BCD, B and ACD, C and ABD, D and ABC, AB and CD, AC and BD, and AD and BC.
After the classification is completed, step S332 is performed.
In step S332, a mean vector under each category is calculated according to a first formula. The first formula is:
$c_k = \frac{1}{|S_k|} \sum_{x_i \in S_k} x_i$

wherein $c_k$ is the mean vector, $|S_k|$ is the number of embedded vectors, and $x_i$ is an embedded vector.
There are K categories in total; the mean vector of each of the K categories is calculated and generated in the prototype network, and the mean vector calculated for each category is taken as the sample value of the prototype network. Here $c_k$ is the mean vector of the embedded vectors, $|S_k|$ is the number of samples within the category, and $x_i$ is an embedded vector. The first formula adds all the embedded vectors within one of the categories and divides the sum by the number of embedded vectors added, to obtain the mean vector.
For example: taking one of the groupings, the two categories AB and CD, the mean vector $c_1$ of AB and the mean vector $c_2$ of CD are obtained from the first formula, where $c_1$ is obtained by adding the embedded vector of A and the embedded vector of B and dividing the sum by 2, since there are two samples within AB.
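A minimal sketch of the first formula for the AB / CD grouping above; the embedded-vector values are illustrative.

```python
import numpy as np

# Illustrative embedded vectors for the items A, B, C, D.
embeddings = {
    "A": np.array([1.0, 0.0]), "B": np.array([0.0, 1.0]),
    "C": np.array([3.0, 3.0]), "D": np.array([5.0, 1.0]),
}
categories = {1: ["A", "B"], 2: ["C", "D"]}        # K = 2 randomly chosen categories

# c_k = (1 / |S_k|) * sum of the embedded vectors in category k.
prototypes = {
    k: np.mean([embeddings[item] for item in members], axis=0)
    for k, members in categories.items()
}
print(prototypes[1])  # [0.5 0.5], the mean vector c_1 of category AB
```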
As shown in fig. 9, the obtained mean vector is generated in the prototype network.
In step S340, the unknown item X is put into the prototype network, and a comparison result between each item and the unknown item X is output.
The unknown item X is an item that the user wants to operate; it can be understood as the item that the application model most likely should recommend to the user. The unknown item X is placed into the prototype network and compared with each item, and the comparison result is output; from the comparison result it can be accurately determined which specific item the unknown item X is.
As shown in fig. 7, the result of comparing each item with the unknown item X is realized through steps S341 to S343.
In step S341, the unknown item X is placed in the prototype network and compared with the mean vector of each category to obtain a first comparison result.
When the feature vector of the unknown item X is compared with the mean vector of each category, a plurality of measures may be adopted, such as the cosine function, the Mahalanobis distance, the Euclidean distance, etc., to obtain the first comparison result; from the first comparison result, the category to which the unknown item X belongs can be roughly determined.
In the present application, the Euclidean distance is used to compare the feature vector of the unknown item X with the mean vector of each category, and the first comparison result is calculated from the Euclidean distance, as shown in fig. 8.
In step S3411, the euclidean distance between the feature vector of the unknown item X and the mean vector of each category is calculated.
As shown in fig. 10, the dotted line is the straight-line distance from the unknown X to the mean vector of each category, i.e., the euclidean distance, and each calculation using the euclidean distance yields a result.
In step S3412, the minimum euclidean distance is acquired as a first result, which is output as a first comparison result.
The magnitudes of the results are compared, and the smallest value is taken as the first result, i.e. the distance from the unknown item X to the closest category; the first result is output as the first comparison result.
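A minimal sketch of steps S3411 and S3412; the prototype values and the feature vector of X are illustrative.

```python
import numpy as np

# Category mean vectors (prototypes), e.g. as computed in the previous sketch.
prototypes = {1: np.array([0.5, 0.5]), 2: np.array([4.0, 2.0])}
x = np.array([0.6, 0.4])                           # feature vector of the unknown item X

# S3411: Euclidean distance from X to the mean vector of every category.
distances = {k: float(np.linalg.norm(x - c_k)) for k, c_k in prototypes.items()}

# S3412: the minimum Euclidean distance is the first result / first comparison result.
first_result = min(distances.values())
candidates = [k for k, d in distances.items() if np.isclose(d, first_result)]
print(first_result, candidates)                    # ~0.141, [1]
```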
In step S342, the category to which the unknown X belongs is obtained from the first comparison result.
The first comparison result is the smallest value among all the results, i.e. it identifies the category closest to the unknown item X, which is taken as the category to which the unknown item X belongs.
More specifically, step S342 includes:
according to the n categories corresponding to the first result, checking the probability that the unknown item X belongs to the n categories by using a softmax function, and acquiring the maximum probability as a second result; wherein, the category corresponding to the second result is taken as the category of the unknown item X.
In the first comparison result obtained from the Euclidean distances, n values may simultaneously be the minimum, i.e. the unknown item X is equally close to n categories. In this case it is necessary to further determine which category the unknown item X belongs to: the probability distribution of the unknown item X over these categories is obtained through the softmax function, the probability that the unknown item X belongs to each of the n categories is checked by the softmax function, and the probability value represents how often the unknown item X hits each of the n categories; from this, the specific category to which the unknown item X belongs can be judged.
The softmax function is formulated as:

$\operatorname{softmax}(x)_k = \frac{e^{x_k}}{\sum_{j} e^{x_j}}$

In the softmax formula, $k$ indexes the categories and $x$ is the input, which can be understood as certain characteristic indicators of the item under one of the $k$ categories.
The meaning of each term in the solution method and formula of the softmax function is common knowledge in the art and is not explained herein.
The results together sum to 1, and the category with the highest hit rate is taken. Fig. 10 shows three categories, $c_1$, $c_2$ and $c_3$; the whole plane is divided by $c_1$, $c_2$ and $c_3$ into three regions, and after the probability calculation the region into which the unknown item X falls is the category it belongs to. As in fig. 10, the unknown item X falls within $c_2$, i.e. the probability that it belongs to $c_2$ is the greatest.
The category of the unknown item X is screened a first time using the Euclidean distance: if there is a unique minimum distance, it is directly output as the first comparison result and the unknown item X belongs to the category at that minimum distance; if there are multiple minimum distances, the category of the unknown item X is screened a second time among those candidates using the softmax function, and the category with the highest probability is the one to which the unknown item X belongs.
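A minimal sketch of this two-stage screening; the distance values are illustrative, and feeding the softmax with the negative distances of the tied categories is an assumption (a common choice for prototype networks), since the original text does not spell out the exact softmax input.

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    e = np.exp(z - z.max())          # subtract the max for numerical stability
    return e / e.sum()

# Euclidean distances from the unknown item X to each category (two are tied here).
distances = {1: 1.2, 2: 0.7, 3: 0.7}
d_min = min(distances.values())
tied = [k for k, d in distances.items() if np.isclose(d, d_min)]

if len(tied) == 1:
    category = tied[0]               # unique minimum distance: output it directly
else:
    # Second screening: softmax probabilities over the tied candidate categories.
    probs = softmax(np.array([-distances[k] for k in tied]))
    category = tied[int(np.argmax(probs))]
print(category)
```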
After the category to which the unknown item X belongs is found, all items in that category are obtained and compared again, and the item most similar to the unknown item X is obtained.
In step S343, each item in the category to which the item belongs is compared with the unknown item X to obtain a second comparison result, wherein the second comparison result is output as a comparison result.
Specifically, each item in the category to which the item belongs may be compared with the unknown item X by using at least one of the euclidean distance, the mahalanobis distance, and the cosine function, so as to obtain a result set of comparison between each item in the category and the unknown item X.
In step S350, a recall list is generated according to the comparison result.
And sorting the items in the result set from high to low according to the similarity, and selecting a plurality of items from high to low according to the sorting to generate a recall list.
After a result set is obtained using at least one of the Euclidean distance, the Mahalanobis distance and the cosine function, the similarity values between the unknown item X and all items in the category to which it belongs are calculated and sorted from large to small, i.e. the most similar item is placed first and the remaining items follow in order of decreasing similarity; several items are then selected from large to small to generate the recall list.
For example: there are 5 items A, B, C, D, E in the category to which X belongs; the similarity values between the unknown item X and the 5 items are calculated respectively and arranged as E, A, C, D, B according to similarity, i.e. within this category the unknown item X is most similar to item E and differs most from item B. Only 3 of the 5 items are taken for the recall list, i.e. E, A, C is taken as the recall list.
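A minimal sketch of the second comparison (step S343) and the recall-list generation (step S350) using cosine similarity, one of the permitted measures; the item embeddings and the list length are illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Embedded vectors of the items inside the category X belongs to (illustrative values).
category_items = {
    "A": np.array([0.9, 0.1]), "B": np.array([0.1, 0.9]), "C": np.array([0.7, 0.3]),
    "D": np.array([0.4, 0.6]), "E": np.array([0.95, 0.05]),
}
x = np.array([1.0, 0.0])                           # feature vector of the unknown item X

# Second comparison result: similarity of every item in the category to X.
result_set = {name: cosine_similarity(vec, x) for name, vec in category_items.items()}

# Sort from high to low similarity and keep the top-k items as the recall list.
top_k = 3
recall_list = [name for name, _ in
               sorted(result_set.items(), key=lambda kv: kv[1], reverse=True)[:top_k]]
print(recall_list)                                 # ['E', 'A', 'C']
```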
After the recall list is generated, step S360 is performed.
In step S360, the recall list is input to the ranking layer as a recommendation result, and is presented in a sequential form.
For example, the recall list E, A, C is input as the result into the ranking layer, and at recommendation time the recommendations are generated so as to appear in turn in the order E, A, C.
The model generation method is suitable for zero-sample and few-sample situations: new samples can be accurately recommended and finely classified, which effectively improves the accuracy of the recommendation results. Because the method is based on the mature LSTM deep neural network model, it is sufficiently flexible, requires no additional fitting parameters, and is simple and effective in recommendation scenarios for corporate users.
Based on the model generation method, the application also provides a model generation system. The apparatus will be described in detail below with reference to fig. 11.
Fig. 11 schematically shows a block diagram of a model generation system according to an embodiment of the present application.
As shown in fig. 11, the model generation system 400 of this embodiment includes: a collection module 410, a first input module 420, an extraction module 430, a second input module 440, and a generation module 450.
The collecting module 410 is used for collecting the events operated by the user in the time period t. In an embodiment, the collecting module 410 may be configured to perform the operation S210 described above, and collect the items operated by the user in the time period t, which is not described herein again.
The first input module 420 is configured to: the method includes inputting a transaction into an LSTM deep neural network model, and outputting an embedded vector corresponding to the transaction. In an embodiment, the first input module 420 may be configured to perform the operation S220 described above, input the transaction into the LSTM deep neural network model, and output the embedded vector corresponding to the transaction, which is not described herein again.
The extraction module 430 is used to extract the embedded vectors and construct prototype networks with the embedded vectors. In an embodiment, the extracting module 430 may be configured to perform the operation S230 described above, extract the embedded vector, and construct a prototype network with the embedded vector, which is not described herein again.
The second input module 440 is configured to: and putting the unknown items X into the prototype network, and outputting a comparison result of each item and the unknown items X. In an embodiment, the second input module 440 may be configured to perform the operation S240 described above, put the unknown item X into the prototype network, and output a comparison result between each item and the unknown item X, which is not described herein again.
The generating module 450 is configured to generate a recall list according to the comparison result. In an embodiment, the generating module 450 may be configured to perform the operation S250 described above, and generate the recall list according to the comparison result, which is not described herein again.
The model generation system according to the embodiment of the application can execute the model generation method of operations S210 to S250. It is suitable for zero-sample and few-sample situations: new samples can be accurately recommended and finely classified, which effectively improves the accuracy of the recommendation results. Because the system is based on the mature LSTM deep neural network model, it is sufficiently flexible, requires no additional fitting parameters, and is simple and effective in recommendation scenarios for corporate users.
According to an embodiment of the present application, any plurality of the collection module 410, the first input module 420, the extraction module 430, the second input module 440, and the generation module 450 may be combined in one module to be implemented, or any one of the modules may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present application, at least one of the collection module 410, the first input module 420, the extraction module 430, the second input module 440, and the generation module 450 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of three implementations of software, hardware, and firmware, or in any suitable combination of any of them. Alternatively, at least one of the collection module 410, the first input module 420, the extraction module 430, the second input module 440 and the generation module 450 may be at least partially implemented as a computer program module, which when executed may perform a corresponding function.
FIG. 12 schematically shows a block diagram of an electronic device adapted to implement a method of model generation according to an embodiment of the present application.
As shown in fig. 12, an electronic device 500 according to an embodiment of the present application includes a processor 501 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM)502 or a program loaded from a storage section 508 into a Random Access Memory (RAM) 503. The processor 501 may comprise, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 501 may also include onboard memory for caching purposes. The processor 501 may comprise a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present application.
In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are stored. The processor 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. The processor 501 performs various operations of the method flows according to the embodiments of the present application by executing programs in the ROM 502 and/or the RAM 503. Note that the programs may also be stored in one or more memories other than the ROM 502 and the RAM 503. The processor 501 may also perform various operations of method flows according to embodiments of the present application by executing programs stored in the one or more memories.
According to an embodiment of the present application, the electronic device 500 may further include an input/output (I/O) interface 505, the input/output (I/O) interface 505 also being connected to the bus 504. The electronic device 500 may also include one or more of the following components connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage portion 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card, a modem, or the like. The communication section 509 performs communication processing via a network such as the internet. The driver 510 is also connected to the I/O interface 505 as necessary. A removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 510 as necessary, so that a computer program read out therefrom is mounted into the storage section 508 as necessary.
The present application also provides a computer-readable storage medium, which may be embodied in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the present application.
According to embodiments of the present application, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present application, a computer-readable storage medium may include ROM 502 and/or RAM 503 and/or one or more memories other than ROM 502 and RAM 503 described above.
Embodiments of the present application also include a computer program product comprising a computer program containing program code for performing the method illustrated in the flow chart. When the computer program product runs in a computer system, the program code is used for causing the computer system to realize the item recommendation method provided in the embodiment of the present application.
The computer program performs the above-described functions defined in the system/apparatus of the embodiments of the present application when executed by the processor 501. According to embodiments of the present application, the above-described systems, apparatuses, modules, units, etc. may be implemented by computer program modules.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device, a magnetic storage device, or the like. In another embodiment, the computer program may also be transmitted, distributed in the form of a signal on a network medium, downloaded and installed through the communication section 509, and/or installed from the removable medium 511. The computer program containing program code may be transmitted using any suitable network medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. The computer program performs the above-described functions defined in the system of the embodiment of the present application when executed by the processor 501. According to embodiments of the present application, the above-described systems, devices, apparatuses, modules, units, etc. may be implemented by computer program modules.
According to embodiments of the present application, program code for executing computer programs provided in embodiments of the present application may be written in any combination of one or more programming languages, and in particular, these computer programs may be implemented using high level procedural and/or object oriented programming languages, and/or assembly/machine languages. The programming language includes, but is not limited to, programming languages such as Java, C++, Python, the "C" language, or the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be appreciated by a person skilled in the art that various combinations and/or combinations of features described in the various embodiments and/or claims of the present application are possible, even if such combinations or combinations are not explicitly described in the present application. In particular, various combinations and/or combinations of the features recited in the various embodiments and/or claims of the present application may be made without departing from the spirit and teachings of the present application. All such combinations and/or associations are intended to fall within the scope of this application.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The embodiments of the present application are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present application. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination. The scope of the application is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present application, and such alternatives and modifications are intended to be within the scope of the present application.

Claims (15)

1. A method of model generation, comprising the steps of:
collecting items operated on by a user within a time period t;
inputting the item into an LSTM deep neural network model, and outputting an embedded vector corresponding to the item;
extracting the embedded vector, and constructing a prototype network by using the embedded vector;
putting an unknown item X into the prototype network, and outputting a comparison result of each item with the unknown item X;
and generating a recall list according to the comparison result.
2. The method of claim 1, wherein inputting the item into an LSTM deep neural network model and outputting an embedded vector corresponding to the item comprises:
performing one-hot encoding on each item, wherein the item and its one-hot code form a mapping relationship;
inputting the one-hot code into the LSTM deep neural network model;
and obtaining, through operation of the LSTM deep neural network model, the embedded vector of the item corresponding to the one-hot code.
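For illustration only (not part of the claim language), a minimal sketch of the one-hot mapping in claim 2 is given below; the item names and vocabulary are hypothetical, and a real implementation would derive them from the items collected over the time period t.

import numpy as np

# Hypothetical vocabulary of items collected over the time period t.
items = ["click_home_page", "search_fund_product", "submit_transfer_order"]
item_to_id = {item: idx for idx, item in enumerate(items)}  # item <-> one-hot mapping

def one_hot(item, vocab_size):
    # The one-hot code has a single 1 at the index mapped to the item.
    vec = np.zeros(vocab_size, dtype=np.float32)
    vec[item_to_id[item]] = 1.0
    return vec

x = one_hot("search_fund_product", len(items))  # array([0., 1., 0.], dtype=float32)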
3. The method of claim 2, wherein obtaining the embedded vector of the item through operation of the LSTM deep neural network model comprises:
extracting a weight matrix of a hidden layer of the LSTM deep neural network model through embedding_lookup;
and multiplying the one-hot code by the weight matrix to obtain an embedded vector of the item corresponding to the one-hot code.
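As a sketch only (the claim does not fix a framework), the multiplication in claim 3 can be checked with NumPy: multiplying a one-hot code by the hidden-layer weight matrix selects one row of that matrix, which is exactly what an embedding lookup (e.g. TensorFlow's tf.nn.embedding_lookup) returns. The weight matrix below is randomly generated for illustration; in the claimed method it would be extracted from the trained LSTM model.

import numpy as np

vocab_size, embed_dim = 3, 4
rng = np.random.default_rng(0)
W = rng.normal(size=(vocab_size, embed_dim)).astype(np.float32)  # stand-in for the hidden-layer weight matrix

one_hot_code = np.array([0.0, 1.0, 0.0], dtype=np.float32)       # one-hot code of the item with id 1

embedded_by_matmul = one_hot_code @ W   # one-hot code multiplied by the weight matrix
embedded_by_lookup = W[1]               # direct row lookup

assert np.allclose(embedded_by_matmul, embedded_by_lookup)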
4. The method of claim 1, wherein extracting the embedded vectors and constructing a prototype network with the embedded vectors comprises:
selecting m embedded vectors, and randomly dividing the m embedded vectors into K categories;
and calculating, according to a first formula, a mean vector for each category.
5. The method of claim 4, wherein the first formula is:
$$c_k = \frac{1}{\lvert S_k \rvert} \sum_{x_i \in S_k} x_i$$
wherein $c_k$ is the mean vector of category $k$, $\lvert S_k \rvert$ is the number of embedded vectors in category $k$, and $x_i$ is an embedded vector in category $k$.
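A minimal sketch of claims 4 and 5, assuming the m embedded vectors are already available as a NumPy array; the random split, the sizes, and the variable names are illustrative only.

import numpy as np

rng = np.random.default_rng(0)
m, embed_dim, K = 12, 4, 3
embeddings = rng.normal(size=(m, embed_dim))        # m embedded vectors (illustrative values)

# Randomly divide the m embedded vectors into K categories (claim 4),
# keeping every category non-empty.
labels = rng.permutation(np.arange(m) % K)

# First formula (claim 5): the prototype c_k is the mean of the embedded vectors in S_k.
prototypes = np.stack([embeddings[labels == k].mean(axis=0) for k in range(K)])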
6. The method of claim 4, wherein putting an unknown item X into the prototype network and outputting a comparison result of each item with the unknown item X comprises:
putting the unknown item X into the prototype network, and comparing the unknown item X with the mean vector of each category to obtain a first comparison result;
obtaining the category of the unknown item X according to the first comparison result;
comparing each item in the category with the unknown item X to obtain a second comparison result,
wherein the second comparison result is output as the comparison result.
7. The method of claim 6, wherein putting the unknown item X into the prototype network, and comparing the unknown item X with the mean vector of each category to obtain a first comparison result comprises:
calculating the Euclidean distance between the feature vector of the unknown item X and the mean vector of each category;
acquiring a minimum Euclidean distance as a first result, wherein the first result is output as the first comparison result.
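A sketch of the first comparison in claim 7, assuming the prototypes array from the previous sketch; the function name is illustrative and not taken from the application.

import numpy as np

def first_comparison(x, prototypes):
    # Euclidean distance from the feature vector of the unknown item X
    # to the mean vector of each category.
    distances = np.linalg.norm(prototypes - x, axis=1)
    first_result = distances.min()   # minimum Euclidean distance (the first result)
    return distances, first_result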
8. The method according to claim 7, wherein obtaining the category of the unknown item X according to the first comparison result comprises:
checking, by using a softmax function, the probability that the unknown item X belongs to each of the n categories corresponding to the first result, and acquiring the maximum probability as a second result;
and taking the category corresponding to the second result as the category of the unknown item X.
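Claim 8's softmax check could look like the following sketch: the distances to the n category prototypes are turned into probabilities (smaller distance, larger probability), and the most probable category is taken as the category of the unknown item X. This is one plausible reading of the claim, not code from the application.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())        # subtract the max for numerical stability
    return e / e.sum()

def classify_unknown(distances):
    probs = softmax(-distances)    # negate so that a smaller distance gives a larger probability
    second_result = probs.max()    # maximum probability (the second result)
    category = int(probs.argmax()) # category of the unknown item X
    return category, float(second_result)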
9. The method of claim 6, wherein comparing each item within the category to the unknown item X to obtain a second comparison result comprises:
and comparing each item in the category with the unknown item X by using at least one of a Euclidean distance, a Mahalanobis distance, and a cosine similarity, to obtain a result set of comparisons between each item in the category and the unknown item X.
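For claim 9, any of the listed measures can be used; the sketch below uses cosine similarity, with Euclidean or Mahalanobis distance as drop-in alternatives (the latter would additionally require the inverse covariance matrix of the category).

import numpy as np

def second_comparison(x, category_items):
    # Cosine similarity between the unknown item X and every item in its category;
    # the result set holds one similarity score per item.
    norms = np.linalg.norm(category_items, axis=1) * np.linalg.norm(x)
    return (category_items @ x) / norms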
10. The method of claim 9, wherein generating a recall list based on the comparison comprises:
and sorting the items in the result set from high to low according to similarity, and selecting a plurality of items in that order to generate the recall list.
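A sketch of claim 10, assuming the result set is one similarity score per item; the top_n cut-off is an illustrative parameter, not one fixed by the application.

import numpy as np

def build_recall_list(item_names, similarities, top_n=5):
    # Sort from high to low similarity and keep the top items as the recall list.
    order = np.argsort(similarities)[::-1]
    return [item_names[i] for i in order[:top_n]]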
11. The method of any one of claims 1-10, wherein the item comprises: a click, a submission, a search, or a comment.
12. A model generation system, comprising:
a collection module configured to collect items operated on by a user within a time period t;
a first input module configured to input the items into an LSTM deep neural network model and output embedded vectors corresponding to the items;
an extraction module configured to extract the embedded vectors and construct a prototype network with the embedded vectors;
a second input module configured to put an unknown item X into the prototype network and output a comparison result of each item with the unknown item X;
and a generating module configured to generate a recall list according to the comparison result.
13. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-11.
14. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 11.
15. A computer program product comprising a computer program which, when executed by a processor, implements a method according to any one of claims 1 to 11.
CN202110997585.9A 2021-08-27 2021-08-27 Method, system, device, medium and program product for model generation Pending CN113673620A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110997585.9A CN113673620A (en) 2021-08-27 2021-08-27 Method, system, device, medium and program product for model generation

Publications (1)

Publication Number Publication Date
CN113673620A true CN113673620A (en) 2021-11-19

Family

ID=78547173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110997585.9A Pending CN113673620A (en) 2021-08-27 2021-08-27 Method, system, device, medium and program product for model generation

Country Status (1)

Country Link
CN (1) CN113673620A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108364028A (en) * 2018-03-06 2018-08-03 中国科学院信息工程研究所 A kind of internet site automatic classification method based on deep learning
EP3564889A1 (en) * 2018-05-04 2019-11-06 The Boston Consulting Group, Inc. Systems and methods for learning and predicting events
CN110689164A (en) * 2019-08-26 2020-01-14 阿里巴巴集团控股有限公司 Prediction method and system for user reduction behavior
CN110738370A (en) * 2019-10-15 2020-01-31 南京航空航天大学 novel moving object destination prediction algorithm
US20200327445A1 (en) * 2019-04-09 2020-10-15 International Business Machines Corporation Hybrid model for short text classification with imbalanced data
CN111931052A (en) * 2020-08-10 2020-11-13 齐鲁工业大学 Context perception recommendation method and system based on feature interaction graph neural network
CN113032534A (en) * 2019-12-24 2021-06-25 中国移动通信集团四川有限公司 Dialog text classification method and electronic equipment
CN113255908A (en) * 2021-05-27 2021-08-13 支付宝(杭州)信息技术有限公司 Method, neural network model and device for service prediction based on event sequence

Similar Documents

Publication Publication Date Title
US11417131B2 (en) Techniques for sentiment analysis of data using a convolutional neural network and a co-occurrence network
Das et al. Real-time sentiment analysis of twitter streaming data for stock prediction
JP7206288B2 (en) Music recommendation method, apparatus, computing equipment and medium
US11250342B2 (en) Systems and methods for secondary knowledge utilization in machine learning
US20230102337A1 (en) Method and apparatus for training recommendation model, computer device, and storage medium
Ignatov et al. Can triconcepts become triclusters?
US11645500B2 (en) Method and system for enhancing training data and improving performance for neural network models
CN107729473B (en) Article recommendation method and device
CN110264270A (en) A kind of behavior prediction method, apparatus, equipment and storage medium
CN112509690A (en) Method, apparatus, device and storage medium for controlling quality
CN113051480A (en) Resource pushing method and device, electronic equipment and storage medium
CN111221881B (en) User characteristic data synthesis method and device and electronic equipment
CN114175018A (en) New word classification technique
CN110543996A (en) job salary assessment method, apparatus, server and storage medium
CN114090601B (en) Data screening method, device, equipment and storage medium
CN110826327A (en) Emotion analysis method and device, computer readable medium and electronic equipment
US20230359825A1 (en) Knowledge graph entities from text
Amirian et al. Data science and analytics
Kumar et al. Hybrid evolutionary techniques in feed forward neural network with distributed error for classification of handwritten Hindi ‘SWARS’
CN113673620A (en) Method, system, device, medium and program product for model generation
Kandanaarachchi et al. Leave-one-out kernel density estimates for outlier detection
Kumbhar et al. Web mining: A Synergic approach resorting to classifications and clustering
Mannseth et al. On the application of improved symplectic integrators in Hamiltonian Monte Carlo
CN114358024A (en) Log analysis method, apparatus, device, medium, and program product
Mohindru et al. Mining challenges in large-scale IoT data framework–a machine learning perspective

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination