CN116524599A - Analysis system and method for identifying user behavior based on AI - Google Patents
Analysis system and method for identifying user behavior based on AI
- Publication number
- CN116524599A CN116524599A CN202310580846.6A CN202310580846A CN116524599A CN 116524599 A CN116524599 A CN 116524599A CN 202310580846 A CN202310580846 A CN 202310580846A CN 116524599 A CN116524599 A CN 116524599A
- Authority
- CN
- China
- Prior art keywords
- layer
- training
- convolution layer
- output
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention relates to the technical field of AI (artificial intelligence) and discloses an analysis method for identifying user behavior based on AI, which comprises the following steps: collecting ordinary images and infrared images corresponding to the behavior of financial website personnel, and obtaining an ordinary image set and an infrared image set ordered by time; constructing a neural network model, wherein the neural network model comprises a first convolution layer, a second convolution layer, a first hidden layer and an output layer, the first convolution layer takes the ordinary image as input, and the second convolution layer takes the infrared image as input; training the neural network model; and inputting the ordinary image and the infrared image of the behavior to be identified into the neural network model to predict whether the behavior to be identified is abnormal. The neural network model is obtained by deep-learning training and can accurately judge abnormal operation behavior of financial website personnel.
Description
Technical Field
The invention relates to the technical field of AI, and in particular to an analysis method for identifying user behavior based on AI.
Background
In the prior art, monitoring of the working behavior of financial website personnel mostly depends on manual judgment. However, abnormal behavior of financial website personnel is highly concealed, and the abnormal factors include both subjective factors of the personnel and objective factors of their bodies, so the abnormality of their operation behavior is difficult to judge accurately and automatically.
Disclosure of Invention
The invention provides an analysis method for identifying user behavior based on AI, which solves the technical problem in the related art that abnormal behavior of financial website personnel is difficult to judge accurately and automatically.
The invention provides an analysis method for identifying user behaviors based on AI, which comprises the following steps:
step 101, collecting ordinary images and infrared images corresponding to the behaviors of financial website personnel, and obtaining an ordinary image set and an infrared image set which are ordered according to time;
step 102, constructing a neural network model, wherein the neural network model comprises a first convolution layer, a second convolution layer, a first hidden layer and an output layer; the first convolution layer takes the ordinary image as input, and the second convolution layer takes the infrared image as input;
step 103, training the weight parameters of the first convolution layer by connecting the output of the first convolution layer to a first training classification layer, wherein the classification set of the first training classification layer is Q = {q_1, q_2, …, q_k}; the classification labels of the classification set Q respectively correspond to different actions of the human body;
step 104, training the weight parameters of the second convolution layer by connecting the output of the second convolution layer to a second training classification layer, wherein the classification set of the second training classification layer is R = {r_1, r_2, …, r_m}; the classification labels of the classification set R respectively correspond to different actions of the human body;
step 105, connecting the output of the trained first convolution layer and the output of the second convolution layer with the input of the first hidden layer;
the first hidden layer comprises N hidden nodes and M decision nodes, where M is larger than N; each decision node is connected to one output of the second convolution layer; each hidden node has two inputs: one is connected to an output of the first convolution layer, and the other is connected to a decision node;
The decision function of the t-th decision node combines s_{t-1}, A^T·x_{t,2} and γ, wherein s_{t-1} is the decision function value of the (t-1)-th decision node, initialized as s_1 = 1 and s_0 = 0; A^T is the transpose of the first weight vector; γ is the second parameter; A and γ are both trainable parameters; and x_{t,2} is the input vector of the t-th decision node from the second convolution layer.
The decision function value of a decision node equals the index of the hidden node that receives the second-convolution-layer output fed into that decision node;
the output of the last hidden node is connected to the output layer; the output weight matrix of the output layer maps the hidden-node outputs to the final classification space, and the output layer has two outputs corresponding respectively to the normal classification label and the abnormal classification label.
Step 106, training the neural network model;
and step 107, inputting the ordinary image and the infrared image of the behavior to be identified into the neural network model to predict whether the behavior to be identified is abnormal.
Further, the ordinary image set is preprocessed, and only key ordinary images are retained.
Further, the preprocessing method comprises:
traversing backward from the ordinary image with the earliest time, the traversal stopping when the first distance between the currently traversed ordinary image and the ordinary image at the start of the traversal exceeds a set first distance threshold; the ordinary images between the start and the end of the traversal are deleted, and the traversal then restarts from the ordinary image at which the previous traversal stopped, until all ordinary images have been traversed.
Further, the ordinary image and the infrared image have the same format: both are three-channel RGB images, and the corresponding first convolution layer and second convolution layer each have 3 input channels.
Further, the loss function for training the first convolution layer is:
L_1 = -(1/n) · Σ_{i=1}^{n} log( exp(y_{i,q}) / Σ_{j=1}^{k} exp(y_{i,j}) )
wherein y_{i,q} is the predicted value that the i-th sample of the first convolution layer's training set belongs to the q-th class, q being the true class of the i-th sample; k is the total number of classes in the classification space; n is the total number of samples in the training set of the first convolution layer; and y_{i,j} is the predicted value that the i-th sample belongs to the j-th class.
Further, the loss function for training the second convolution layer is:
L_2 = -(1/n) · Σ_{i=1}^{n} log( exp(u_{i,r}) / Σ_{j=1}^{m} exp(u_{i,j}) )
wherein u_{i,r} is the predicted value that the i-th sample of the second convolution layer's training set belongs to the r-th class, r being the true class of the i-th sample; m is the total number of classes in the classification space; n is the total number of samples in the training set of the second convolution layer; and u_{i,j} is the predicted value that the i-th sample belongs to the j-th class.
Further, the hidden node applies the structure of the LSTM cell.
Further, the calculation process of the hidden node comprises:
calculating the input gate i_t and the candidate cell state c̃_t:
i_t = σ(W_ix·x_t + W_ih·h_{t-1} + b_i)
c̃_t = tanh(W_cx·x_t + W_ch·h_{t-1} + b_c)
wherein W_ix, W_ih, W_cx, W_ch, b_i and b_c are parameters of the hidden node, and σ and tanh are the sigmoid function and the hyperbolic tangent function, respectively;
calculating the forget gate f_t and the current cell state c_t:
f_t = σ(W_fx·x_t + W_fh·h_{t-1} + b_f)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t
wherein ⊙ denotes element-wise multiplication, and W_fx, W_fh and b_f are parameters of the hidden node;
calculating the output gate o_t and the hidden state h_t:
o_t = σ(W_ox·x_t + W_oh·h_{t-1} + b_o)
h_t = o_t ⊙ tanh(c_t)
wherein W_ox, W_oh and b_o are parameters of the hidden node;
h_t is defined as the output of the hidden node, with h_0 = 0 and c_0 = 0.
Further, the loss function for training the neural network model is:
L = -(1/n) · Σ_{i=1}^{n} [ y_i·log(ŷ_i) + (1 − y_i)·log(1 − ŷ_i) ]
wherein y_i is the true value of the i-th sample's class, taking the value 1 when the true class of the sample is normal and 0 otherwise; n is the total number of samples; and ŷ_i is the model's predicted value that the i-th sample is normal.
An analysis system for identifying user behavior based on AI, for executing the above analysis method for identifying user behavior based on AI, comprising:
the image extraction module is used for acquiring ordinary images and infrared images corresponding to the behaviors of financial website personnel and obtaining an ordinary image set and an infrared image set which are ordered according to time;
the first convolution layer training module is used for training the first convolution layer;
the second convolution layer training module is used for training the second convolution layer;
a neural network model generation module for generating a neural network model;
the neural network model training module is used for training the neural network model;
and the result generation module is used for inputting the ordinary image and the infrared image of the behavior to be identified into the neural network model to predict whether the behavior to be identified is abnormal.
The invention has the beneficial effects that:
the neural network model is obtained by training based on deep learning, and the abnormal operation behaviors of financial website personnel can be accurately judged.
Drawings
Fig. 1 is a flowchart of an analysis method for identifying user behavior based on AI of the present invention.
Detailed Description
The subject matter described herein will now be discussed with reference to example embodiments. It is to be understood that these embodiments are merely discussed so that those skilled in the art may better understand and implement the subject matter described herein and that changes may be made in the function and arrangement of the elements discussed without departing from the scope of the disclosure herein. Various examples may omit, replace, or add various procedures or components as desired. In addition, features described with respect to some examples may be combined in other examples as well.
Example 1
As shown in fig. 1, an analysis method for identifying user behavior based on AI includes the following steps:
step 101, collecting ordinary images and infrared images corresponding to the behaviors of financial website personnel, and obtaining an ordinary image set and an infrared image set which are ordered according to time;
preprocessing the ordinary image set, and keeping only key ordinary images;
the preprocessing method comprises:
traversing backward from the ordinary image with the earliest time, the traversal stopping when the first distance between the currently traversed ordinary image and the ordinary image at the start of the traversal exceeds a set first distance threshold; the ordinary images between the start and the end of the traversal are deleted, and the traversal then restarts from the ordinary image at which the previous traversal stopped, until all ordinary images have been traversed.
In one embodiment of the invention, the first distance is the Euclidean distance.
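The key-frame preprocessing described above can be sketched as follows (a minimal illustration; the function and parameter names are not from the patent, and images are assumed to be arrays of equal shape):

```python
import numpy as np

def keep_key_images(images, threshold):
    """Keep only key frames from a time-ordered image list.

    Traverses from the earliest image, drops every image whose Euclidean
    distance to the traversal-start image is within the threshold, and
    restarts the traversal from the first image that exceeds it.
    """
    if not images:
        return []
    kept = [images[0]]
    anchor = images[0]
    for img in images[1:]:
        # "first distance": Euclidean distance between the images
        dist = np.linalg.norm(np.asarray(img, float) - np.asarray(anchor, float))
        if dist > threshold:
            kept.append(img)
            anchor = img  # restart the traversal from this image
    return kept
```

With a suitable threshold, near-duplicate consecutive frames are deleted and only frames showing a sufficiently different scene survive.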
Step 102, constructing a neural network model, wherein the neural network model comprises a first convolution layer, a second convolution layer, a first hiding layer and an output layer, the first convolution layer inputs a common image, and the second convolution layer inputs an infrared image;
in one embodiment of the present invention, the ordinary image and the infrared image have the same format: both are three-channel RGB images, and the corresponding first convolution layer and second convolution layer each have 3 input channels.
Step 103, training the weight parameters of the first convolution layer by connecting the output of the first convolution layer to a first training classification layer, wherein the classification set of the first training classification layer is Q = {q_1, q_2, …, q_k}; the classification labels of the classification set Q respectively correspond to different actions of the human body, such as standing, sitting, lying prone and lying supine, and the classification labels of the training samples can be annotated manually.
The loss function for training the first convolution layer is:
L_1 = -(1/n) · Σ_{i=1}^{n} log( exp(y_{i,q}) / Σ_{j=1}^{k} exp(y_{i,j}) )
wherein y_{i,q} is the predicted value that the i-th sample of the first convolution layer's training set belongs to the q-th class, q being the true class of the i-th sample; k is the total number of classes in the classification space; n is the total number of samples in the training set of the first convolution layer; and y_{i,j} is the predicted value that the i-th sample belongs to the j-th class;
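The per-class training loss described here is a classification loss over the k action classes; a minimal sketch assuming the standard softmax cross-entropy form (an assumption — the exact formula is not reproduced in the source text; names are illustrative):

```python
import numpy as np

def softmax_cross_entropy(scores, true_classes):
    """Mean softmax cross-entropy over a batch.

    scores: (n, k) array of predicted class scores,
    true_classes: (n,) array of integer true-class labels.
    """
    scores = np.asarray(scores, float)
    # numerically stable log-softmax
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    n = scores.shape[0]
    # negative mean log-probability assigned to each sample's true class
    return -log_probs[np.arange(n), true_classes].mean()
```

A confident, correct score vector yields a near-zero loss, while a uniform score vector yields log(k).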
Step 104, training the weight parameters of the second convolution layer by connecting the output of the second convolution layer to a second training classification layer, wherein the classification set of the second training classification layer is R = {r_1, r_2, …, r_m}; the classification labels of the classification set R respectively correspond to different actions of the human body, such as standing, sitting, lying prone and lying supine, and the classification labels of the training samples can be annotated manually;
the loss function for training the second convolution layer is:
L_2 = -(1/n) · Σ_{i=1}^{n} log( exp(u_{i,r}) / Σ_{j=1}^{m} exp(u_{i,j}) )
wherein u_{i,r} is the predicted value that the i-th sample of the second convolution layer's training set belongs to the r-th class, r being the true class of the i-th sample; m is the total number of classes in the classification space; n is the total number of samples in the training set of the second convolution layer; and u_{i,j} is the predicted value that the i-th sample belongs to the j-th class;
step 105, connecting the output of the trained first convolution layer and the output of the second convolution layer with the input of the first hidden layer;
the first hidden layer comprises N hidden nodes and M decision nodes, where M is larger than N; each decision node is connected to one output of the second convolution layer; each hidden node has two inputs: one is connected to an output of the first convolution layer, and the other is connected to a decision node;
The decision function of the t-th decision node combines s_{t-1}, A^T·x_{t,2} and γ, wherein s_{t-1} is the decision function value of the (t-1)-th decision node, initialized as s_1 = 1 and s_0 = 0; A^T is the transpose of the first weight vector; γ is the second parameter; and x_{t,2} is the input vector of the t-th decision node from the second convolution layer.
The decision function value of a decision node equals the index of the hidden node that receives the second-convolution-layer output fed into that decision node;
for example, if s_t = 2, the output of the second convolution layer that is input to the t-th decision node is routed to the 2nd hidden node;
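The routing rule just described can be sketched as follows (a simplified illustration; the names are hypothetical, and in this sketch a later decision node overwrites an earlier one when both select the same hidden node):

```python
def route_to_hidden(decision_values, second_conv_outputs, num_hidden):
    """Route second-convolution-layer outputs to hidden nodes.

    decision_values: list of s_t values (1-based hidden-node indices),
    second_conv_outputs: the outputs fed to the corresponding decision nodes,
    num_hidden: N, the number of hidden nodes.
    """
    routed = [None] * num_hidden  # one second-branch input slot per hidden node
    for s_t, out in zip(decision_values, second_conv_outputs):
        routed[s_t - 1] = out  # s_t selects which hidden node receives `out`
    return routed
```

For instance, with decision values [2, 1, 2], the first and third outputs go to hidden node 2 (the third overwriting the first) and the second output goes to hidden node 1.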
the hidden node applies the structure of the LSTM unit;
the hidden node calculation process is as follows:
calculating the input gate i_t and the candidate cell state c̃_t:
i_t = σ(W_ix·X_t + W_ih·h_{t-1} + b_i)
c̃_t = tanh(W_cx·X_t + W_ch·h_{t-1} + b_c)
wherein W_ix, W_ih, W_cx, W_ch, b_i and b_c are parameters of the hidden node, and σ and tanh are the sigmoid function and the hyperbolic tangent function, respectively;
calculating the forget gate f_t and the current cell state c_t:
f_t = σ(W_fx·X_t + W_fh·h_{t-1} + b_f)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t
wherein ⊙ denotes element-wise multiplication, and W_fx, W_fh and b_f are parameters of the hidden node;
calculating the output gate o_t and the hidden state h_t:
o_t = σ(W_ox·X_t + W_oh·h_{t-1} + b_o)
h_t = o_t ⊙ tanh(c_t)
wherein W_ox, W_oh and b_o are parameters of the hidden node;
h_t is defined as the output of the hidden node, with h_0 = 0 and c_0 = 0;
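The gate computations above follow the standard LSTM cell; a minimal numpy sketch (the function name and the parameter dictionary p are illustrative, with keys named after the parameters in the text):

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One step of the standard LSTM cell used by the hidden nodes."""
    i_t = _sigmoid(p["W_ix"] @ x_t + p["W_ih"] @ h_prev + p["b_i"])     # input gate
    c_hat = np.tanh(p["W_cx"] @ x_t + p["W_ch"] @ h_prev + p["b_c"])    # candidate state
    f_t = _sigmoid(p["W_fx"] @ x_t + p["W_fh"] @ h_prev + p["b_f"])     # forget gate
    c_t = f_t * c_prev + i_t * c_hat                                    # element-wise ⊙
    o_t = _sigmoid(p["W_ox"] @ x_t + p["W_oh"] @ h_prev + p["b_o"])     # output gate
    h_t = o_t * np.tanh(c_t)                                            # hidden state
    return h_t, c_t
```

Chaining `lstm_step` over the hidden nodes with h_0 = 0 and c_0 = 0 reproduces the recurrence described in the text.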
The output of the last hidden node is connected to the output layer; the output weight matrix of the output layer maps the hidden-node outputs to the final classification space, and the output layer has two outputs corresponding respectively to the normal classification label and the abnormal classification label.
X_t = θ·x_{t,1} + μ·x_{t,2}
wherein x_{t,1} is the input vector from the first convolution layer and x_{t,2} is the input vector from the second convolution layer for the t-th node, and θ and μ are trainable parameters of the hidden node.
Step 106, training the neural network model;
A training sample comprises a set of ordinary images and the corresponding set of infrared images.
The training loss function is:
L = -(1/n) · Σ_{i=1}^{n} [ y_i·log(ŷ_i) + (1 − y_i)·log(1 − ŷ_i) ]
wherein y_i is the true value of the i-th sample's class, taking the value 1 (true label normal) or 0 (true label abnormal); n is the total number of samples; and ŷ_i is the model's predicted value that the i-th sample is normal.
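This binary cross-entropy can be sketched directly from the definitions above (a minimal illustration; the clipping constant is an implementation detail, not from the patent):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred):
    """Binary cross-entropy: y_true is 1 for a normal sample and 0 for an
    abnormal one; y_pred is the predicted probability that the sample is normal.
    """
    y_true = np.asarray(y_true, float)
    # clip predictions away from 0 and 1 to avoid log(0)
    y_pred = np.clip(np.asarray(y_pred, float), 1e-12, 1 - 1e-12)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
```

Perfect predictions give a loss near zero; a 0.5 prediction for a normal sample gives log 2.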
And step 107, inputting the ordinary image and the infrared image of the behavior to be identified into the neural network model to predict whether the behavior to be identified is abnormal.
According to the invention, the ordinary image, which carries more action features, and the infrared image, which carries hidden features related to those actions, are considered jointly. The time lag between an action and its ordinary image is extremely short, whereas the hidden features of the infrared image, such as the thermal features of each part of the body, reflect action features from an earlier time; the decision nodes in the hidden layer reduce this temporal asynchrony between the ordinary-image features and the infrared-image features, and deep-learning optimization yields a more accurate judgment result.
Based on the above analysis method for identifying user behavior based on AI, the invention provides an analysis system for identifying user behavior based on AI, which comprises:
the image extraction module is used for acquiring ordinary images and infrared images corresponding to the behaviors of financial website personnel and obtaining an ordinary image set and an infrared image set which are ordered according to time;
the first convolution layer training module is used for training the first convolution layer;
the second convolution layer training module is used for training the second convolution layer;
a neural network model generation module for generating a neural network model;
the neural network model training module is used for training the neural network model;
and the result generation module, used for inputting the ordinary image and the infrared image of the behavior to be identified into the neural network model to predict whether the behavior to be identified is abnormal.
The embodiment has been described above, but it is not limited to the specific implementation described, which is merely illustrative and not restrictive; the many variations that those of ordinary skill in the art can make in light of this disclosure all fall within the scope of this embodiment.
Claims (10)
1. An analysis method for identifying user behavior based on AI, which is characterized by comprising the following steps:
step 101, collecting ordinary images and infrared images corresponding to the behaviors of financial website personnel, and obtaining an ordinary image set and an infrared image set which are ordered according to time;
step 102, constructing a neural network model, wherein the neural network model comprises a first convolution layer, a second convolution layer, a first hiding layer and an output layer, the first convolution layer inputs a common image, and the second convolution layer inputs an infrared image;
step 103, training the weight parameters of the first convolution layer by connecting the output of the first convolution layer to a first training classification layer, wherein the classification set of the first training classification layer is Q = {q_1, q_2, …, q_k}; the classification labels of the classification set Q respectively correspond to different actions of the human body;
step 104, training the weight parameters of the second convolution layer by connecting the output of the second convolution layer to a second training classification layer, wherein the classification set of the second training classification layer is R = {r_1, r_2, …, r_m}; the classification labels of the classification set R respectively correspond to different actions of the human body;
step 105, connecting the output of the trained first convolution layer and the output of the second convolution layer with the input of the first hidden layer;
the first hidden layer comprises N hidden nodes and M decision nodes, where M is larger than N; each decision node is connected to one output of the second convolution layer; each hidden node has two inputs: one is connected to an output of the first convolution layer, and the other is connected to a decision node;
the output of the last hidden node is connected to the output layer; the output weight matrix of the output layer maps the hidden-node outputs to the final classification space, and the output layer has two outputs corresponding respectively to the normal classification label and the abnormal classification label;
step 106, training the neural network model;
and step 107, inputting the ordinary image and the infrared image of the behavior to be identified into the neural network model to predict whether the behavior to be identified is abnormal.
2. The AI-based analysis method of claim 1, wherein the ordinary image set is preprocessed to retain only key ordinary images.
3. The AI-based analysis method for identifying user behavior of claim 2, wherein the preprocessing method comprises:
traversing backward from the ordinary image with the earliest time, the traversal stopping when the first distance between the currently traversed ordinary image and the ordinary image at the start of the traversal exceeds a set first distance threshold; the ordinary images between the start and the end of the traversal are deleted, and the traversal then restarts from the ordinary image at which the previous traversal stopped, until all ordinary images have been traversed.
4. The AI-based analysis method of claim 1, wherein the ordinary image and the infrared image have the same format: both are three-channel RGB images, and the corresponding first convolution layer and second convolution layer each have 3 input channels.
5. The AI-based analysis method of claim 1, wherein the loss function for training the first convolution layer is:
L_1 = -(1/n) · Σ_{i=1}^{n} log( exp(y_{i,q}) / Σ_{j=1}^{k} exp(y_{i,j}) )
wherein y_{i,q} is the predicted value that the i-th sample of the first convolution layer's training set belongs to the q-th class, q being the true class of the i-th sample; k is the total number of classes in the classification space; n is the total number of samples in the training set of the first convolution layer; and y_{i,j} is the predicted value that the i-th sample belongs to the j-th class.
6. The AI-based analysis method of claim 1, wherein the loss function for training the second convolution layer is:
L_2 = -(1/n) · Σ_{i=1}^{n} log( exp(u_{i,r}) / Σ_{j=1}^{m} exp(u_{i,j}) )
wherein u_{i,r} is the predicted value that the i-th sample of the second convolution layer's training set belongs to the r-th class, r being the true class of the i-th sample; m is the total number of classes in the classification space; n is the total number of samples in the training set of the second convolution layer; and u_{i,j} is the predicted value that the i-th sample belongs to the j-th class.
7. The AI-based analysis method of claim 1, wherein the decision function of the t-th decision node combines s_{t-1}, A^T·x_{t,2} and γ, wherein s_{t-1} is the decision function value of the (t-1)-th decision node, initialized as s_1 = 1 and s_0 = 0; A^T is the transpose of the first weight vector; γ is the second parameter; and x_{t,2} is the input vector of the t-th decision node from the second convolution layer;
the decision function value of a decision node equals the index of the hidden node that receives the second-convolution-layer output fed into that decision node.
8. The AI-based analysis method for identifying user behavior of claim 1, wherein the hidden node applies a structure of LSTM cells; the calculation process of the hidden node comprises the following steps:
calculating the input gate i_t and the candidate cell state c̃_t:
i_t = σ(W_ix·x_t + W_ih·h_{t-1} + b_i)
c̃_t = tanh(W_cx·x_t + W_ch·h_{t-1} + b_c)
wherein W_ix, W_ih, W_cx, W_ch, b_i and b_c are parameters of the hidden node, and σ and tanh are the sigmoid function and the hyperbolic tangent function, respectively;
calculating the forget gate f_t and the current cell state c_t:
f_t = σ(W_fx·x_t + W_fh·h_{t-1} + b_f)
c_t = f_t ⊙ c_{t-1} + i_t ⊙ c̃_t
wherein ⊙ denotes element-wise multiplication, and W_fx, W_fh and b_f are parameters of the hidden node;
calculating the output gate o_t and the hidden state h_t:
o_t = σ(W_ox·x_t + W_oh·h_{t-1} + b_o)
h_t = o_t ⊙ tanh(c_t)
wherein W_ox, W_oh and b_o are parameters of the hidden node;
h_t is defined as the output of the hidden node, with h_0 = 0 and c_0 = 0.
9. The AI-based analysis method of claim 1, wherein the loss function for training the neural network model is:
L = -(1/n) · Σ_{i=1}^{n} [ y_i·log(ŷ_i) + (1 − y_i)·log(1 − ŷ_i) ]
wherein y_i is the true value of the i-th sample's class, taking the value 1 when the true class of the sample is normal and 0 otherwise; n is the total number of samples; and ŷ_i is the model's predicted value that the i-th sample is normal.
10. An AI-based analysis system for identifying user behavior, characterized in that it is configured to perform an AI-based analysis method as claimed in any one of claims 1 to 9, the AI-based analysis system comprising:
the image extraction module is used for acquiring ordinary images and infrared images corresponding to the behaviors of financial website personnel and obtaining an ordinary image set and an infrared image set which are ordered according to time;
the first convolution layer training module is used for training the first convolution layer;
the second convolution layer training module is used for training the second convolution layer;
a neural network model generation module for generating a neural network model;
the neural network model training module is used for training the neural network model;
and the result generation module, used for inputting the ordinary image and the infrared image of the behavior to be identified into the neural network model to predict whether the behavior to be identified is abnormal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310580846.6A CN116524599A (en) | 2023-05-22 | 2023-05-22 | Analysis system and method for identifying user behavior based on AI |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310580846.6A CN116524599A (en) | 2023-05-22 | 2023-05-22 | Analysis system and method for identifying user behavior based on AI |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116524599A true CN116524599A (en) | 2023-08-01 |
Family
ID=87399410
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310580846.6A Pending CN116524599A (en) | 2023-05-22 | 2023-05-22 | Analysis system and method for identifying user behavior based on AI |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116524599A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||