CN115713099A - Model design method, device, equipment and storage medium - Google Patents

Model design method, device, equipment and storage medium

Info

Publication number
CN115713099A
CN115713099A (application CN202310000603.0A)
Authority
CN
China
Prior art keywords
model
target
data
network model
dnn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310000603.0A
Other languages
Chinese (zh)
Other versions
CN115713099B (en)
Inventor
高庆
沈彬剑
陈文浩
张旭立
王伟
岑浩铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shuiyou Information Technology Co ltd
Original Assignee
Shuiyou Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shuiyou Information Technology Co ltd
Priority to CN202310000603.0A
Publication of CN115713099A
Application granted
Publication of CN115713099B
Legal status: Active
Anticipated expiration

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a model design method, apparatus, device and storage medium, relating to the field of computer technology, and comprising the following steps: acquiring target table data and obtaining a corresponding data set based on the target table data; integrating all the data sets with a preset integration method to obtain a target data set; and constructing a target network model based on the target data set, an original DNN network model, an original naive Bayes model and an original CNN network model, so that the target network model predicts a received service table to be analyzed and a preset script replacement operation is performed according to the prediction result. The deep-learning target network model is built from the DNN model, the naive Bayes model and the CNN model: the DNN model makes a preliminary prediction, the naive Bayes model performs probability analysis on that prediction, and the CNN model makes the final decision. Scripts are thereby replaced automatically, which improves operation and maintenance efficiency, saves labor cost and reduces the risk of human error.

Description

Model design method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a model design method, apparatus, device, and storage medium.
Background
As a system grows large and stabilizes, the production system faces not only system-level problems but also business-level problems such as erroneous data, redundant data and dirty data. When a data-related business problem arises, operation and maintenance personnel usually have to query the background database, judge the situation from the characteristic fields of the business table, and then provide an operation and maintenance script. Operation and maintenance staff are limited, yet as the service system keeps evolving the probability of repetitive problems rises sharply, which hurts maintenance efficiency.
Disclosure of Invention
In view of this, the present invention provides a model design method, apparatus, device and storage medium, which can implement automatic replacement of scripts, improve operation and maintenance efficiency, save labor cost and reduce risk of human error. The specific scheme is as follows:
in a first aspect, the present application discloses a model design method, comprising:
acquiring target table data and acquiring a corresponding data set based on the target table data;
integrating all the data sets by using a preset integration method to obtain a target data set;
and constructing a target network model based on the target data set, the original DNN network model, the original naive Bayes model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model and perform preset script replacement operation according to the prediction result.
Optionally, the obtaining target table data and obtaining a corresponding data set based on the target table data include:
acquiring target table data corresponding to a target service table, and generating data based on the characteristic field data of the target table data to obtain a data set for the scenario to which the characteristic field data belongs.
Optionally, the integrating all the data sets by using a preset integration method to obtain a target data set includes:
extracting the characteristic field data of all the data sets to obtain a current array;
executing preset normalization processing operation on the current array to obtain a normalization array;
performing preset labeling operation on data in the normalized array to obtain a target array, and acquiring the target data set based on the target array; wherein the target data set comprises training data, validation data, and test data.
Optionally, the constructing a target network model based on the target data set, the original DNN network model, the original naive bayes model, and the original CNN network model includes:
acquiring a trained DNN network model based on the target data set and the original DNN network model;
acquiring first output data corresponding to the trained DNN network model, and acquiring a post-training naive Bayes model based on the first output data and the original naive Bayes model;
acquiring second output data corresponding to the post-training naive Bayes model, and acquiring a post-training CNN network model based on an original CNN network model, the target data set, the first output data and the second output data;
and constructing the target network model based on the trained DNN network model, the trained naive Bayes model and the trained CNN network model.
Optionally, the obtaining a trained DNN network model based on the target data set and the original DNN network model includes:
determining DNN model parameters based on the training data and the validation data in the target dataset;
generating a current DNN network model based on the DNN model parameters and the original DNN network model;
adjusting the DNN model parameters in the current DNN network model through a first preset loss function and a first preset accuracy calculation formula to obtain DNN model final parameters meeting current requirements;
and acquiring the trained DNN network model based on the final DNN model parameters and the current DNN network model.
Optionally, the obtaining a post-training naive bayes model based on the first output data and the original naive bayes model comprises:
setting prior probability, and determining the current posterior probability based on the prior probability;
updating the current posterior probability by utilizing the first output data to obtain a target posterior probability;
and acquiring the post-training naive Bayes model based on the target posterior probability and the original naive Bayes model.
Optionally, the obtaining a trained CNN network model based on the original CNN network model, the target data set, the first output data, and the second output data includes:
inputting the target data set, the first output data and the second output data into the original CNN network model to execute a preset training mode so as to obtain a current output value;
acquiring a second preset loss function and a second preset accuracy calculation formula so as to adjust the preset CNN parameters in the original CNN network model based on the current output value, the second preset loss function and the second preset accuracy calculation formula to obtain target CNN parameters;
acquiring the trained CNN network model based on the target CNN parameter and the original CNN network model;
correspondingly, the constructing a target network model based on the target data set, the original DNN network model, the original naive Bayesian model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model and perform a preset script replacement operation according to a prediction result comprises the following steps:
and constructing the target network model based on the target data set, the trained DNN network model, the trained naive Bayesian model and the trained CNN network model so as to predict the received service table to be analyzed, automatically acquiring a corresponding target script according to the prediction result, and replacing the current script with the target script by a preset script replacement method.
In a second aspect, the present application discloses a model design apparatus, comprising:
the first data set acquisition module is used for acquiring target table data and acquiring a corresponding data set based on the target table data;
the second data set acquisition module is used for integrating all the data sets by using a preset integration method to obtain a target data set;
and the model construction module is used for constructing a target network model based on the target data set, the original DNN network model, the original naive Bayesian model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model and perform preset script replacement operation according to the prediction result.
In a third aspect, the present application discloses an electronic device, comprising:
a memory for storing a computer program;
a processor for executing said computer program for carrying out the steps of the model design method as disclosed in the foregoing.
In a fourth aspect, the present application discloses a computer readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements a model design method as disclosed in the preceding.
As can be seen, the present application provides a model design method, comprising: acquiring target table data and acquiring a corresponding data set based on the target table data; integrating all the data sets by using a preset integration method to obtain a target data set; and constructing a target network model based on the target data set, the original DNN network model, the original naive Bayes model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model and perform a preset script replacement operation according to the prediction result. In this way, a deep-learning target network model is constructed from the DNN network model, the naive Bayes model and the CNN network model: the DNN model makes a preliminary prediction of the service situation, the naive Bayes model performs probability analysis on the DNN result, and the CNN model makes the final decision from the target data set, the DNN output and the naive Bayes output, analyzing and outputting the final prediction result. Scripts are thus replaced automatically according to the prediction result; combining several models improves fit and accuracy, optimizing the algorithms inside the model shortens training time, operation and maintenance efficiency is improved, labor cost is saved, and the risk of human error is reduced.
Drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a model design method disclosed herein;
FIG. 2 is a flow chart of a particular model design method disclosed herein;
FIG. 3 is a flow chart of data set creation disclosed herein;
FIG. 4 is a flow chart of a particular model design method disclosed herein;
FIG. 5 is a schematic diagram of a network structure of the model disclosed in the present application;
FIG. 6 is a schematic diagram of a deep learning model framework disclosed herein;
FIG. 7 is a schematic structural diagram of a model design apparatus provided in the present application;
fig. 8 is a block diagram of an electronic device provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, as a system grows large and runs stably, the production system faces not only system-level problems but also business-level problems, such as erroneous data, redundant data and dirty data. When a data-related business problem arises, operation and maintenance personnel usually have to query the background database, judge the situation from the characteristic fields of the business table, and then issue an operation and maintenance script. Operation and maintenance staff are limited, yet as the service system keeps evolving the probability of repetitive problems rises sharply, which hurts maintenance efficiency. Therefore, the model design method of the present application can realize automatic replacement of scripts, improve operation and maintenance efficiency, save labor cost and shorten model training time.
The embodiment of the invention discloses a model design method, which is shown in figure 1 and comprises the following steps:
step S11: target table data is obtained, and a corresponding data set is obtained based on the target table data.
In this embodiment, target table data is obtained, and a corresponding data set is obtained based on the target table data. Specifically, target table data corresponding to a target service table is obtained, and data is generated from the characteristic field data of the target table data to obtain a data set for the scenario to which the characteristic field data belongs. It can be understood that, based on the target table data collected for the service, data is generated according to the characteristic field data of the target table data to form several data sets. For example, the table data related to a user's business problem is collected for two scenarios, say three business tables; since the characteristic field values of the business problem differ between the two scenarios, data is generated per scenario according to the characteristic fields, 500 records for each scenario, giving 500 records for scenario A and 500 records for scenario B.
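Purely as an illustrative sketch (not part of the patent text), the per-scenario data sets described above could be gathered roughly as follows; the table name, field names, SQL predicates and the pandas-based approach are all assumptions.

```python
import pandas as pd

def build_scenario_dataset(conn, table, feature_fields, scenario_filter, limit=500):
    """Collect `limit` rows of the characteristic fields for one business scenario.

    `scenario_filter` is a SQL predicate describing the characteristic-field
    values of that scenario (hypothetical example: "status = 'E' AND amount < 0").
    """
    query = (f"SELECT {', '.join(feature_fields)} FROM {table} "
             f"WHERE {scenario_filter} LIMIT {limit}")
    return pd.read_sql(query, conn)

# Hypothetical usage: 500 records for scenario A and 500 for scenario B
# dataset_a = build_scenario_dataset(conn, "biz_order", fields, "<scenario A condition>")
# dataset_b = build_scenario_dataset(conn, "biz_order", fields, "<scenario B condition>")
```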
Step S12: and integrating all the data sets by using a preset integration method to obtain a target data set.
In this embodiment, after the target table data is acquired and the corresponding data sets are obtained from it, all the data sets are integrated with a preset integration method to obtain a target data set. Specifically, all data sets undergo data cleaning, normalization and labeling, and the several data sets are merged into one data set, namely the target data set, which comprises training data, verification data and test data. It can be understood that, for example, the characteristic fields of the three service tables are extracted to form a one-dimensional array and larger values are normalized; the processed data set then contains 1000 records (500 for scenario A and 500 for scenario B), which are split according to a preset ratio, for example 650 records as the training set, 300 as the verification set and 50 as the test set.
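As a minimal sketch of this integration step (the min-max scaling rule, the 0/1 result values and the use of NumPy are assumptions; only the 650/300/50 split follows the example above):

```python
import numpy as np

def integrate(datasets, labels, seed=0):
    """Clean, normalize, label and split the per-scenario data sets into one target data set."""
    rows = []
    for data, label in zip(datasets, labels):
        data = np.asarray(data, dtype=float)
        mins, maxs = data.min(axis=0), data.max(axis=0)
        data = (data - mins) / np.maximum(maxs - mins, 1e-9)   # map each field into [0, 1]
        label_col = np.full((len(data), 1), label)             # result value: 0 = scenario A, 1 = scenario B
        rows.append(np.hstack([data, label_col]))
    merged = np.vstack(rows)
    np.random.default_rng(seed).shuffle(merged)                # shuffle before splitting
    return merged[:650], merged[650:950], merged[950:]         # training / verification / test
```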
Step S13: and constructing a target network model based on the target data set, the original DNN network model, the original naive Bayes model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model and perform preset script replacement operation according to the prediction result.
In this embodiment, after all the data sets are integrated with a preset integration method to obtain a target data set, a target network model is constructed based on the target data set, an original DNN (Deep Neural Network, a fully connected neural network) model, an original naive Bayes model and an original CNN (Convolutional Neural Network) model, so that the received service table to be analyzed is predicted with the target network model and a preset script replacement operation is performed according to the prediction result. It can be understood that a trained DNN network model for preliminarily distinguishing business problems is established from DNN model parameters obtained with the training data and verification data of the data set; a prior probability is set, and the posterior probability is updated in real time from the DNN training result of each training round, so as to establish a probability model (namely the trained naive Bayes model) for assessing the accuracy of the DNN model based on the posterior probability; the target data set, the first output data and the second output data are input into the original CNN network model to execute a preset training mode and obtain a current output value; a second preset loss function and a second preset accuracy calculation formula are obtained, so that the preset CNN parameters in the original CNN network model are adjusted based on the current output value, the second preset loss function and the second preset accuracy calculation formula to obtain target CNN parameters; and a trained CNN network model for finally identifying service problems is established based on the target CNN parameters and the original CNN network model. The target network model is then constructed from the trained DNN model, the trained naive Bayes model and the trained CNN model, so as to predict the received service table to be analyzed and perform the preset script replacement operation according to the prediction result. It can be understood that the input data are used to train the CNN model parameters, and that the CNN model parameters in the original CNN network model are the preset CNN parameters.
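To make the three-stage structure concrete, the following minimal sketch (class and method names are illustrative assumptions, not the patent's implementation) shows how the three trained models could be chained at prediction time:

```python
import numpy as np

class TargetNetworkModel:
    """Chains the DNN classifier, the naive Bayes probability model and the CNN decision model."""

    def __init__(self, dnn, bayes, cnn):
        self.dnn, self.bayes, self.cnn = dnn, bayes, cnn

    def predict(self, rows: np.ndarray) -> np.ndarray:
        y1 = self.dnn.predict(rows)                      # preliminary prediction of the service situation
        y2 = self.bayes.posterior(y1)                    # probability that the DNN result is correct
        cnn_input = np.hstack([rows, y1.reshape(-1, 1), y2.reshape(-1, 1)])
        return self.cnn.predict(cnn_input)               # final decision on the business problem
```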
It should be noted that the data in this scheme includes not only text data but also logical and integer data, stored in a structured database; the text data types in the business table carry no contextual word semantics, and most training data fields are logical or integer data. The target network model is a deep learning model comprising a DNN model, a naive Bayes model and a CNN model. The input of the naive Bayes model comes from the preliminary prediction result of the DNN model (when applied on top of the DNN model, naive Bayes does not look at the original data); because repeatedly feeding the original data through the DNN model produces different results in each round, the posterior probability of the naive Bayes model is updated every round, so that its probability distribution matches reality better as the DNN model approaches its optimum. The DNN model and the CNN model are classification models that extract features from the data and classify on them; the training data of the invention contains not only text data without contextual word semantics but also logical and integer data as characteristic values, a data environment that is very suitable for extracting characteristic values with DNN and CNN models and then classifying them. In the invention, the DNN model preliminarily predicts the service situation, the naive Bayes model performs probability analysis on the DNN result, and the CNN model finally makes the decision from the input data, the DNN output and the naive Bayes output, analyzing and outputting the final prediction result; adding the probability model and the decision model makes the final output fit better. The method mainly distinguishes the characteristic fields of the database business table data through machine learning and outputs the operation and maintenance script. The fields could also be distinguished by analyzing their meanings with a labeling method, but labeling amounts to manually entering notes that are stored in a computer program; in practice a business table may contain hundreds of fields and several business tables may be associated with one key business table, so distinguishing by labeling as in the prior art wastes labor cost. The present method is based on deep learning and only needs big-data support: the meaning of each field does not need to be labeled manually, and the model trained with tuned parameters intelligently identifies abnormal service situations, so that an operation and maintenance scheme is provided intelligently and labor cost is reduced. Issuing the operation and maintenance script intelligently through the target network model not only saves labor time and improves operation and maintenance efficiency, but also offers a new direction for AIOps (intelligent operation and maintenance).
It can be seen that the present application provides a model design method, comprising: acquiring target table data and acquiring a corresponding data set based on the target table data; integrating all the data sets by using a preset integration method to obtain a target data set; and constructing a target network model based on the target data set, the original DNN network model, the original naive Bayes model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model and perform a preset script replacement operation according to the prediction result. In this way, a deep-learning target network model is constructed from the DNN network model, the naive Bayes model and the CNN network model: the DNN model makes a preliminary prediction of the service situation, the naive Bayes model performs probability analysis on the DNN result, and the CNN model makes the final decision from the target data set, the DNN output and the naive Bayes output, analyzing and outputting the final prediction result. Scripts are thus replaced automatically according to the prediction result; combining several models improves fit and accuracy, optimizing the algorithms inside the model shortens training time, operation and maintenance efficiency is improved, labor cost is saved, and the risk of human error is reduced.
Referring to fig. 2, the embodiment of the present invention discloses a model design method, and the embodiment further describes and optimizes the technical solution with respect to the previous embodiment.
Step S21: target table data is obtained, and a corresponding data set is obtained based on the target table data.
Step S22: and extracting the characteristic field data of all the data sets to obtain a current array.
In this embodiment, after the corresponding data sets are obtained based on the target table data, the characteristic field data of all the data sets is extracted to obtain a current array. It can be understood that, as shown in fig. 3, a data cleansing operation is performed first: the characteristic field values of the three service tables are extracted and stored in a one-dimensional array, and the useless fields are discarded.
Step S23: and executing preset normalization processing operation on the current array to obtain a normalization array.
In this embodiment, after the characteristic field data of all the data sets is extracted to obtain a current array, a preset normalization processing operation is performed on the current array to obtain a normalized array. It can be understood that the data normalization process maps the field values into the range [0,1] to obtain the normalized array.
Step S24: and performing preset labeling operation on the data in the normalized array to obtain a target array, and acquiring the target data set based on the target array.
In this embodiment, after the preset normalization processing operation is performed on the current array to obtain a normalized array, a preset labeling operation is performed on the data in the normalized array to obtain a target array, and the target data set is acquired based on the target array; the target data set comprises training data, verification data and test data. It can be understood that, since the data is generated for two different processing scenarios under the service, a column of result values is appended to each group of data in the one-dimensional array to identify which processing scenario the data belongs to; for example, the result values are set to 0 and 1 respectively. 65% of the data in the target array is shuffled and stored as training data, 30% as verification data and 5% as test data.
Step S25: and constructing a target network model based on the target data set, the original DNN network model, the original naive Bayes model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model and perform preset script replacement operation according to the prediction result.
For the specific content of the above steps S21 and S25, reference may be made to the corresponding content disclosed in the foregoing embodiments, and details are not repeated here.
Therefore, the data of the target table is obtained, and the corresponding data set is obtained based on the data of the target table; extracting the characteristic field data of all the data sets to obtain a current array; executing preset normalization processing operation on the current array to obtain a normalized array; performing preset labeling operation on data in the normalized array to obtain a target array, and acquiring the target data set based on the target array; constructing a target network model based on the target data set, the original DNN network model, the original naive Bayes model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model, and performing preset script replacement operation according to the prediction result to realize automatic replacement of the script, wherein the fitting performance and the accuracy are improved by combining a plurality of models; the training time is shortened by optimizing the algorithm in the model; the operation and maintenance efficiency is improved, the labor cost is saved, and the risk of human errors is reduced.
Referring to fig. 4, the embodiment of the present invention discloses a model design method, and the embodiment further describes and optimizes the technical solution with respect to the previous embodiment.
Step S31: target table data is obtained, and a corresponding data set is obtained based on the target table data.
Step S32: and integrating all the data sets by using a preset integration method to obtain a target data set.
Step S33: obtaining a trained DNN network model based on the target dataset and the original DNN network model.
In this embodiment, after all the data sets are integrated by using a preset integration method to obtain a target data set, a trained DNN network model is obtained based on the target data set and the original DNN network model. Specifically, DNN model parameters are determined based on the training data and the validation data in the target dataset; generating a current DNN network model based on the DNN model parameters and the original DNN network model; adjusting the DNN model parameters in the current DNN network model through a first preset loss function and a first preset accuracy calculation formula to obtain DNN model final parameters meeting current requirements; establishing the trained DNN network model for preliminarily distinguishing business problems based on the final DNN model parameters and the current DNN network model.
It can be understood that the network structure of the original DNN network model is designed and developed based on the fully connected neural network in deep learning. As shown in fig. 5, the main structure of the DNN network model comprises an input layer, two nested fully connected network layers (e.g., a fully connected neural network layer with 256 neurons and a fully connected neural network layer with 128 neurons), and an output layer; it also comprises a BN (normalization processing algorithm in neural networks) layer, a LeakyReLU (activation function) layer, a Dropout (random deletion of hidden neurons) layer, and a Sigmoid (activation function) layer. The original DNN network model is trained on the data of the training set and verification set; a loss function and an accuracy are defined to judge the training result of the model, the learning rate in the optimizer is adjusted so that the model weight parameters become optimal, and the output result is fed into the naive Bayes model and the CNN model. The training process of the original DNN network model under this network structure is as follows: first, the input layer parameters are two-dimensional data, and the dimension of the input data in the input layer is [K, V], where K is the number of data items and V is the number of characteristic fields. For example, the training set data is divided into 130 groups of 50 data items, so K has a value of 50 and V has a value of 9 (consisting of 8 characteristic fields and 1 label result), and the characteristic fields are arranged from left to right as the columns of the array. Secondly, a weight proportion coefficient v is set for each characteristic field in the input-layer data, for example ((A1, A2, A3, A4, A5, A6, A7, A8, y0), ...), which is then input into the fully connected neural network layer of 256 neurons; after this first fully connected layer, the output value of each neuron is as follows:
R_i = Σ_{j=1..V} (v_j · A_j · w_{ij}) + b_i,  i = 1, …, n
(where w and b are the weight and bias parameters, and n is the number of neurons of the fully connected neural network layer); the output value after normalization through the BN layer is then:
R̂_i = (R_i − μ) / sqrt(σ² + ε)
The BN layer keeps the data values fed into the activation function within [0,1], which eliminates the vanishing-gradient situation during model training and reduces the influence of changes in the input data; finally, the output value activated through the LeakyReLU layer is:
R_i = R̂_i if R̂_i > 0, otherwise α · R̂_i
thirdly, inputting the upper layer output value R into a fully-connected neural network layer of 128 neurons, wherein the output values of the neurons after passing through the second fully-connected neural network layer are as follows:
R'_i = Σ_{j=1..256} (R_j · w_{ij}) + b_i,  i = 1, …, n
(n is the number of neurons of the fully connected neural network layer); normalization through the BN layer then gives the output value:
R̂'_i = (R'_i − μ) / sqrt(σ² + ε)
and the processed data is input into the LeakyReLU layer to give the activated output value:
R'_i = R̂'_i if R̂'_i > 0, otherwise α · R̂'_i
finally, 30% of the neurons were randomly discarded by the Dropout layer. It can be understood that the number of times of inputting the fully-connected neural network layer, the BN layer, and the leakyRelu layer is determined according to actual conditions.
Fourth, the upper layer output value R is input into the fully-connected neural network layer of 2 neurons (i.e., the output layer), and the output values are as follows:
O_i = Σ_j (R_j · w_{ij}) + b_i,  i = 1, 2
(n is the number of neurons of the fully connected neural network layer), and the final value is activated and output through the Sigmoid layer:
ŷ_i = 1 / (1 + e^(−O_i))
fifthly, defining a loss function and accuracy, and adopting a binary cross entropy as the loss function:
Loss = −(1/n) · Σ_{i=1..n} [ y_i · log(ŷ_i) + (1 − y_i) · log(1 − ŷ_i) ]
The accuracy in each round of training is:
Acc = (1/n) · Σ_{i=1..n} 1(ŷ_i = y_i)
(where n = 50 is the number of data items in the group processed in the current round, and y_i is the labelled result).
The total number of training rounds and the learning rate are set, and the weight parameters in the neural network are updated in each training round according to the loss value and the accuracy; for example, in the invention each training pass requires 130 iterations with 50 data items input per iteration, the training is performed 50 times, and steps two to five are repeated continuously for each training pass until training is finished.
It can be understood that the fully connected network layer is used for perception and classification; its input parameters, output parameters, number of neurons, weight initialization function and bias initialization function need to be set. The perception capability and complexity of the model can be increased by increasing the number of hidden layers, i.e. the number of fully connected neural network layers. The BN layer, the LeakyReLU layer, the Dropout layer and the Sigmoid layer are all nested in the fully connected neural network layers, and the output layer can be regarded as a special fully connected neural network layer. The BN layer is used to eliminate gradient explosion and make the model converge quickly; the LeakyReLU and Sigmoid layers add nonlinear capacity to the (linear) fully connected layers, eliminate the vanishing-gradient problem and speed up gradient descent and back propagation; the Dropout layer is used to reduce overfitting and requires a deletion ratio to be set; the size of the output layer equals the number of service situations.
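A rough sketch of the DNN structure just described (the layer sizes and the 30% dropout follow the example above; expressing it in PyTorch and all remaining details are assumptions for illustration only):

```python
import torch
import torch.nn as nn

class DNNClassifier(nn.Module):
    """FC(256) -> BN -> LeakyReLU -> FC(128) -> BN -> LeakyReLU -> Dropout(0.3) -> FC(2) -> Sigmoid."""

    def __init__(self, num_fields: int = 8, dropout: float = 0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_fields, 256), nn.BatchNorm1d(256), nn.LeakyReLU(),
            nn.Linear(256, 128), nn.BatchNorm1d(128), nn.LeakyReLU(),
            nn.Dropout(dropout),                       # randomly discard 30% of the neurons
            nn.Linear(128, 2), nn.Sigmoid(),           # output layer over the two service situations
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Training sketch: binary cross-entropy as the loss, weights updated each round from loss and accuracy
# model, loss_fn = DNNClassifier(), nn.BCELoss()
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # learning rate is tuned, per the text
```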
Step S34: and acquiring first output data corresponding to the trained DNN network model, and acquiring a trained primitive Bayesian model based on the first output data and the original primitive Bayesian model.
In this embodiment, after the trained DNN network model is obtained based on the target data set and the original DNN network model, first output data corresponding to the trained DNN network model is acquired, and a trained naive Bayes model is acquired based on the first output data and the original naive Bayes model. Specifically, a prior probability is set, and the current posterior probability is determined based on the prior probability; the current posterior probability is updated in real time with the first output data to obtain a target posterior probability; and the trained naive Bayes model is acquired based on the target posterior probability and the original naive Bayes model.
It is understood that a prior probability is first set, the posterior probability is updated in real time based on each round of DNN training results (i.e., the first output data), and a probability model (i.e., the trained naive Bayes model) for assessing the accuracy of the DNN model is established based on the posterior probability. The prior probabilities are the probability of each service situation and the probability of each result output by the DNN model; the prior probability of a service situation can be customized based on how frequently that situation occurs, for example the prior probability of situation A is set to 0.65 and that of situation B to 0.35, while the prior probabilities of each DNN output result are both 0.5. The posterior probability is the probability that the DNN model outputs the correct result given the known service situation; because the DNN model produces different results in each training round and is gradually updated toward its optimum, a posterior probability updated in real time fits reality better.
The naive Bayes model is a probability model; it mainly predicts the probability that the preliminary discrimination result of the DNN model is accurate and serves as a reference for the subsequent CNN model decision. Its output is as follows:
P(C | D) = P(D | C) · P(C) / P(D)
where P(C) is the prior probability of the service situation, P(D) is the prior probability of the DNN model output result, and P(D | C) is the posterior probability of the DNN model output under the known service situation, i.e. the probability that the DNN model outputs the correct result. As the DNN model is iteratively updated toward its optimum, the value of P(D | C) approaches 1, making the verification probability given by the naive Bayes model increasingly accurate.
Step S35: and acquiring second output data corresponding to the post-training primitive Bayesian model, and acquiring a post-training CNN model based on the original CNN model, the target data set, the first output data and the second output data.
In this embodiment, after the first output data corresponding to the trained DNN network model is obtained and the trained naive Bayes model is obtained based on the first output data and the original naive Bayes model, second output data corresponding to the trained naive Bayes model is obtained, and a trained CNN network model for finally identifying the service problem is established based on the original CNN network model, the target data set, the first output data and the second output data.
It can be understood that the network structure of the CNN network model is designed and developed based on the convolutional neural network in deep learning; its main structure comprises an input layer, a convolutional layer, a flatten layer and an output layer. As shown in fig. 6, the target data set, the DNN network model output data y1 and the naive Bayes model output data y2 are input into the CNN network model for training, and the CNN model serves as the main decision model that outputs the final discrimination result. The model training process is as follows: first, the input layer parameters are two-dimensional data, and the dimension of the input data in the input layer is [K, V], where K is the number of data items and V is the number of data columns (V consists of the characteristic fields, the DNN model output result and the naive Bayes model output result). For example, K has a value of 50 and V has a value of 11, consisting of 8 characteristic fields, 1 label result, the DNN model output result and the naive Bayes model output result.
Secondly, the input-layer data ((A1, A2, A3, A4, A5, A6, A7, A8, y0, y1, y2), ...) is input into a convolutional neural network layer with a convolution kernel size of 1 and 16 output channels. The convolutional layer, composed of the convolutional neural network layer and a LeakyReLU layer, is mainly used to extract data characteristic values; its input parameters, convolution kernel size and output parameters need to be set, and its output is as follows:
C_{k,j} = w_k · x_j + b_k,  k = 1, …, 16
(a convolution with kernel size 1 applied across the input columns x_j); finally, the output values are activated by the LeakyReLU layer:
R_{k,j} = C_{k,j} if C_{k,j} > 0, otherwise α · C_{k,j}
thirdly, inputting an upper-layer output value R into a flat layer, wherein the flat layer mainly flattens multi-dimensional data output by the convolutional layer into one-dimensional data so as to be input into a full-connection layer for perception classification, and converts sixteen-dimensional data into one-dimensional data for output, and the output value is as follows:
Z = Flatten(R) = (R_{1,1}, R_{1,2}, …, R_{16,V})
fourthly, the output layer mainly comprises a fully-connected neural network layer and a Sigmoid layer of two neurons, and finally, a prediction result of the service condition is output. Inputting an upper-layer output value Z into a fully-connected neural network layer of 2 neurons, wherein the output value is as follows:
O_i = Σ_j (Z_j · w_{ij}) + b_i,  i = 1, 2
and then the final value is activated and output through the Sigmoid layer:
ŷ_i = 1 / (1 + e^(−O_i))
fifthly, defining a loss function and accuracy, and adopting a binary cross entropy as the loss function:
Loss = −(1/n) · Σ_{i=1..n} [ y_i · log(ŷ_i) + (1 − y_i) · log(1 − ŷ_i) ]
the accuracy in each round of training is as follows:
Acc = (1/n) · Σ_{i=1..n} 1(ŷ_i = y_i)
because the DNN model sets the total times of training, the training of the CNN model does not need to be repeated, only the learning rate needs to be set, the weight parameters in the neural network are updated through the loss value and the accuracy rate during each training, and the steps from the second step to the fifth step are repeated continuously until the training is finished.
Step S36: and constructing the target network model based on the trained DNN network model, the trained naive Bayes model and the trained CNN network model so as to predict the received service table to be analyzed by using the target network model and perform preset script replacement operation according to a prediction result.
In this embodiment, the target network model is constructed based on the trained DNN network model, the trained naive Bayes model and the trained CNN network model, so that the received service table to be analyzed is predicted with the target network model and a preset script replacement operation is performed according to the prediction result. Specifically, the target network model is constructed based on the target data set, the original DNN network model, the original naive Bayes model and the original CNN network model, so that the trained DNN network model, the trained naive Bayes model and the trained CNN network model predict the received service table to be analyzed, a corresponding target script is automatically acquired according to the prediction result, and the current script is replaced with the target script through a preset script replacement method. The data of the test set is input into the trained target network model to predict the service problem, the script content is determined based on the prediction result of the model, and the fixed script content is automatically replaced according to the field values of the service table through preset script replacement code, so that the service operation and maintenance script is provided automatically.
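As a final illustrative sketch (the script templates, placeholder syntax and function names are assumptions; the patent only specifies that the fixed script content is replaced according to the service table's field values and the model's prediction):

```python
# Hypothetical operation and maintenance script templates keyed by the predicted service situation
SCRIPT_TEMPLATES = {
    "A": "UPDATE biz_order SET status = 'FIXED' WHERE order_id = '{order_id}';",
    "B": "DELETE FROM biz_order_dup WHERE order_id = '{order_id}';",
}

def issue_ops_script(model, row):
    """Predict the service situation for one business-table row and fill in the fixed script."""
    situation = model.predict_one(row)                   # "A" or "B" from the trained target network model
    return SCRIPT_TEMPLATES[situation].format(**row)     # replace placeholders with the field values
```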
According to the method and the device, only the data of the business table related to the user needs to be input, and the fixed operation and maintenance script is replaced once the business problem has been predicted by the target model. In this way, the business problem does not need to be judged manually and the operation and maintenance script does not need to be replaced manually; the large amount of time otherwise spent verifying the replacement script when issuing it manually is avoided, manual time investment is reduced, the speed of issuing operation and maintenance scripts is increased, and labor cost is saved. During training, the training set data is used to train the model parameters, while the verification set data feeds back how well the current model parameters predict real data, so that the model weights move in a better, more suitable direction. The influence of the input data on the model is controlled by setting the weight ratio of the characteristic fields; a BN algorithm is added to the model structure, a mixture of activation functions is used (different activation functions for different neural network layers), and binary cross entropy is combined with them, which shortens the model training time and reduces overfitting. The DNN model preliminarily predicts the service situation, the naive Bayes model performs probability analysis on the DNN result, and finally the CNN model makes the decision from the input data, the DNN output and the naive Bayes output and analyzes and outputs the final prediction result; adding the probability model and the decision model makes the final output fit better. In addition, the scheme can achieve corresponding effects by adding or removing hidden layers or algorithms, and the overall deep learning model framework is not limited to a DNN model, a naive Bayes model and a CNN model.
For the specific content of the above steps S31 and S32, reference may be made to the corresponding content disclosed in the foregoing embodiments, and details are not repeated here.
Therefore, in the embodiment of the application, the target table data is obtained, and the corresponding data set is obtained based on the target table data; all the data sets are integrated with a preset integration method to obtain a target data set; a trained DNN network model is acquired based on the target data set and the original DNN network model; first output data corresponding to the trained DNN network model is acquired, and a trained naive Bayes model is acquired based on the first output data and the original naive Bayes model; second output data corresponding to the trained naive Bayes model is acquired, and a trained CNN network model is acquired based on the original CNN network model, the target data set, the first output data and the second output data; and the target network model is constructed based on the trained DNN network model, the trained naive Bayes model and the trained CNN network model, so that the received service table to be analyzed is predicted with the target network model and a preset script replacement operation is performed according to the prediction result. Automatic replacement of the script is thereby realized; combining several models improves fit and accuracy, optimizing the algorithms inside the model shortens training time, operation and maintenance efficiency is improved, labor cost is saved, and the risk of human error is reduced.
Referring to fig. 7, an embodiment of the present application further discloses a model designing apparatus, which includes:
a first data set obtaining module 11, configured to obtain target table data, and obtain a corresponding data set based on the target table data;
a second data set obtaining module 12, configured to integrate all the data sets by using a preset integration method to obtain a target data set;
and the model building module 13 is configured to build a target network model based on the target data set, the original DNN network model, the original naive bayes model and the original CNN network model, so as to predict the received service table to be analyzed by using the target network model, and perform a preset script replacement operation according to a prediction result.
As can be seen, the present application includes: acquiring target table data and acquiring a corresponding data set based on the target table data; integrating all the data sets by using a preset integration method to obtain a target data set; and constructing a target network model based on the target data set, the original DNN network model, the original naive Bayes model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model and perform a preset script replacement operation according to the prediction result. In this way, a deep-learning target network model is constructed from the DNN model, the naive Bayes model and the CNN network model: the DNN model preliminarily predicts the service situation, the naive Bayes model performs probability analysis on the DNN result, and the CNN model makes the final decision from the target data set, the DNN output and the naive Bayes output, analyzing and outputting the final prediction result. Scripts are thus replaced automatically according to the prediction result; combining several models improves fit and accuracy, optimizing the algorithms inside the model shortens training time, operation and maintenance efficiency is improved, labor cost is saved, and the risk of human error is reduced.
In some specific embodiments, the first data set obtaining module 11 specifically includes:
the target table data acquisition unit is used for acquiring target table data corresponding to the target service table;
and the data set acquisition unit is used for generating data based on the characteristic field data of the target table data, so as to obtain the data set for the scenario to which the characteristic field data belongs.
In some specific embodiments, the second data set obtaining module 12 specifically includes:
a current array obtaining unit, configured to extract the feature field data of all the data sets to obtain a current array;
the normalization unit is used for executing preset normalization processing operation on the current array to obtain a normalization array;
the tag printing unit is used for executing preset tag printing operation on the data in the normalized array to obtain a target array;
a target data set acquisition unit configured to acquire the target data set based on the target array; wherein the target data set comprises training data, validation data, and test data.
In some embodiments, the model building module 13 specifically includes:
a DNN model parameter determination unit for determining DNN model parameters based on the training data and the validation data in the target data set;
a current DNN network model generating unit, configured to generate a current DNN network model based on the DNN model parameters and the original DNN network model;
a final parameter obtaining unit, configured to adjust the DNN model parameter in the current DNN network model through a first preset loss function and a first preset accuracy calculation formula, so as to obtain a DNN model final parameter meeting a current requirement;
a trained DNN network model obtaining unit, configured to obtain the trained DNN network model based on the final DNN model parameter and the current DNN network model;
a first output data acquisition unit, configured to acquire first output data corresponding to the trained DNN network model;
a prior probability setting unit for setting a prior probability;
a current posterior probability determining unit for determining a current posterior probability based on the prior probability;
a target posterior probability obtaining unit, configured to update the current posterior probability with the first output data to obtain a target posterior probability;
a post-training naive Bayes model obtaining unit for obtaining the post-training naive Bayes model based on the target posterior probability and the original naive Bayes model;
the second output data acquisition unit is used for acquiring second output data corresponding to the post-training naive Bayes model;
a current output value obtaining unit, configured to input the target data set, the first output data, and the second output data into the original CNN network model to execute a preset training mode, so as to obtain a current output value;
a target CNN parameter obtaining unit, configured to obtain a second preset loss function and a second preset accuracy calculation formula, so as to adjust a preset CNN parameter in the original CNN network model based on the current output value, the second preset loss function, and the second preset accuracy calculation formula, so as to obtain a target CNN parameter;
a post-training CNN network model obtaining unit, configured to obtain the post-training CNN network model based on the target CNN parameter and the original CNN network model;
and the script replacing unit is used for constructing the target network model based on the target data set, the trained DNN network model, the trained naive Bayes model and the trained CNN network model so as to predict the received service table to be analyzed, automatically acquire a corresponding target script according to the prediction result, and replace the current script with the target script through a preset script replacement method.
Further, the embodiment of the application also provides electronic equipment. FIG. 8 is a block diagram illustrating an electronic device 20 according to an exemplary embodiment, and the contents of the figure should not be construed as limiting the scope of the application in any way.
Fig. 8 is a schematic structural diagram of an electronic device 20 according to an embodiment of the present disclosure. The electronic device 20 may specifically include: at least one processor 21, at least one memory 22, a power supply 23, a communication interface 24, an input output interface 25, and a communication bus 26. Wherein, the memory 22 is used for storing a computer program, and the computer program is loaded and executed by the processor 21 to implement the relevant steps in the model design method disclosed in any of the foregoing embodiments. In addition, the electronic device 20 in the present embodiment may be specifically an electronic computer.
In this embodiment, the power supply 23 is configured to provide a working voltage for each hardware device on the electronic device 20; the communication interface 24 can create a data transmission channel between the electronic device 20 and an external device, and a communication protocol followed by the communication interface is any communication protocol applicable to the technical solution of the present application, and is not specifically limited herein; the input/output interface 25 is configured to obtain external input data or output data to the outside, and a specific interface type thereof may be selected according to specific application requirements, which is not specifically limited herein.
In addition, the storage 22 is used as a carrier for resource storage, and may be a read-only memory, a random access memory, a magnetic disk or an optical disk, etc., and the resources stored thereon may include an operating system 221, a computer program 222, etc., and the storage manner may be a transient storage or a permanent storage.
The operating system 221 is used for managing and controlling each hardware device on the electronic device 20 and the computer program 222, and may be Windows Server, Netware, Unix, Linux, or the like. In addition to the computer program that is executed by the electronic device 20 to perform the model design method disclosed in any of the foregoing embodiments, the computer programs 222 may further include computer programs that can be used to perform other specific tasks.
Further, an embodiment of the present application further discloses a storage medium, where a computer program is stored, and when the computer program is loaded and executed by a processor, the steps of the model design method disclosed in any of the foregoing embodiments are implemented.
The embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to each other. Since the device disclosed in an embodiment corresponds to the method disclosed in an embodiment, its description is relatively simple, and for relevant points reference may be made to the description of the method part.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an ..." does not exclude the presence of additional like elements in a process, method, article, or apparatus that comprises the element.
The model design method, apparatus, device and storage medium provided by the present invention have been described in detail above. Specific examples are applied herein to explain the principles and embodiments of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea; meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A method of model design, comprising:
acquiring target table data and acquiring a corresponding data set based on the target table data;
integrating all the data sets by using a preset integration method to obtain a target data set;
and constructing a target network model based on the target data set, the original DNN network model, the original naive Bayes model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model and perform preset script replacement operation according to a prediction result.
2. The model design method of claim 1, wherein said obtaining target table data and obtaining a corresponding data set based on said target table data comprises:
and acquiring target table data corresponding to a target service table, and counting based on the characteristic field data of the target table data to obtain a data set under the belonged condition corresponding to the characteristic field data.
3. The model design method of claim 2, wherein the integrating all the data sets by using a preset integration method to obtain a target data set comprises:
extracting the characteristic field data of all the data sets to obtain a current array;
executing preset normalization processing operation on the current array to obtain a normalized array;
performing preset labeling operation on data in the normalized array to obtain a target array, and acquiring the target data set based on the target array; wherein the target data set comprises training data, validation data, and test data.
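Purely as an illustration of how the integration step of claim 3 might be realised, the sketch below applies a min-max normalization to the extracted feature array, pairs it with labels, and splits the result into training, validation and test data. The min-max scaling, the 70/15/15 split and the function name build_target_dataset are assumptions, not details given by the claim.

```python
# Illustrative sketch for claim 3; the min-max normalization and the
# 70/15/15 split are assumptions not specified by the claim.
import numpy as np

def build_target_dataset(current_array: np.ndarray, labels: np.ndarray,
                         seed: int = 0):
    """Normalize the feature array, pair it with labels, and split it into
    training, validation and test data."""
    # Preset normalization operation: scale each feature column into [0, 1].
    col_min = current_array.min(axis=0)
    col_range = current_array.max(axis=0) - col_min
    col_range = np.where(col_range == 0, 1.0, col_range)  # avoid division by zero
    normalized = (current_array - col_min) / col_range

    # Preset labeling operation: shuffle features and labels together so
    # each row of the target array keeps its class label.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(normalized))
    normalized, labels = normalized[order], labels[order]

    n_train = int(0.7 * len(normalized))
    n_val = int(0.15 * len(normalized))
    train = (normalized[:n_train], labels[:n_train])
    val = (normalized[n_train:n_train + n_val], labels[n_train:n_train + n_val])
    test = (normalized[n_train + n_val:], labels[n_train + n_val:])
    return train, val, test
```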
4. The model design method of claim 3, wherein the constructing a target network model based on the target dataset, an original DNN network model, an original naive Bayes model, and an original CNN network model comprises:
acquiring a trained DNN network model based on the target data set and the original DNN network model;
acquiring first output data corresponding to the trained DNN network model, and acquiring a trained naive Bayes model based on the first output data and the original naive Bayes model;
acquiring second output data corresponding to the post-training naive Bayes model, and acquiring a post-training CNN network model based on an original CNN network model, the target data set, the first output data and the second output data;
and constructing the target network model based on the trained DNN network model, the trained naive Bayes model and the trained CNN network model.
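A minimal sketch of how the target network model of claim 4 could be composed from the three trained stages is shown below; the wrapper class, its interface and the way the naive Bayes output is broadcast to each sample are assumptions made for illustration.

```python
# Hypothetical composition of the target network model of claim 4; the
# wrapper class and its interface are assumptions made for illustration.
import numpy as np

class TargetNetworkModel:
    """Chain the trained DNN, naive Bayes posterior and CNN into one model."""

    def __init__(self, dnn_model, nb_posterior: np.ndarray, cnn_model):
        self.dnn_model = dnn_model          # trained DNN network model
        self.nb_posterior = nb_posterior    # target posterior probabilities
        self.cnn_model = cnn_model          # trained CNN network model

    def predict(self, features: np.ndarray) -> np.ndarray:
        # First output data produced by the trained DNN network model.
        first_output = self.dnn_model.predict(features, verbose=0)
        # Second output data from the naive Bayes stage, broadcast to
        # every sample in the batch.
        second_output = np.tile(self.nb_posterior, (len(features), 1))
        # The trained CNN network model decides on the concatenated inputs.
        combined = np.concatenate([features, first_output, second_output],
                                  axis=1)[..., np.newaxis]
        return self.cnn_model.predict(combined, verbose=0)
```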
5. The model design method of claim 4, wherein said obtaining a trained DNN network model based on the target dataset and the original DNN network model comprises:
determining DNN model parameters based on the training data and the validation data in the target dataset;
generating a current DNN network model based on the DNN model parameters and the original DNN network model;
adjusting the DNN model parameters in the current DNN network model through a first preset loss function and a first preset accuracy calculation formula to obtain final DNN model parameters that meet the current requirement;
and acquiring the trained DNN network model based on the final DNN model parameters and the current DNN network model.
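One possible realisation of claim 5, expressed with tf.keras as an assumed framework, is sketched below: a small fully connected network is trained, and its parameters are accepted only if a validation accuracy requirement is met. The layer widths, the cross-entropy loss used as the first preset loss function and the 0.9 accuracy threshold are all assumptions.

```python
# Hypothetical realisation of claim 5 with tf.keras; the layer widths,
# cross-entropy loss and 0.9 accuracy requirement are assumptions.
import tensorflow as tf

def train_dnn(train, val, num_classes: int, required_accuracy: float = 0.9):
    x_train, y_train = train
    x_val, y_val = val

    # Current DNN network model generated from assumed DNN model parameters.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(x_train.shape[1],)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    # First preset loss function and first preset accuracy calculation.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, validation_data=(x_val, y_val),
              epochs=50, batch_size=32, verbose=0)

    _, val_accuracy = model.evaluate(x_val, y_val, verbose=0)
    if val_accuracy < required_accuracy:
        raise RuntimeError("DNN parameters do not yet meet the current "
                           "requirement; adjust hyperparameters and retrain.")
    return model  # trained DNN network model with final DNN model parameters
```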
6. The model design method of claim 4, wherein said obtaining a trained naive Bayes model based on said first output data and said original naive Bayes model, comprises:
setting prior probability, and determining the current posterior probability based on the prior probability;
updating the current posterior probability by utilizing the first output data to obtain a target posterior probability;
and acquiring the post-training naive Bayes model based on the target posterior probability and the original naive Bayes model.
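Claim 6 can be read as a Bayesian update in which the DNN output serves as the likelihood term; the sketch below starts from a uniform prior and updates the posterior sample by sample. Both the uniform prior and this particular update rule are assumptions introduced for illustration.

```python
# Illustrative Bayesian update for claim 6; the uniform prior and the use
# of the DNN output as a likelihood term are assumptions.
from typing import Optional

import numpy as np

def update_posterior(first_output_data: np.ndarray,
                     prior: Optional[np.ndarray] = None) -> np.ndarray:
    """Update the class posterior with the DNN output.

    first_output_data: (n_samples, n_classes) class probabilities produced
    by the trained DNN network model.
    """
    n_classes = first_output_data.shape[1]
    if prior is None:
        # Set prior probability: uniform over all classes.
        prior = np.full(n_classes, 1.0 / n_classes)

    posterior = prior.copy()
    for likelihood in first_output_data:
        unnormalized = posterior * likelihood          # Bayes rule numerator
        posterior = unnormalized / unnormalized.sum()  # target posterior
    return posterior
```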
7. The model design method of any one of claims 4 to 6, wherein the obtaining a trained CNN network model based on the original CNN network model, the target data set, the first output data and the second output data comprises:
inputting the target data set, the first output data and the second output data into the original CNN network model to execute a preset training mode so as to obtain a current output value;
acquiring a second preset loss function and a second preset accuracy calculation formula so as to adjust a preset CNN parameter in the original CNN network model based on the current output value, the second preset loss function and the second preset accuracy calculation formula to obtain a target CNN parameter;
acquiring the trained CNN network model based on the target CNN parameter and the original CNN network model;
correspondingly, the constructing a target network model based on the target data set, the original DNN network model, the original naive bayes model and the original CNN network model so as to predict the received service table to be analyzed by using the target network model and perform a preset script replacement operation according to a prediction result, includes:
and constructing the target network model based on the target data set, the trained DNN network model, the trained naive Bayes model and the trained CNN network model, so as to predict the received service table to be analyzed, automatically acquire a corresponding target script according to the prediction result, and replace the current script with the target script by a preset script replacement method.
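The sketch below shows one hypothetical shape for the combining stage of claim 7: the target data set features, the first output data from the DNN and the second output data from the naive Bayes stage are concatenated into a single vector and fed to a small 1-D convolutional model. The architecture, the cross-entropy loss used as the second preset loss function and all hyperparameters are assumptions, again expressed with tf.keras.

```python
# Hypothetical combining stage for claim 7; the 1-D convolution over the
# concatenated vector and all hyperparameters are assumptions.
import numpy as np
import tensorflow as tf

def train_cnn(features, first_output_data, second_output_data, labels,
              num_classes: int):
    # Concatenate target data set features, first output data (DNN) and
    # second output data (naive Bayes posterior, broadcast per sample).
    nb_posterior = np.tile(second_output_data, (len(features), 1))
    combined = np.concatenate([features, first_output_data, nb_posterior],
                              axis=1)[..., np.newaxis]

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=combined.shape[1:]),
        tf.keras.layers.Conv1D(16, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    # Second preset loss function and second preset accuracy calculation.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(combined, labels, epochs=30, batch_size=32, verbose=0)
    return model  # trained CNN network model with the target CNN parameters
```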
8. A model design apparatus, comprising:
the first data set acquisition module is used for acquiring target table data and acquiring a corresponding data set based on the target table data;
the second data set acquisition module is used for integrating all the data sets by using a preset integration method to obtain a target data set;
and the model construction module is used for constructing a target network model based on the target data set, the original DNN network model, the original naive Bayes model and the original CNN network model, so as to predict the received service table to be analyzed by using the target network model and perform a preset script replacement operation according to the prediction result.
9. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to carry out the steps of the model design method according to any one of claims 1 to 7.
10. A computer-readable storage medium for storing a computer program; wherein the computer program, when executed by a processor, implements the model design method of any of claims 1 to 7.
CN202310000603.0A 2023-01-03 2023-01-03 Model design method, device, equipment and storage medium Active CN115713099B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310000603.0A CN115713099B (en) 2023-01-03 2023-01-03 Model design method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115713099A (en) 2023-02-24
CN115713099B CN115713099B (en) 2023-05-09

Family

ID=85236146

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310000603.0A Active CN115713099B (en) 2023-01-03 2023-01-03 Model design method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115713099B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021007812A1 (en) * 2019-07-17 2021-01-21 深圳大学 Deep neural network hyperparameter optimization method, electronic device and storage medium
US20210303986A1 (en) * 2020-03-26 2021-09-30 Fujitsu Limited Validation of deep neural network (dnn) prediction based on pre-trained classifier
CN112017061A (en) * 2020-07-15 2020-12-01 北京淇瑀信息科技有限公司 Financial risk prediction method and device based on Bayesian deep learning and electronic equipment
CN114862763A (en) * 2022-04-13 2022-08-05 华南理工大学 Gastric cancer pathological section image segmentation prediction method based on EfficientNet
CN114943372A (en) * 2022-05-09 2022-08-26 国网浙江省电力有限公司宁波供电公司 Method and device for predicting life of proton exchange membrane based on Bayesian recurrent neural network
CN115169490A (en) * 2022-07-25 2022-10-11 济南浪潮数据技术有限公司 Log classification method, device and equipment and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHIWEI LAN et al.: "Scaling Up Bayesian Uncertainty Quantification for Inverse Problems using Deep Neural Networks" *
JIANG Junzhao; CHENG Lianglun; LI Quanjie: "Multi-label classification algorithm for convolutional neural networks based on label correlation" *

Also Published As

Publication number Publication date
CN115713099B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
US11487941B2 (en) Techniques for determining categorized text
Fong et al. Accelerated PSO swarm search feature selection for data stream mining big data
CN109345399B (en) Method, device, computer equipment and storage medium for evaluating risk of claim settlement
CN107861951A (en) Session subject identifying method in intelligent customer service
CN111639516B (en) Analysis platform based on machine learning
CN111897528B (en) Low-code platform for enterprise online education
CN111797320B (en) Data processing method, device, equipment and storage medium
CN110175235A (en) Intelligence commodity tax sorting code number method and system neural network based
US20200073939A1 (en) Artificial Intelligence Process Automation for Enterprise Business Communication
Fisch et al. Conformal prediction sets with limited false positives
CN112836509A (en) Expert system knowledge base construction method and system
EP4222635A1 (en) Lifecycle management for customized natural language processing
CN113159213A (en) Service distribution method, device and equipment
CN111930944A (en) File label classification method and device
Bharadhwaj Layer-wise relevance propagation for explainable recommendations
CN115713099B (en) Model design method, device, equipment and storage medium
CN115688101A (en) Deep learning-based file classification method and device
AU2022204724B1 (en) Supervised machine learning method for matching unsupervised data
CN116842936A (en) Keyword recognition method, keyword recognition device, electronic equipment and computer readable storage medium
CN115129890A (en) Feedback data map generation method and generation device, question answering device and refrigerator
US20210312319A1 (en) Artificial Intelligence Process Automation for Enterprise Business Communication
KR101856115B1 (en) System and Method for providing digital information
CN113642727B (en) Training method of neural network model and processing method and device of multimedia information
US11836176B2 (en) System and method for automatic profile segmentation using small text variations
US11727215B2 (en) Searchable data structure for electronic documents

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant