CN115577107A - Student portrait generation method based on classification neural network - Google Patents


Info

Publication number
CN115577107A
CN115577107A (application CN202211266946.3A)
Authority
CN
China
Prior art keywords
student
information
training
data
neural network
Prior art date
Legal status
Pending
Application number
CN202211266946.3A
Other languages
Chinese (zh)
Inventor
夏江华
诸葛晴凤
王寒
沙行勉
Current Assignee
East China Normal University
Original Assignee
East China Normal University
Priority date
Filing date
Publication date
Application filed by East China Normal University
Priority to CN202211266946.3A
Publication of CN115577107A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/35 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology


Abstract

The invention discloses a student portrait generation method based on a classification neural network. A distributed training architecture is adopted to classify student data and topic data; different types of models are trained and used for prediction in parallel, and the training data are processed centrally. The method specifically comprises the following steps: 1) a controllable clustering algorithm performs unsupervised learning on the generated embedded vectors, which serve as training data; 2) each device builds a model for a different category of students and topics and carries out distributed training; 3) the knowledge state of each student is finally predicted from the received data and visualized to construct the student portrait. Compared with the prior art, the method automatically adjusts the data distribution so that the models can be deployed in a distributed manner simply and quickly, which improves model prediction accuracy, increases the availability of portrait generation in an online learning system, and effectively alleviates the loss of accuracy that dispersed data may cause.

Description

Student portrait generation method based on classification neural network
Technical Field
The invention relates to the technical field of neural networks and machine learning, in particular to a student portrait generation method based on a classification neural network.
Background
With the rapid growth of student learning records in online learning systems, student portrait generation technology based on neural networks has developed quickly. Such methods track each student's learning progress from historical learning data and then depict the student's level of knowledge mastery. For an online learning system, portrait generation must provide both high accuracy and high real-time performance to ensure a good user experience, so generating student portraits quickly and accurately remains a very challenging problem.
At present, knowledge tracking based on deep neural networks achieves better predictive performance than the traditional BKT method and captures the temporal and semantic features of exercise texts more deeply, so deep-neural-network-based techniques are widely studied and discussed. Most work improves the accuracy of student portraits by improving the accuracy of predicting whether a student answers correctly, for example by incorporating features of educational data: Cheng et al. introduced slip and guess factors into DKT to better simulate students' real problem-solving behavior, and Nagatani et al. introduced time intervals to model students' forgetting and studied their influence on prediction accuracy. With attention mechanisms: Su et al. proposed using cosine similarity to compute exercise-topic similarity when the model predicts its output, effectively capturing long-term dependencies in the exercise sequence. For interpretable knowledge tracking: EKT expands each student's knowledge-state vector into a knowledge-state matrix updated over time, in which each vector represents the student's mastery of one concept, effectively solving the problem that traditional DKT cannot determine which concepts a student is adept at or unfamiliar with. However, current work still has problems with accuracy and training efficiency. 1) Most studies neglect the negative impact of training-data dispersion on prediction results, i.e., they train uniformly on all students' answers to all questions, which can reduce prediction accuracy. 2) Meanwhile, the timeliness of the deep learning framework keeps decreasing as the data volume grows sharply.
In a real online learning scenario, student answer data grows sharply every day, and the model must be updated in time to keep the portrait accurate. Current neural networks are optimized by increasing the dimensionality of the training data, the model depth, and so on; although this improves prediction accuracy, it greatly increases training latency and reduces applicability in real scenarios.
Nowadays, cloud servers have become one of the mainstream training platforms. In most deep-learning-based knowledge tracking techniques, a single processor trains the model on the complete data set. When processing huge amounts of cloud data, network transmission delay and training delay become the major bottlenecks. For online learning scenarios with high real-time requirements, high latency greatly slows learners' progress and hurts the user experience. To accelerate model training, distributed model training techniques are widely studied. However, designing a distributed architecture requires analyzing the data distribution, and manually tuning a network structure without knowing that distribution is very inefficient for developers seeking a robust network.
Disclosure of Invention
The invention aims to provide a student portrait generation method based on a classification neural network that addresses the defects of the prior art. It classifies student data and topic data on top of a distributed training architecture, trains and predicts with different models in parallel, and processes the training data centrally. This improves model prediction accuracy, lets the models be deployed in a distributed manner simply and quickly by automatically adjusting the data distribution, accelerates model training through distribution, increases the availability of portrait generation in an online learning system, and effectively alleviates the loss of accuracy that dispersed data may cause.
The purpose of the invention is realized as follows: student data and topic data are classified and, on top of a distributed training architecture, different types of models are trained and used for prediction in parallel so as to improve model prediction accuracy. The specific flow of efficient student portrait generation based on the classification neural network is as follows:
S100: the K devices participating in the knowledge tracking training send their device identification numbers and the distribution information of their local data to the central server.
S200: a mathematical model is used to obtain expression functions for the students and topics and to generate their embedded expression vectors.
S300: the central server receives the information sent by the devices, computes the data distribution for each proxy server, and classifies the student and topic expression vectors generated in S200 with a controllable classification algorithm, so that the knowledge points of the merged data sets within each equivalence class are distributed as concentratedly as possible.
S400: the central server remaps the classified students and subjects and the weight parameter W of the initialized neural network model 0 And b 0 And sent to the corresponding proxy server together.
S500: parallel training models among the proxy servers, internal equipment of the proxy servers simultaneously utilize a data set training model sent by the central server, and the proxy servers can obtain a weight parameter W according to the training of the group * And b * Sending to the central server, the central server will send the weight parameter W according to the proxy server * And b * Restoring a knowledge tracking model and predicting the knowledge state of the student; at the same time, all proxy servers will use their respective updated weight parameters W * And b * And continuing training.
S600: the central server updates the knowledge state of the student through a joint average algorithm, and the updated knowledge state is visualized to construct a user portrait of the student.
S700: and repeating the steps S500-S600 until the model convergence finishes training to obtain the final user portrait.
In step S100, the number K of participating devices sent to the central server is a critical quantity: its value is not arbitrary, and it is closely tied to the accuracy of knowledge tracking prediction and to the number of user portraits. Assuming the number of user portraits is M and the number of topic features is N, then K = M × N.
In step S200, the mathematical model is expressed by the following formulas (a) and (b):
X = [T, K, Q] (a);
Q = DNN(L, N, …, Time) (b);
where X is the input-layer vector, formed by splicing the text information T, the knowledge point vector K, and the compressed secondary-information vector Q; the compressed secondary information is obtained by jointly training secondary information such as the difficulty, the number of hints, and the training time.
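The splicing in formula (a) amounts to vector concatenation. A minimal sketch follows, with all dimensions and values chosen arbitrarily for illustration (a real model would use learned embeddings, and the compressed part would come from the DNN of formula (b)):

```python
# Sketch of the input-layer splicing of formula (a): concatenate the text
# vector T, the knowledge-point vector K, and the compressed secondary-
# information vector Q into one input vector X. Dimensions are illustrative.

def splice_input(text_vec, knowledge_vec, secondary_vec):
    """Concatenate the three parts into one input-layer vector X."""
    return list(text_vec) + list(knowledge_vec) + list(secondary_vec)

T = [0.1, 0.2]        # text embedding (assumed 2-dimensional for the sketch)
K = [1.0, 0.0, 0.0]   # one-hot knowledge-point vector
Q = [0.5]             # compressed secondary information (difficulty, hints, time)
X = splice_input(T, K, Q)  # 6-dimensional input-layer vector
```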
In step S300, the classification algorithm is K-means, a controllable clustering algorithm, calculated by the following formulas (c) and (d):
c^(i) := argmin_j ||x^(i) - μ_j||^2 (c);
μ_j := Σ_i 1{c^(i) = j} x^(i) / Σ_i 1{c^(i) = j} (d);
where c^(i) is the class to which student i belongs and μ_j is the centroid of class j.
In step S400, the remapping technique is calculated by the following formulas (e) and (f):
A = softmax(Q · K^T / √d_k) (e);
Attention(Q, K, V) = A · V (f);
where Q is the Query variable through which the information important to the query is found among all keys K; K is the address of the information, pointing to a Value; and V is the content of the information.
Formulas (e) and (f) are the core formulas of topic-information remapping and also the core formulas of the Bert model; remapping with the Bert model preserves the textual information of a topic in numerical form as far as possible.
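Formulas (e) and (f) correspond to standard scaled dot-product attention. The single-query, pure-Python sketch below is only illustrative (the real Bert model uses matrix-valued multi-head attention with learned projections), and all numeric values are hypothetical:

```python
import math

# Sketch of formulas (e)-(f) as standard scaled dot-product attention for a
# single query vector: weights = softmax(Q·K^T / sqrt(d_k)), output = weights·V.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Return softmax(query·K^T / sqrt(d_k)) · V for one query vector."""
    d_k = len(keys[0])
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k)
              for key in keys]
    weights = softmax(scores)                 # formula (e)
    d_v = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(d_v)]              # formula (f)

q = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0], [0.0]]
out = attention(q, keys, values)  # attends mostly to the first key/value
```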
In step S500, the neural network iteration and the updates of the weight parameters W* and b* are expressed by the following formulas (g) to (l):
f_t = σ(W_f · [h_{t-1}, x_t] + b_f) (g);
i_t = σ(W_i · [h_{t-1}, x_t] + b_i) (h);
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C) (i);
C_t = f_t * C_{t-1} + i_t * C̃_t (j);
o_t = σ(W_o · [h_{t-1}, x_t] + b_o) (k);
h_t = o_t * tanh(C_t) (l);
where σ is the sigmoid activation function; h_{t-1} and x_t denote the hidden layer and the input layer of the neural network, the next hidden layer being trained iteratively from the previous hidden-layer and input-layer information; f_t is the forget gate, a weight on C_{t-1} used in computing C_t; i_t is the input gate; C_t is the information of the current neuron; C̃_t is the cell-state update value; o_t is the output gate, which determines the hidden-layer information; h_t is the hidden layer; W_f is the weight vector of the forget-gate cell state; W_C is the weight vector for computing the current neuron information; W_i is the weight vector of the input cell state; W_o is the weight vector of the output cell state; and b_f, b_i, b_C, and b_o are the bias terms of the forget-gate, input, current-neuron, and output cell states, respectively.
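Formulas (g) to (l) describe one step of an LSTM cell. The scalar, single-unit sketch below mirrors them one-to-one; the product W · [h_{t-1}, x_t] is written out as Wh·h_prev + Wx·x for the scalar case, and every weight value is illustrative rather than taken from the patent:

```python
import math

# Scalar single-unit LSTM step implementing formulas (g)-(l).
# All parameter values below are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(h_prev, c_prev, x, p):
    f = sigmoid(p["Whf"] * h_prev + p["Wxf"] * x + p["bf"])          # (g) forget gate
    i = sigmoid(p["Whi"] * h_prev + p["Wxi"] * x + p["bi"])          # (h) input gate
    c_tilde = math.tanh(p["Whc"] * h_prev + p["Wxc"] * x + p["bc"])  # (i) candidate
    c = f * c_prev + i * c_tilde                                     # (j) cell state
    o = sigmoid(p["Who"] * h_prev + p["Wxo"] * x + p["bo"])          # (k) output gate
    h = o * math.tanh(c)                                             # (l) hidden state
    return h, c

params = {k: 1.0 for k in ("Whf", "Wxf", "Whi", "Wxi", "Whc", "Wxc", "Who", "Wxo")}
params.update({k: 0.0 for k in ("bf", "bi", "bc", "bo")})
h1, c1 = lstm_step(h_prev=0.0, c_prev=0.0, x=1.0, p=params)
```

Iterating `lstm_step` over a student's answer sequence produces the hidden states from which the knowledge state is predicted.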
In step S600, the joint average algorithm for the students' knowledge states is expressed by the following formula (m):
M = (1/N) Σ_{i=1}^{N} M_i (m);
where M_i is the knowledge state of the student under each batch of training on the N categories of topic sets.
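Formula (m) can be sketched with a plain element-wise mean. Since the original formula survives only as an image, an unweighted mean over the N category states is assumed here, and the state values are illustrative:

```python
# Hedged sketch of the joint-average step (m): the central server averages
# the per-category knowledge-state vectors M_i into one overall state.
# An unweighted element-wise mean is assumed for illustration.

def joint_average(states):
    """Average N knowledge-state vectors element-wise."""
    n = len(states)
    dims = len(states[0])
    return [sum(s[j] for s in states) / n for j in range(dims)]

# Three category-level states over 4 knowledge points (illustrative values):
M = [
    [0.9, 0.1, 0.5, 0.4],
    [0.6, 0.3, 0.5, 0.2],
    [0.3, 0.2, 0.5, 0.6],
]
overall = joint_average(M)  # e.g. first knowledge point -> (0.9+0.6+0.3)/3
```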
Compared with the prior art, the invention offers the following remarkable technical progress and beneficial effects:
1) Models are built category by category for the different classes of students and topics, which mitigates the loss of prediction precision caused by dispersed data distributions and thus improves prediction accuracy;
2) Distributed deployment is realized rapidly through data classification, which improves model training efficiency and therefore the usability of online portrait generation in an online learning system.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a diagram of a mathematical model;
FIG. 3 is a diagram illustrating the data classification result;
FIG. 4 is a diagram of a neural network architecture under a single training model;
FIG. 5 is a schematic diagram of a student user representation.
Detailed Description
A large amount of student answer data is collected from a remote online learning platform and stored on a central server; information such as the device numbers corresponding to the proxy servers is assumed known and transmittable to the central server. Taking university-level advanced mathematics answer data as an example (2500 students answering 200 questions that together cover 18 knowledge points), the invention is further explained below with reference to the drawings and a specific implementation.
Example 1
Referring to fig. 1, the present embodiment performs student portrayal by the following steps:
step S100: after the proxy server sends the corresponding device information, the central server selects the corresponding device according to the data information and the related requirements. Assuming that the plan divides the 2500 students into 3 classes, generating corresponding student figures; it is assumed that the classification of topic data into 3 classes best reflects the relationship between the data. At this time, K =3 × 3=9 proxy servers are selected to participate in the distributed training.
Step 200: when the corresponding proxy server is selected, the central server can vectorize students and subjects by using the mathematical model. Students can carry out vectorization according to the answer condition of each question; and vectorizing the question according to the text information, the average answering time, the average prompting times and other information of the question.
Referring to fig. 2, students and topics are vectorized by using a mathematical model, and topic information is retained by compressing secondary information and splicing important information, and meanwhile, the dimensionality of the topic is compressed to accelerate future model training. Each block represents one information dimension, and high-density information storage is realized by compressing various information dimensions.
Step 300: after the students and the subjects are vectorized, the subjects and the students are subjected to unsupervised learning by using a K-means clustering algorithm, and pseudo codes of the K-means clustering algorithm are detailed in the following table 1:
TABLE 1 pseudo code for K-means clustering algorithm
Figure BDA0003893744640000051
The algorithm is controllable in that the number of classes can be specified, which is one reason for choosing it: when generating student portraits, the number of portraits can then be controlled freely. The students are divided into 3 classes to generate three student portraits; the topics are likewise divided into 3 classes because, across many experiments on this data set, 3 classes best reflect the relationships in the data.
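Since Table 1 survives only as an image, the sketch below restates standard K-means in the shape of formulas (c) and (d): assign each point to its nearest centroid, then recompute each centroid as its cluster mean. The data points, `seed`, and iteration count are illustrative:

```python
import random

# Standard K-means in the shape of formulas (c)-(d); a re-sketch, since the
# original Table 1 pseudo code is only available as an image.

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)          # arbitrary initial centroids
    for _ in range(iters):
        # (c): assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[j])))
            clusters[j].append(p)
        # (d): recompute each centroid as the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centroids, clusters

# Two obvious clusters of 2-d points (illustrative stand-ins for the
# student/topic embedding vectors):
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
cents, cls = kmeans(pts, k=2)
```

With k set to 3, the same routine reproduces the controllable three-class split used for both students and topics in this embodiment.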
Referring to FIG. 3a, the topic classification results are visualized after applying the K-means clustering algorithm to the topics. Different symbols ("+", "o", etc.) denote the three equivalence classes produced by K-means, and the numbers are topic ids. The figure shows that, after the hidden text information is extracted, the K-means clustering algorithm classifies the topic vectors well.
Referring to fig. 3b, the student classification results are visualized after applying the K-means clustering algorithm to the students. Different symbols again denote the three equivalence classes, and the numbers are student ids. The figure shows that the K-means clustering algorithm also classifies the student vectors well.
Step 400: in order to prevent information leakage, the student id and the topic id are remapped before being sent to the proxy server, and the student id uses one-to-one mapping; the topic id is mapped using the Bert model.
Step 500: proxy server self-updating neural network weight parameter W * And b * The LSTM model is used for training, and has certain advantages in the aspect of sequence modeling, long-term memory function and solves the problems of gradient disappearance and gradient explosion in the long-sequence training process.
Referring to fig. 4, the figure shows the neural network framework and training details under a single training model. The neural architecture search consists of defining a search space, a search algorithm, and a model performance evaluation strategy. The search space is continuous and differentiable, so the weight parameters W and b can be searched jointly by gradient descent; the search algorithm adopts the long short-term memory model LSTM; for the evaluation strategy, the central server and each model integrated by the joint average algorithm serve as the evaluation objects of the network architecture.
Step 600: the proxy server sends the parameter training result to the central server in a timing mode, and after the central server receives the parameters, the knowledge tracking model is restored, and the knowledge state of the students is predicted. The answer data of each class of students is divided into three classes, and each class of students generates N classes of student portraits. The central server collects the N types of images by using a joint average algorithm, and finally generates the whole student image.
Referring to fig. 5, visualizing the knowledge-state vector yields the student's user portrait, an 18-sided radar map: the distance from the center along each axis represents the student's mastery of that knowledge point, and the closer to the center, the firmer the mastery.
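The 18-axis radar layout of fig. 5 can be sketched as follows. The mastery values are illustrative, and the radius convention (closer to the center means firmer mastery, per the description of the figure) is mapped as radius = 1 - mastery:

```python
import math

# Sketch of the 18-axis radar-map layout: each knowledge point gets an
# angle, and the mastery degree (0..1) sets the distance from the center.
# Closer to the center means firmer mastery, so radius = 1 - mastery.

def radar_vertices(mastery):
    """Return (x, y) vertices for a radar plot of mastery degrees."""
    n = len(mastery)
    verts = []
    for i, m in enumerate(mastery):
        r = 1.0 - m                        # firmer mastery -> closer to center
        theta = 2.0 * math.pi * i / n      # evenly spaced axes
        verts.append((r * math.cos(theta), r * math.sin(theta)))
    return verts

# 18 knowledge points, illustrative mastery values:
state = [0.9] + [0.5] * 16 + [0.4]
vertices = radar_vertices(state)
```

Connecting the vertices in order (and closing the polygon) produces the radar outline shown in the figure.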
Referring to FIG. 5a, user portrait N1 illustrates the knowledge-point mastery of the Nth class of students, trained by the neural network model on the first class of topic data. The figure shows mastery degrees of 0.9%, 1.1%, …, 45.7% for knowledge points 0-17.
Referring to FIG. 5b, user portrait N2 illustrates the knowledge-point mastery of the Nth class of students, trained on the second class of topic data. The figure shows mastery degrees of 99.9%, …, 41.7% for knowledge points 0-17.
Referring to FIG. 5c, user portrait N3 illustrates the knowledge-point mastery of the Nth class of students, trained on the third class of topic data. The figure shows mastery degrees of 100.0%, …, 34.8% for knowledge points 0-17.
Referring to FIG. 5d, the overall user portrait illustrates the overall knowledge-point mastery of the Nth class of students; its knowledge state is computed from the first three knowledge states by the joint average algorithm. The figure shows mastery degrees of 66.6%, …, 40.0% for knowledge points 0-17.
Starting from the data itself, the method processes the training data centrally to counter the loss of accuracy that dispersed data may cause, thereby improving model prediction accuracy. Meanwhile, by automatically adjusting the data distribution, the models can be deployed in a distributed manner simply and quickly, so distributed techniques accelerate model training and increase the availability of portrait generation in an online learning system.
The invention is further described above and is not limited to the details shown, since equivalent implementations are within the scope and range of equivalents of the claims.

Claims (7)

1. A student portrait generation method based on a classification neural network is characterized by comprising the following steps:
s100: k devices participating in knowledge tracking training send device identification numbers, local data tags and distribution information of the number of each tag to a central server;
s200: the central server generates student and question embedded vectors by using a mathematical model;
s300: the central server receives information sent by the equipment, counts data distribution for each proxy server, classifies generated embedded vectors of students and subjects by using a controllable classification method, builds models of students and subjects of different classes, and deploys data in a distributed manner;
s400: the central server remaps the classified student and question information and the weight parameter W of the initialized neural network model 0 And b 0 Sending the data to corresponding proxy servers together;
s500: each proxy server transmitting by means of a central serverThe data set parallel training model, the proxy server will update the group of the current training to obtain the weight parameter W * And b * Then all proxy servers will use the self-updated weight parameter W * And b * Continuing the training, and weighting the parameter W in the training process of the proxy server * And b * Sending to the central server, the central server sends the weight parameter W according to the proxy server * And b * Restoring the knowledge tracking model and predicting the knowledge state of the students;
s600: the central server updates the knowledge state of the student through a joint average algorithm, and the updated knowledge state is visualized to construct a user portrait of the student;
s700: and repeating the steps S500-S600 until the model converges and the training is finished, and obtaining the final student portrait.
2. The method for generating a student portrait based on a classification neural network according to claim 1, wherein K = M × N in step S100, where M is the number of user portraits and N is the number of topic features.
3. The method for generating a student portrait based on a classification neural network according to claim 1, wherein the mathematical model in step S200 is calculated by the following formulas (a) and (b):
X = [T, K, Q] (a);
Q = DNN(L, N, …, Time) (b);
in the formulas: X is the input-layer vector, formed by splicing the text information T, the knowledge point vector K, and the vector of the compressed secondary information; T is the text information; K is the knowledge point vector; Q is the Q matrix, representing the relation between questions and knowledge points; L is the question difficulty; N is the average number of hints per student answer; Time is the students' average answering time; and the compressed secondary information is formed by jointly training and compressing the secondary information of difficulty, number of hints, and training time.
4. The method for generating a student portrait based on a classification neural network according to claim 1, wherein the controllable classification method in step S300 classifies the generated student and topic embedded vectors using the K-means algorithm represented by the following formulas (c) and (d):
c^(i) := argmin_j ||x^(i) - μ_j||^2 (c);
μ_j := Σ_i 1{c^(i) = j} x^(i) / Σ_i 1{c^(i) = j} (d);
in the formulas: c^(i) is the class among the k classes closest to student i; μ_j is the centroid of class j; and x^(i) is the n-dimensional vector of student i.
5. The classification-neural-network-based student portrait generation method according to claim 1, wherein the remapping in step S400 is the attention mechanism of the Bert neural network model, calculated by the following formulas (e) and (f):
A = softmax(Q · K^T / √d_k) (e);
Attention(Q, K, V) = A · V (f);
in the formulas: Q is the Query variable through which the information important to the query is found among all keys K; K is the address of the information, pointing to a Value; and V is the content of the information.
6. The method for generating a student portrait based on a classification neural network according to claim 1, wherein the weight parameters W* and b* in step S500 update themselves through the following formulas (g) to (l):
f_t = σ(W_f · [h_{t-1}, x_t] + b_f) (g);
i_t = σ(W_i · [h_{t-1}, x_t] + b_i) (h);
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C) (i);
C_t = f_t * C_{t-1} + i_t * C̃_t (j);
o_t = σ(W_o · [h_{t-1}, x_t] + b_o) (k);
h_t = o_t * tanh(C_t) (l);
in the formulas: σ is the sigmoid activation function; h_{t-1} and x_t denote the hidden layer and the input layer of the neural network, the next hidden layer being trained iteratively from the previous hidden-layer and input-layer information; f_t is the forget gate, a weight on C_{t-1} used in computing C_t; i_t is the input gate; C_t is the information of the current neuron; C̃_t is the cell-state update value; o_t is the output gate, which determines the hidden-layer information; h_t is the hidden layer; W_f is the weight vector of the forget-gate cell state; W_C is the weight vector for computing the current neuron information; W_i is the weight vector of the input cell state; W_o is the weight vector of the output cell state; and b_f, b_i, b_C, and b_o are the corresponding bias terms.
7. The method for generating a student portrait based on a classification neural network according to claim 1, wherein the knowledge state in step S600 is updated by the joint average algorithm of the following formula (m):
M = (1/N) Σ_{i=1}^{N} M_i (m);
in the formula: M_i is the knowledge state of the student under each batch of training on the N categories of topic sets.
CN202211266946.3A 2022-10-17 2022-10-17 Student portrait generation method based on classification neural network Pending CN115577107A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211266946.3A CN115577107A (en) 2022-10-17 2022-10-17 Student portrait generation method based on classification neural network


Publications (1)

Publication Number Publication Date
CN115577107A true CN115577107A (en) 2023-01-06

Family

ID=84584770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211266946.3A Pending CN115577107A (en) 2022-10-17 2022-10-17 Student portrait generation method based on classification neural network

Country Status (1)

Country Link
CN (1) CN115577107A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination