CN113569690A - Classroom teaching quality review control method based on deep learning - Google Patents

Classroom teaching quality review control method based on deep learning

Info

Publication number
CN113569690A
CN113569690A (application CN202110824604.8A)
Authority
CN
China
Prior art keywords
information
model
image
teaching
deep learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110824604.8A
Other languages
Chinese (zh)
Inventor
蒋建军
杨爽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN202110824604.8A
Publication of CN113569690A
Current legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10 Services
    • G06Q 50/20 Education
    • G06Q 50/205 Education administration or guidance

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Educational Administration (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Educational Technology (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention relates to a classroom teaching quality review control method based on deep learning, which specifically comprises the following steps: S1, extracting the image information and audio information of a fixed time slot as a basic unit, using a time-slice-based video sampling mode; S2, inputting the basic unit into a classroom teaching detection model and outputting students' facial expression information, limb action information and language emotion information; and S3, inputting the facial expression information, limb action information and language emotion information into a teaching liveness evaluation model, outputting an evaluation result, and calculating a classroom teaching hotspot graph from the evaluation result with a sliding window technique. Compared with the prior art, the method has advantages such as allowing a teacher to directly locate the time periods of high classroom teaching quality and review his or her teaching state there, which better guides the teacher to understand the influence of specific teaching modes on teaching quality, to review his or her teaching methods, and to better improve teaching ability.

Description

Classroom teaching quality review control method based on deep learning
Technical Field
The invention relates to the technical field of education, in particular to a classroom teaching quality review control method based on deep learning.
Background
Existing video-based teaching quality evaluation systems have developed rapidly in the intelligent era. They generally process teaching videos with posture recognition algorithms from image processing in order to analyze teacher classroom behavior, student classroom behavior and student attendance. From this basic analysis they further obtain teacher information, the proportion of time spent writing on the board, the total number of classroom questions asked by the teacher, the total number of times students raise their hands in class, and student attendance information, thereby helping teachers with auxiliary evaluation of teaching quality. However, current teaching quality assessment systems have the following problems:
firstly, dense frame extraction during video sampling generates a large amount of picture data, and the processing speed of traditional image processing algorithms is limited, so the prior art is slow in use and costly in computing resources;
secondly, this huge time cost means that teachers cannot receive timely feedback on their teaching, which loses part of the value the technology is meant to deliver;
third, the evaluation data given by the prior art are all discrete statistics, so a teacher can only roughly grasp the teaching quality of the whole course and cannot clearly determine which specific teaching mode noticeably improves classroom liveness.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provide a classroom teaching quality review control method based on deep learning, so that teachers can directly locate the time periods of high classroom teaching quality and review their teaching states there, guiding them to better understand the influence of specific teaching modes on teaching quality, to review their teaching methods, and to better improve their teaching ability.
The purpose of the invention can be realized by the following technical scheme:
a classroom teaching quality duplication control method based on deep learning specifically comprises the following steps:
s1, extracting image information and audio information of a fixed time slot as basic units in a video sampling mode based on time slices;
s2, inputting a classroom teaching detection model by the basic unit, and outputting facial expression information, limb action information and language emotion information of a student;
and S3, inputting the facial expression information, the limb action information and the language emotion information into a teaching liveness evaluation model, outputting an evaluation result, and calculating to obtain a classroom teaching hotspot graph through a sliding window technology according to the evaluation result.
In step S1, the start frame, the middle frame and the end frame are extracted from the image information as the image samples corresponding to the basic unit, and audio of a preset period is extracted from the audio information as the audio sample corresponding to the basic unit.
Further, the image samples and audio samples in the basic unit are combined, indexed by their time information, and assembled into a complete data set in the COCO data set format, as sketched below.
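As an illustration only, a minimal sketch of such a time-indexed data set manifest is given below; the field names time_index and audio_file and the unit record layout are assumptions, since standard COCO records only image metadata and annotations.

```python
import json

def build_coco_index(units, out_path="dataset.json"):
    """units: assumed list of dicts, e.g.
    {"t0": 0.0, "image_files": ["u0_a.jpg", "u0_b.jpg", "u0_c.jpg"],
     "audio_file": "u0.wav"}."""
    data = {"info": {"description": "classroom basic units"},
            "images": [], "annotations": []}
    for uid, unit in enumerate(units):
        for j, img_file in enumerate(unit["image_files"]):
            data["images"].append({
                "id": uid * 3 + j,            # three frames per basic unit
                "file_name": img_file,
                "time_index": unit["t0"],     # time stamp used as the index
                "audio_file": unit["audio_file"],
            })
    with open(out_path, "w") as f:
        json.dump(data, f, indent=2)
```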
The limb motion information includes head motion information and arm motion information.
The classroom teaching detection model comprises an image-based task object recognition model, an image-based image distinguishing model, an image-based facial expression recognition model and an audio-based emotion analysis model.
The image-based task object recognition model adopts an improved YOLO V3 model adapted to the classroom environment, and adopts 3 smaller hyper-parameters as the prior frame parameters.
Further, the image-based task object recognition model marks the positions of all people in the image sample in one basic unit for use by subsequent models.
The image-based image distinguishing model adopts a deep-learning CNN classification network, with SIGMOID as the activation function of the output layer of the CNN classification network.
The image-based facial expression recognition model adopts a multi-classification facial expression recognition deep learning model.
Further, the expression types collected by the multi-classification facial expression recognition deep learning model comprise anger, calmness, fear, joy, sadness, surprise and tension.
Further, the multi-classification facial expression recognition deep learning model clusters the collected expressions, and the student states corresponding to the clustering results include a conscientious learning state, a learning-irrelevant state and a negative learning state.
The audio-based emotion analysis model includes a first layer of analysis and a second layer of analysis.
Further, the first layer of analysis retains the high-frequency human voice signal; the second layer of analysis converts the audio information into text information through speech recognition, converts the text into a data structure the neural network can process through One-hot coding, and learns the meaning of the text word by word from left to right through a recurrent neural network to complete the emotion classification, where the emotion classes include positive, negative, etc.
Further, the audio-based emotion analysis model uses the result of the first layer of analysis as weight information to weight the result of the second layer of analysis, which then feeds into the final teaching liveness evaluation model.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention uses a continuous time-slice-based sampling mode and combines the image signal and the audio signal by time, obtaining more information that can be explored; each time slice uses 3 frames to represent the state of the whole slice, which greatly reduces the data volume of subsequent image processing.
2. By using an improved YOLO-V3 deep-learning task object recognition model adapted to the classroom teaching environment, the invention clusters in advance hyper-parameters better suited to recognizing small task objects, matching the recognition requirements for small objects in the classroom.
3. By using a recurrent neural network adapted to short natural language texts, the invention reduces the model complexity of the neural network, lowers the training difficulty, and saves training and prediction time, so teachers can quickly review the teaching process in a short time, approaching a real-time experience.
4. The invention uses the sliding window technique to smooth discrete classroom assessment data into continuous information on a time axis, obtaining a classroom teaching hotspot graph; this provides a brand-new teaching quality assessment and guidance mode and guides teachers more intuitively in reviewing the teaching process.
Drawings
FIG. 1 is a schematic flow diagram of the present invention;
fig. 2 is a schematic frame diagram of a teaching liveness evaluation model in an embodiment of the present invention.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. The present embodiment is implemented on the premise of the technical solution of the present invention, and a detailed implementation manner and a specific operation process are given, but the scope of the present invention is not limited to the following embodiments.
Examples
As shown in fig. 1, a classroom teaching quality review control method based on deep learning specifically includes the following steps:
S1, extracting the image information and audio information of a fixed time slot as a basic unit, using a time-slice-based video sampling mode;
S2, inputting the basic unit into a classroom teaching detection model and outputting students' facial expression information, limb action information and language emotion information;
S3, inputting the facial expression information, limb action information and language emotion information into a teaching liveness evaluation model, outputting an evaluation result, and calculating a classroom teaching hotspot graph from the evaluation result with a sliding window technique.
In step S1, the start frame, the middle frame and the end frame are extracted from the image information as the image samples corresponding to the basic unit, and 1 second of audio is extracted from the audio information as the audio sample corresponding to the basic unit.
The image samples in the basic unit are combined with the basic unit's audio samples, indexed by time information, to make a complete data set in the COCO data set format.
In this embodiment, the teaching video is divided equally into time slices of 2.5 seconds each, which serve as the basic data set units; within each 2.5-second basic video unit one third of the continuous video is sampled at random, and the image signal and the audio signal are split apart, as sketched below.
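A minimal sketch of this sampling step, assuming OpenCV and a video whose audio track is exported separately; representing each slice by its start, middle and end frames follows the description above, while the unit record layout is hypothetical.

```python
import cv2

SLICE_SECONDS = 2.5  # length of one basic unit, from this embodiment

def sample_basic_units(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    per_slice = int(fps * SLICE_SECONDS)
    units = []
    for start in range(0, total - per_slice + 1, per_slice):
        # start, middle and end frame stand in for the whole time slice
        picks = [start, start + per_slice // 2, start + per_slice - 1]
        images = []
        for idx in picks:
            cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
            ok, frame = cap.read()
            if ok:
                images.append(frame)
        units.append({"t0": start / fps, "images": images})
    cap.release()
    return units
```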
The limb motion information includes head motion information and arm motion information.
The classroom teaching detection model includes an image-based task object recognition model, an image-based image differentiation model, an image-based facial expression recognition model, and an audio-based emotion analysis model.
The image-based task object recognition model adopts an improved YOLO V3 model adapted to the classroom environment and uses 3 smaller hyper-parameters as the prior frame parameters, which helps speed up the computation; a sketch of how such priors can be derived follows.
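One way to obtain such priors, sketched below under the assumption that labeled (width, height) boxes are available, is k-means over box dimensions; plain Euclidean distance is used here for brevity, whereas YOLO V3 originally clusters with an IoU-based distance.

```python
import numpy as np

def kmeans_anchors(wh, k=3, iters=100, seed=0):
    """wh: N x 2 array of (width, height) of labeled person boxes."""
    rng = np.random.default_rng(seed)
    wh = np.asarray(wh, dtype=np.float32)
    centers = wh[rng.choice(len(wh), size=k, replace=False)]
    for _ in range(iters):
        # assign each box to its nearest center, then re-estimate centers
        d = np.linalg.norm(wh[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = wh[labels == j].mean(axis=0)
    return centers  # 3 small anchors suited to small, distant students
```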
In this embodiment, the image-based task object recognition model uses ResNet as the backbone network to extract picture features, obtaining 3 feature maps of different sizes, which helps detect targets at different scales and improves detection accuracy.
The image-based task object recognition model marks all of the positions of people in the image sample in one base unit for use by subsequent models.
The image-based image distinguishing model adopts a deep-learning CNN classification network with SIGMOID as the activation function of the output layer, which simplifies the problem of extracting facial feature points in complex face recognition into several binary classification problems and greatly reduces both the prediction time and the training complexity of the neural network. The decomposed binary problems mainly concern whether the head is raised and whether the arm is placed on the desk, and thus provide bottom-layer data support for the subsequent teaching evaluation model. The image-based facial expression recognition model works on top of the head-up detection, which avoids unnecessary computation on the facial expressions of students who are not looking up, saving computing resources and reducing the model's computation time. A sketch of such a multi-label network follows.
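A minimal sketch of such a network, with an assumed layer stack (the patent does not specify one); the two sigmoid outputs correspond to the two binary questions, head raised and arm on desk.

```python
from tensorflow.keras import layers, models

def build_posture_classifier(input_shape=(96, 96, 3)):
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        # SIGMOID outputs: each unit is an independent yes/no decision,
        # so the task decomposes into two binary classification problems
        layers.Dense(2, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```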
The image-based facial expression recognition model employs a multi-classification facial expression recognition deep learning model.
The expression types collected by the multi-classification facial expression recognition deep learning model comprise anger, calmness, fear, joy, sadness, surprise and tension.
The multi-classification facial expression recognition deep learning model clusters the collected expressions, and the student states corresponding to the clustering results include a conscientious learning state, a learning-irrelevant state and a negative learning state.
The audio-based emotion analysis model includes a first layer of analysis and a second layer of analysis.
The first layer of analysis retains the high-frequency human voice signal; the second layer of analysis first converts the audio information into text information through speech recognition, converts the text into a data structure the neural network can process through One-hot coding, and then learns the meaning of the text word by word from left to right through a recurrent neural network to complete the emotion classification, where the emotion classes include positive, negative, etc. A sketch of such a classifier follows.
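A minimal sketch of the second-layer classifier under assumed sizes (vocabulary, sequence length and hidden width are not given in the patent): one-hot token vectors are read left to right by a simple recurrent layer and mapped to three emotion classes.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE = 5000   # assumed vocabulary size
MAX_LEN = 40        # classroom utterances are short

def build_text_emotion_model():
    model = models.Sequential([
        # input: MAX_LEN one-hot vectors, read word by word, left to right
        layers.SimpleRNN(64, input_shape=(MAX_LEN, VOCAB_SIZE)),
        layers.Dense(3, activation="softmax"),  # positive / negative / other
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model
```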
The recurrent neural network underpins the invention's low computing resource occupation and short running time.
The audio-based emotion analysis model uses the result of the first layer of analysis as weight information to weight the result of the second layer of analysis, which then feeds into the final teaching liveness evaluation model.
As shown in fig. 2, in specific implementation, the teaching liveness evaluation model mainly performs the following steps:
s301, obtaining the current classroom active state weight X through the first-layer analysis of an emotion analysis model based on audio, wherein the weight X is a floating point number within the range of 0.5 to 1, and numerical values are given by the model according to the state;
s302, obtaining the weight Y of the whole emotion condition of the current classroom through the analysis of a second layer of the emotion analysis model based on the audio, wherein the weight Y is a floating point number within the interval of 0.5 to 1, and the numerical value is given by the model according to the state;
s303, acquiring whether the teacher is in a teaching state or not through the analysis of a second layer of the emotion analysis model based on the audio, wherein the teaching state is marked as M, and the non-teaching state is marked as N;
s304, obtaining K pieces of image information containing independent human bodies in each basic unit through image information obtained by the task object recognition model based on the images;
s305, the image distinguishing model based on the image analyzes the image information of each independent human body, whether the student raises the head and whether the arm is placed on the desktop or not is judged according to the result obtained by the image distinguishing model based on the image, and four states are generated in total through state combination, wherein the four states include that the student does not raise the head and does not have the arm and are marked as a state A; the student does not raise the head and has an arm, and the state is marked as state B; the student does not have arms when raising the head and is marked as a state C; the student raises his head and appears arms, and the state is marked as state D;
s306, analyzing each independent human body image with a head-up state by the facial expression recognition model based on the image, obtaining a result by the facial expression recognition model based on the image, judging the expression state of the student, and generating three states in total, wherein the three states include a state E which is serious for the student; student negative, labeled state F; student irrelevant state, labeled as state G;
s307, constructing a matrix of K X11 for storing state information, wherein K rows respectively store data of K independent individuals, and 11 columns respectively occupy two columns in sequence for floating point numbers X and Y; states M and N are Boolean type and occupy two columns in sequence, wherein 1 represents that the state is true; states A, B, C and D are Boolean-type occupying four columns in sequence, where 1 represents that the state is true; states E, F and G are Boolean-type occupying three columns in sequence, where 1 represents that the state is true; totally occupying 11 columns, and if the teaching video time is T seconds, the number S of the basic units is as follows:
Figure BDA0003173238960000061
after all information processing is completed, constructing a matrix of S X K X11 and storing all processed information;
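A sketch of the K x 11 row layout described in S307, assuming NumPy and a hypothetical per-person record; the E/F/G columns apply only to head-up individuals and are left at zero otherwise.

```python
import numpy as np

def make_state_matrix(individuals):
    """individuals: assumed list of dicts with keys x, y (floats),
    teaching (bool), posture ('A'..'D'), expression ('E'..'G' or None)."""
    K = len(individuals)
    mat = np.zeros((K, 11), dtype=np.float32)
    for i, p in enumerate(individuals):
        mat[i, 0] = p["x"]                        # classroom active-state weight X
        mat[i, 1] = p["y"]                        # overall emotion weight Y
        mat[i, 2 if p["teaching"] else 3] = 1     # M / N columns
        mat[i, 4 + "ABCD".index(p["posture"])] = 1        # A-D columns
        if p.get("expression") in ("E", "F", "G"):
            mat[i, 8 + "EFG".index(p["expression"])] = 1  # E-G columns
    return mat
```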
s308, sequentially extracting the matrixes K11 from the stored matrixes S K11 to calculate the classroom activity state of the basic unit, wherein when the state is true, the state weight distribution is as shown in the following table 1:
Table 1. State weight assignments
State M 0.8
State N 0.2
State A -4
State B -1
State C 1
State D 2
State E 3
State F 2
State G 1
The basic unit teaching liveness score O(k) is:
O(k)=P(k)*(-4*Ak-1*Bk+1*Ck+2*Dk+3*Ek+2*Fk+1*Gk)
wherein the basic unit teaching basis weight P (k) is:
P(k)=Xk*Yk*(0.8*Mk+0.2*Nk)
After the teaching liveness scores of all basic units have been calculated, the results are stored in a one-dimensional array of length S for subsequent use; a sketch of this scoring step follows.
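A sketch of the per-unit scoring under the weight table above; how P(k) combines with the state weights and how the K individual scores aggregate into one unit score are assumptions reconstructed from the surrounding definitions.

```python
import numpy as np

STATE_WEIGHTS = np.array([-4, -1, 1, 2, 3, 2, 1], dtype=np.float32)  # A..G

def unit_score(mat):
    """mat: K x 11 state matrix of one basic unit (see make_state_matrix)."""
    X, Y = mat[:, 0], mat[:, 1]
    M, N = mat[:, 2], mat[:, 3]
    P = X * Y * (0.8 * M + 0.2 * N)      # basis weight P(k) per individual
    O = mat[:, 4:11] @ STATE_WEIGHTS     # weighted posture/expression states
    return float(np.sum(P * O))          # assumed aggregation over the K rows
```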
The step of calculating the classroom teaching hotspot graph in the step S3 specifically includes the following steps:
s309, taking out all contents of the one-dimensional array with the length of S in the step S308, finding the most approximate Smax and the minimum value Smin, carrying out normalization processing on all values in the S array, and carrying out 10-division processing after normalization is finished;
s310, taking a sliding window with 3 intervals and 2 step length, wherein data taken by the sliding window are marked as a1, a2, a3 and a4 respectively, and according to the situation that the classroom liveness rarely changes suddenly in a short time, the sliding window is used for smoothing the numerical value in the S, and some unreasonable fluctuations are filtered;
s311, the processed data in the step S310 are taken out, restoration processing is carried out according to the comparison of the index and the time axis of the original video, the converted time axis index is used as the X axis, the S one-dimensional array is used as the Y axis, an image curve is made, and the generated image curve is fitted to the time axis of the actual teaching video to form a video hotspot graph.
Through continuous time-slice-based sampling, the invention relates audio information to video information through time and processes the student actions and emotional states detected by the deep learning model at low time and computation cost. Weighted scoring connected along the time axis through an educational classroom quality evaluation model then yields a teaching liveness hotspot graph based on the time axis. Using the teaching video hotspot graph generated by this technique, the teacher can directly locate the time periods of high classroom teaching quality and review his or her teaching state there, which better guides the teacher to understand the influence of specific teaching modes on teaching quality and to review his or her teaching methods, so as to better improve teaching ability.
In addition, it should be noted that the specific embodiments described in this specification may differ in naming, and the above description is only an illustration of the structure of the invention. All equivalent or simple changes to the structure, characteristics and principles of the invention are included in the protection scope of the invention. Those skilled in the art may make various modifications or additions to the described embodiments or use similar methods without departing from the scope of the invention as defined in the appended claims.

Claims (10)

1. A classroom teaching quality review control method based on deep learning, characterized by comprising the following steps:
S1, extracting the image information and audio information of a fixed time slot as a basic unit, using a time-slice-based video sampling mode;
S2, inputting the basic unit into a classroom teaching detection model and outputting students' facial expression information, limb action information and language emotion information;
S3, inputting the facial expression information, limb action information and language emotion information into a teaching liveness evaluation model, outputting an evaluation result, and calculating a classroom teaching hotspot graph from the evaluation result with a sliding window technique.
2. The deep-learning-based classroom teaching quality review control method as claimed in claim 1, wherein step S1 extracts a start frame, a middle frame and an end frame from the image information as the image samples corresponding to the basic unit, and extracts audio of a preset period from the audio information as the audio sample corresponding to the basic unit.
3. The deep-learning-based classroom teaching quality review control method as claimed in claim 1, wherein the classroom teaching detection model includes an image-based task object recognition model, an image-based image distinguishing model, an image-based facial expression recognition model, and an audio-based emotion analysis model.
4. The deep-learning-based classroom teaching quality review control method as claimed in claim 3, wherein the image-based task object recognition model adopts an improved YOLO V3 model adapted to the classroom environment, and adopts the 3 smaller hyper-parameters as the prior frame parameters.
5. The deep-learning-based classroom teaching quality review control method as claimed in claim 3, wherein the image-based image distinguishing model employs a deep-learning CNN classification network, with SIGMOID as the activation function of the output layer of the CNN classification network.
6. The deep-learning-based classroom teaching quality review control method as claimed in claim 3, wherein the image-based facial expression recognition model employs a multi-classification facial expression recognition deep learning model.
7. The deep-learning-based classroom teaching quality review control method as claimed in claim 6, wherein the expression types collected by the multi-classification facial expression recognition deep learning model include anger, calmness, fear, joy, sadness, surprise and tension.
8. The deep-learning-based classroom teaching quality review control method as claimed in claim 7, wherein the multi-classification facial expression recognition deep learning model clusters the collected expressions, and the student states corresponding to the clustering results include a conscientious learning state, a learning-irrelevant state and a negative learning state.
9. The deep-learning-based classroom teaching quality review control method as claimed in claim 3, wherein the audio-based emotion analysis model includes a first layer of analysis and a second layer of analysis.
10. The deep-learning-based classroom teaching quality review control method as claimed in claim 9, wherein the first layer of analysis retains the high-frequency human voice signal, and the second layer of analysis converts the audio information into text information through speech recognition and learns the meaning of the text word by word from left to right through a recurrent neural network to complete the emotion classification.
CN202110824604.8A 2021-07-21 2021-07-21 Classroom teaching quality review control method based on deep learning Withdrawn CN113569690A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110824604.8A CN113569690A (en) 2021-07-21 2021-07-21 Classroom teaching quality review control method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110824604.8A CN113569690A (en) 2021-07-21 2021-07-21 Classroom teaching quality review control method based on deep learning

Publications (1)

Publication Number Publication Date
CN113569690A true CN113569690A (en) 2021-10-29

Family

ID=78166045

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110824604.8A Withdrawn CN113569690A (en) 2021-07-21 2021-07-21 Classroom teaching quality review control method based on deep learning

Country Status (1)

Country Link
CN (1) CN113569690A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116051324A (en) * 2022-12-31 2023-05-02 华中师范大学 Student classroom participation state evaluation method and system based on gesture detection


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20211029)