CN110363084A - Class state detection method and apparatus, storage medium, and electronic device - Google Patents
Class state detection method and apparatus, storage medium, and electronic device
- Publication number
- CN110363084A CN110363084A CN201910495719.XA CN201910495719A CN110363084A CN 110363084 A CN110363084 A CN 110363084A CN 201910495719 A CN201910495719 A CN 201910495719A CN 110363084 A CN110363084 A CN 110363084A
- Authority
- CN
- China
- Prior art keywords
- class
- key frame
- frame images
- feature
- emotional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/2135—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on approximation criteria, e.g. principal component analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
The embodiments of the present application disclose a class state detection method and apparatus, a storage medium, and an electronic device. The method includes: obtaining key frame images from a class video file; parsing the key frame images to obtain the facial images and facial action features in the key frame images; obtaining the in-class emotional features and the emotional change features of the user corresponding to each facial image; and determining the class state of the user based on the facial action features, the in-class emotional features, and the emotional change features. The embodiments of the present application thereby reduce the workload of manual screening and improve the accuracy of online-classroom teaching-status assessment.
Description
Technical field
This application relates to the field of computer technology, and in particular to a class state detection method and apparatus, a storage medium, and an electronic device.
Background technique
Online education enterprises generate a large number of class video files every day. Correctly assessing the state of mind of online-classroom teachers from these video files allows an enterprise to appraise each teacher's teaching level accurately. This is of great significance for optimizing the enterprise's performance management, improving the course arrangement of online classrooms, raising classroom teaching quality, strengthening the enterprise's control over the teaching process, and making courses more attractive to students.
Existing class state monitoring mainly relies on manually selected concentration indexes derived from classroom video recordings of students, such as the number of times students raise or lower their heads, the number of times they stand up to answer questions, the number of times they speak with classmates, together with the students' evaluations of the class. These indexes are weighted and averaged to obtain the class state of the classroom teacher. This assessment method relies solely on weighted scoring over a few manually selected simple indicators; the evaluation method is one-dimensional and its accuracy is low.
Summary of the invention
The embodiments of the present application provide a class state detection method and apparatus, a storage medium, and an electronic device, which can reduce the workload of manual screening and improve the accuracy of online-classroom teaching-status assessment. The technical solution is as follows:
In a first aspect, an embodiment of the present application provides a class state detection method, the method comprising:
obtaining key frame images from a class video file;
parsing the key frame images to obtain the facial images and facial action features in the key frame images;
obtaining the in-class emotional features and the emotional change features of the user corresponding to each facial image;
determining the class state of the user based on the facial action features, the in-class emotional features, and the emotional change features.
Optionally, obtaining the key frame images from the class video file comprises:
acquiring the class video file, and obtaining the image quality of each frame image in the class video file;
filtering out the images in the class video file whose image quality is less than a preset quality threshold, and generating the key frame images from the remaining frames.
Optionally, after obtaining the key frame images from the class video file and before parsing the key frame images, the method further comprises:
performing dimensionality reduction on the key frame images.
Optionally, parsing the key frame images to obtain the facial images and facial action features in the key frame images comprises:
identifying the facial images in the key frame images using a face recognition algorithm;
parsing the facial images to obtain the facial action features corresponding to the facial images.
Optionally, after identifying the facial images in the key frame images using the face recognition algorithm and before parsing the facial images, the method further comprises:
adjusting the facial images to images of a preset size.
Optionally, obtaining the in-class emotional features and the emotional change features of the user corresponding to the facial image comprises:
extracting, using a trained neural network model, the in-class emotional features of the user corresponding to the facial image and the emotional change features of the user.
Optionally, determining the class state of the user based on the facial action features, the in-class emotional features, and the emotional change features comprises:
performing normalization and regularization on the facial action features, the in-class emotional features, and the emotional change features to generate an emotional state matrix;
classifying the emotional state matrix using a gradient boosting algorithm, determining the target state class corresponding to the emotional state matrix from a set of state classes, and taking the target state class as the class state of the user.
In a second aspect, an embodiment of the present application provides a class state detection apparatus, the apparatus comprising:
a key frame image obtaining module, configured to obtain key frame images from a class video file;
a first feature obtaining module, configured to parse the key frame images to obtain the facial images and facial action features in the key frame images;
a second feature obtaining module, configured to obtain the in-class emotional features and the emotional change features of the user corresponding to each facial image;
a class state determining module, configured to determine the class state of the user based on the facial action features, the in-class emotional features, and the emotional change features.
Optionally, the key frame image obtaining module comprises:
a quality obtaining unit, configured to obtain the image quality of each frame image in the class video file;
an image filtering unit, configured to filter out the images in the class video file whose image quality is less than a preset quality threshold, and generate the key frame images.
Optionally, the apparatus further comprises:
a dimensionality reduction module, configured to perform dimensionality reduction on the key frame images.
Optionally, the first feature obtaining module comprises:
a face identification unit, configured to identify the facial images in the key frame images using a face recognition algorithm;
an action feature obtaining unit, configured to parse the facial images to obtain the facial action features corresponding to the facial images.
Optionally, the first feature obtaining module further comprises:
a size adjusting unit, configured to adjust the facial images to images of a preset size.
Optionally, the second feature obtaining module is specifically configured to:
extract, using a trained neural network model, the in-class emotional features of the user corresponding to the facial image and the emotional change features of the user.
Optionally, the class state determining module comprises:
a matrix generation unit, configured to perform normalization and regularization on the facial action features, the in-class emotional features, and the emotional change features to generate an emotional state matrix;
a matrix classification unit, configured to classify the emotional state matrix using a gradient boosting algorithm, determine the target state class corresponding to the emotional state matrix from a set of state classes, and take the target state class as the class state of the user.
In a third aspect, an embodiment of the present application provides a computer storage medium storing a plurality of instructions, the instructions being adapted to be loaded by a processor to execute the above method steps.
In a fourth aspect, an embodiment of the present application provides a server, which may comprise a processor and a memory, wherein the memory stores a computer program, and the computer program is adapted to be loaded by the processor to execute the above method steps.
The beneficial effects brought by the technical solutions provided in some embodiments of the present application include at least the following:
In one or more embodiments of the present application, key frame images are obtained from a class video file and parsed to obtain the facial images and facial action features in the key frame images; the in-class emotional features and emotional change features of the user corresponding to each facial image are then obtained; and the class state of the user can be determined based on the facial action features, the in-class emotional features, and the emotional change features. Because the class state of the user corresponding to a facial image can be determined directly from the recognition and extraction of the facial action features, in-class emotional features, and emotional change features in the class video file, no manually selected indexes need to be computed. This reduces the workload of manual screening, improves the accuracy of online-classroom teaching-status assessment, increases the rate at which online classroom video can be processed, and makes large-scale classroom assessment tasks tractable.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a class state detection method provided by an embodiment of the present application;
Fig. 2 is a schematic example of emotional state values provided by an embodiment of the present application;
Fig. 3 is a schematic flowchart of a class state detection method provided by an embodiment of the present application;
Fig. 4 is a schematic example of facial reference points provided by an embodiment of the present application;
Fig. 5 is a structural schematic diagram of a mini-xception model provided by an embodiment of the present application;
Fig. 6 is a schematic example of a classifier identifying a class state provided by an embodiment of the present application;
Fig. 7 is a structural schematic diagram of a class state detection apparatus provided by an embodiment of the present application;
Fig. 8 is a structural schematic diagram of a key frame image obtaining module provided by an embodiment of the present application;
Fig. 9 is a structural schematic diagram of a class state detection apparatus provided by an embodiment of the present application;
Fig. 10 is a structural schematic diagram of a first feature obtaining module provided by an embodiment of the present application;
Fig. 11 is a structural schematic diagram of a class state determining module provided by an embodiment of the present application;
Fig. 12 is a structural schematic diagram of an electronic device provided by an embodiment of the present application.
Specific embodiment
To make the purposes, technical solutions, and advantages of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings.
When the following description refers to the accompanying drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of devices and methods consistent with some aspects of the present application as detailed in the appended claims.
In the description of the present application, it should be understood that terms such as "first" and "second" are used for descriptive purposes only and cannot be interpreted as indicating or implying relative importance. For those of ordinary skill in the art, the specific meanings of the above terms in the present application can be understood according to the specific situation. In addition, unless otherwise indicated, "a plurality of" in the description of the present application means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate: A alone, both A and B, or B alone. The character "/" generally indicates an "or" relationship between the associated objects.
The present application is described below with reference to specific embodiments.
The class state detection method provided by the embodiments of the present application is described in detail below with reference to Fig. 1 to Fig. 6. The method may be implemented by a computer program and may run on a class state detection apparatus based on the von Neumann architecture. The computer program may be integrated into an application, or may run as an independent tool-class application. The class state detection apparatus in the embodiments of the present application may be a user terminal, including but not limited to: a personal computer, a tablet computer, a handheld device, an in-vehicle device, a wearable device, a computing device, or another processing device connected to a wireless modem. User terminals may be called different names in different networks, such as: user equipment, access terminal, subscriber unit, subscriber station, mobile station, mobile, remote station, remote terminal, mobile device, user terminal, terminal, wireless communication device, user agent or user apparatus, cellular telephone, cordless phone, personal digital assistant (PDA), or a terminal device in a 5G network or a future evolved network.
Refer to Fig. 1, which is a schematic flowchart of a class state detection method provided by an embodiment of the present application. As shown in Fig. 1, the method of the embodiment of the present application may include the following steps:
S101: obtain key frame images from a class video file.
A class video file refers to a video file containing a teacher and/or students. The video file may be recorded in real time, or may already have been recorded. In the embodiment of the present application, the class video file may be the lecture video of a teacher in networked remote online education.
A class video file contains multiple frames, and each frame carries different image information, such as image blurriness, facial completeness, and image resolution. The image quality can be determined from this image information.
When the image quality does not meet the requirements, the frame is considered an invalid image. The images remaining after the invalid images are removed from the class video file are the key frame images.
In a specific implementation, when a user (such as a student) logs into a class application through a class state detection device (such as a mobile phone), or enters an online classroom through a web page, the class state detection device collects the class video file in real time. Alternatively, the class state detection device reads from a cache, or downloads over the network, a class video file that has already been collected and recorded. Quality testing is then performed on each frame in the class video file, for example determining whether the sharpness of the image meets the requirements and whether the face in the image is complete or partially missing, so as to extract the key frame images whose quality meets the requirements. The key frame images comprise multiple frames.
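The quality test in S101 can be sketched as follows. The patent does not fix a concrete quality metric, so this sketch uses the variance of the Laplacian as a blurriness score with an illustrative threshold; the function names and threshold value are placeholders, not part of the disclosed method:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Variance of the Laplacian response: low values indicate a blurry frame."""
    # 3x3 Laplacian kernel applied to the interior pixels of the image.
    lap = (gray[:-2, 1:-1] + gray[2:, 1:-1] + gray[1:-1, :-2] + gray[1:-1, 2:]
           - 4.0 * gray[1:-1, 1:-1])
    return float(lap.var())

def select_key_frames(frames, quality_threshold=50.0):
    """Keep only the frames whose quality score meets the preset threshold."""
    return [f for f in frames
            if laplacian_variance(f.astype(np.float64)) >= quality_threshold]

# Toy example: a high-contrast noisy frame vs. a flat (blurry-looking) frame.
rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
flat = np.full((64, 64), 128.0)
key_frames = select_key_frames([sharp, flat])
print(len(key_frames))  # 1 -- only the high-variance frame survives
```

A production system would combine several such scores (blurriness, facial completeness, resolution) before thresholding, as the embodiments describe.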
S102: parse the key frame images to obtain the facial images and facial action features in the key frame images.
The key frame images contain facial images, and a face recognition algorithm can be used to identify them.
A face recognition algorithm works as follows: after a face is detected and its key feature points are located, the main facial region is cropped out and, after preprocessing, fed into a back-end recognition algorithm. The recognition algorithm extracts the facial features, compares them with the known faces in the inventory, and completes the final identification.
Common face recognition algorithms include: feature-based recognition algorithms, appearance-based recognition algorithms, template-based recognition algorithms, recognition algorithms using neural networks, and recognition algorithms using support vector machines (SVM).
The facial action features of the identified facial images are parsed, so that a facial action matrix composed of this group of key frame images in frame order can be determined. A facial action feature can be understood as a facial orientation; for example, the facial action feature in one frame may be 30 degrees to the left, while the facial action feature in another frame may be 50 degrees to the left.
A facial action feature can be represented by a preset code (such as a binary code), for example 00000001, 00000010, 00000011, and so on. If there are 100 key frame images, the 100 facial action features form a facial action matrix, which may be a row vector, a column vector, or a matrix with multiple rows and columns (such as 10*10).
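Under the assumption that each facial orientation maps to one of the preset codes above (the specific orientation names here are illustrative), packing 100 per-frame codes into a 10*10 facial action matrix can be sketched as:

```python
import numpy as np

# Hypothetical orientation codes; the patent only says "a preset code
# such as a binary code" (00000001, 00000010, ...).
ORIENTATION_CODES = {"left_30": 0b00000001, "left_50": 0b00000010, "front": 0b00000011}

def action_matrix(per_frame_orientations, shape=(10, 10)):
    """Pack one orientation code per key frame into a facial action matrix."""
    codes = np.array([ORIENTATION_CODES[o] for o in per_frame_orientations],
                     dtype=np.uint8)
    return codes.reshape(shape)

frames = ["left_30", "left_50"] * 50  # 100 key frames
m = action_matrix(frames)
print(m.shape)  # (10, 10)
```

Passing `shape=(100,)` or `shape=(100, 1)` instead yields the row-vector or column-vector forms the embodiment also allows.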
S103: obtain the in-class emotional features of the user corresponding to the facial image and the emotional change features of the user.
The in-class emotional features can be embodied by combining aspects such as teaching mood, expression, and posture/action. A standard value and a variation range can be set for each aspect to determine a score for that aspect, and then a preset algorithm (such as averaging) can be used to obtain the in-class emotional feature of the current key frame image (which can be represented by a specific resulting value).
The emotional change feature can be understood as the change of the user's emotion between two adjacent key frame images. For example, the user's in-class emotion is happiness in the first key frame image, sadness in the second key frame image, and anger in the third key frame image. If happiness is taken as the standard and defined as 0, then each step toward the happy direction increases the corresponding value by 1 and, correspondingly, each step toward the sad direction decreases the corresponding value by 1.
As shown in Fig. 2, "happiness" corresponds to the value 0, "pride" corresponds to 1, "excitement" corresponds to 2, "worry" corresponds to -1, "sadness" corresponds to -2, and "anger" corresponds to -3.
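The value assignment of Fig. 2 and the frame-to-frame differencing described above can be sketched as follows; taking the change as the signed difference between consecutive values is one plausible reading of the description, and the function name is a placeholder:

```python
# Emotional state values from Fig. 2 of the patent.
EMOTION_VALUE = {
    "happiness": 0, "pride": 1, "excitement": 2,
    "worry": -1, "sadness": -2, "anger": -3,
}

def emotion_changes(per_frame_emotions):
    """Signed change between consecutive key frames (the emotional change feature)."""
    values = [EMOTION_VALUE[e] for e in per_frame_emotions]
    return [b - a for a, b in zip(values, values[1:])]

# The worked example above: happiness -> sadness -> anger.
changes = emotion_changes(["happiness", "sadness", "anger"])
print(changes)  # [-2, -1]
```

A negative change indicates movement toward the sad direction, a positive change toward the happy direction, matching the increment/decrement rule stated in the embodiment.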
The emotional change feature mainly describes the emotional change relationship of the user between two successive frames. Following the defined rules, a trained neural network model (such as a mini-xception model) can be used to extract the in-class emotion matrix and the emotional change matrix corresponding to the facial images, respectively.
S104: determine the class state of the user based on the facial action features, the in-class emotional features, and the emotional change features.
If the facial action features correspond to matrix A, the in-class emotional features correspond to matrix B, and the emotional change features correspond to matrix C, then A, B, and C are combined and normalization and regularization are performed to obtain matrix D. Matrix D is then used as the input of a pre-trained classifier (such as the gradient boosting tree algorithm XGBoost) to obtain the class state score of the user, and the class state of the user can then be determined based on preset threshold ranges.
The class state may include multiple states such as excellent, good, qualified, and unqualified. Each state can be identified by a threshold range. For example, 0-59 points is unqualified, 60-70 is qualified, 71-85 is good, and 86-100 is excellent.
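The final threshold mapping can be sketched as follows. The normalization and the XGBoost scoring itself are omitted; only the threshold ranges given above are encoded, and the band labels are direct translations:

```python
# Threshold ranges from the embodiment: 0-59 unqualified, 60-70 qualified,
# 71-85 good, 86-100 excellent.
GRADE_BANDS = [(59, "unqualified"), (70, "qualified"), (85, "good"), (100, "excellent")]

def class_state(score: float) -> str:
    """Map the classifier's class-state score to a state label."""
    for upper_bound, label in GRADE_BANDS:
        if score <= upper_bound:
            return label
    raise ValueError("score out of range 0-100")

print(class_state(72))  # good
```

In the full pipeline, `score` would be the output of the pre-trained gradient boosting classifier applied to matrix D.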
It should be noted that a class video file can usually be understood as containing only one teacher; that is, each frame contains only one facial image, and it is the same facial image throughout. Of course, if the class video file contains multiple facial images, the frames need to be grouped according to each facial image, and the class state of the user corresponding to each face is then determined separately in the manner described above.
In one or more embodiments of the present application, key frame images are obtained from a class video file and parsed to obtain the facial images and facial action features in the key frame images; the in-class emotional features and emotional change features of the user corresponding to each facial image are then obtained; and the class state of the user can be determined based on the facial action features, the in-class emotional features, and the emotional change features. Because the class state of the user corresponding to a facial image can be determined directly from the recognition and extraction of the facial action features, in-class emotional features, and emotional change features in the class video file, no manually selected indexes need to be computed. This reduces the workload of manual screening, improves the accuracy of online-classroom teaching-status assessment, increases the rate at which online classroom video can be processed, and makes large-scale classroom assessment tasks tractable.
Refer to Fig. 3, which is a schematic flowchart of a class state detection method provided by an embodiment of the present application. This embodiment is described by way of an example in which the class state detection method is applied to a user terminal. The class state detection method may include the following steps:
S201: obtain the image quality of each frame image in the class video file.
A class video file refers to a video file containing a teacher and/or students. The video file may be recorded in real time, or may already have been recorded. In the embodiment of the present application, the class video file may be the lecture video of a teacher in networked remote online education.
A class video file contains multiple frames, and each frame carries different image information, such as image blurriness, facial completeness, and image resolution. The image quality can be determined from this image information.
When the image quality does not satisfy a preset image quality threshold, for example when the blurriness is greater than a preset blurriness threshold, and/or the facial completeness is less than a preset completeness threshold, and/or the image resolution is less than a resolution threshold, the frame is considered an invalid image.
S202: filter out the images in the class video file whose image quality is less than the preset quality threshold, and generate the key frame images.
That is, the identified invalid images are removed, and the images remaining in the class video file after the invalid images are removed are taken as the key frame images.
In other words, the key frame images are the images whose image quality is greater than or equal to the preset quality threshold: images with low blurriness, complete faces, high resolution, and so on.
S203: perform dimensionality reduction on the key frame images.
A key frame image is a two-dimensional image; a two-dimensional image of size p*q is a vector space of m = p*q dimensions. When the image has many pixels, the corresponding dimensionality is also large. For example, an image of 100*100 pixels lives in a 10000-dimensional image space.
In fact, however, not all dimensions contain useful information. Principal component analysis (PCA) can be used to convert a set of possibly correlated variables into a smaller uncorrelated subset by finding the directions of maximum variance in the data. A high-dimensional data set is often described by correlated variables, so the data is meaningful, and contains most of the information, only in some of the dimensions.
That is, PCA can be used to delete the unimportant dimensions of the key frame images, thereby reducing their dimensionality; for example, a 10000-dimensional key frame image becomes a 6000-dimensional image after dimensionality reduction, which reduces the amount of computation and improves computational efficiency.
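A minimal PCA sketch, computed via the singular value decomposition of the centered data rather than any particular library implementation; the toy sizes stand in for the 10000-to-6000 example in the embodiment:

```python
import numpy as np

def pca_reduce(X: np.ndarray, n_components: int) -> np.ndarray:
    """Project row-vector samples onto the top principal components
    (the directions of maximum variance in the data)."""
    Xc = X - X.mean(axis=0)  # center each dimension
    # Right singular vectors of the centered data are the principal axes,
    # already sorted by explained variance.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Toy stand-in: 20 flattened "frames" of 64 dimensions reduced to 8.
rng = np.random.default_rng(1)
frames = rng.normal(size=(20, 64))
reduced = pca_reduce(frames, 8)
print(reduced.shape)  # (20, 8)
```

Because the singular values are sorted in descending order, the retained components carry the largest variances, which is exactly the "keep only the informative dimensions" behavior described above.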
S204: identify the facial images in the key frame images using a face recognition algorithm.
For details, refer to S102; they are not repeated here.
S205: adjust the facial images to images of a preset size.
The preset size may be arbitrary; one feasible choice is to align the facial image to a 68*68 gray-level image according to the 68 key points (reference points) of the face.
The reference points are points indicating facial features, such as the facial contour, eye contours, nose, and lips. There may be 83 reference points, or 68 reference points as shown in Fig. 4; the specific number of points can be decided by developers according to their needs.
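The resizing in S205 can be sketched as a nearest-neighbor reduction to the 68*68 gray-level size. A real implementation would first warp the face so that the 68 reference points match a canonical template (a similarity transform estimated from the landmarks); that alignment step is omitted here and this function name is a placeholder:

```python
import numpy as np

def to_gray_68(face: np.ndarray, out: int = 68) -> np.ndarray:
    """Resize a grayscale face crop to the preset 68*68 size (nearest-neighbor).
    Landmark-based alignment to a canonical template is omitted."""
    h, w = face.shape
    rows = np.arange(out) * h // out  # source row index for each output row
    cols = np.arange(out) * w // out  # source column index for each output column
    return face[rows][:, cols]

# Toy grayscale face crop of 100*120 pixels.
face = (np.arange(100 * 120) % 256).astype(np.uint8).reshape(100, 120)
aligned = to_gray_68(face)
print(aligned.shape)  # (68, 68)
```

Fixing the input size this way lets the downstream emotion model (S207) consume every face crop with the same tensor shape.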
S206: parse the facial images to obtain the facial action features corresponding to the facial images.
The facial action features of the identified facial images are parsed, so that a facial action matrix composed of this group of key frame images in frame order can be determined. A facial action feature can be understood as a facial orientation; for example, the facial action feature in one frame may be 30 degrees to the left, while the facial action feature in another frame may be 50 degrees to the left.
Human face action feature can be indicated by pre-arranged code (such as binary code), such as 00000001,00000010,
00000011 etc..If there is 100 frame key frame images, this 100 face motion characteristics are constituted with human face action matrix, can be with
For row vector, or column vector can also be multiple lines and multiple rows (such as 10*10) matrix.
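The encoding just described can be sketched as follows; binning facial orientations into 10-degree integer codes is an assumption standing in for the preset binary codes of S206:

```python
import numpy as np

def face_action_matrix(yaw_degrees, shape=(10, 10)):
    """Encode one facial orientation per key frame into a facial action matrix.

    yaw_degrees: per-frame facial orientation, e.g. -30 means 30 degrees left.
    Each angle is mapped to a small integer code (here: 10-degree bins),
    analogous to the preset codes 00000001, 00000010, ... in S206.
    """
    codes = np.round(np.asarray(yaw_degrees) / 10).astype(int)  # 10-degree bins
    return codes.reshape(shape)                                  # 100 frames -> 10*10

yaws = np.linspace(-50, 49, 100)           # orientations over 100 key frames
action = face_action_matrix(yaws)
print(action.shape)                         # (10, 10)
```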
S207: extract, using a trained neural network model, the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user;
The in-class emotion feature can be reflected by a combination of aspects such as teaching mood, expression and posture. A standard value and a variation range can be set for each aspect to determine the score of that aspect, and a preset algorithm (such as averaging) can then be used to obtain the in-class emotion feature of the current key frame image (which can be represented by a specific value).
The emotion change feature can be understood as the change of the user's emotion between two adjacent key frame images. For example, the user's in-class emotion is happiness in the first key frame image, sadness in the second key frame image and anger in the third key frame image. If happiness is taken as the standard and defined as 0, then each additional expression toward the happy direction increases the corresponding value by 1; correspondingly, each additional expression toward the sad direction decreases the corresponding value by 1.
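This valence scheme can be sketched directly; the particular emotions placed on the scale and their positions are hypothetical, with happiness as the 0 reference as in the example above:

```python
# Hypothetical valence scale following S207: happiness is the reference (0);
# each step toward the happy direction adds 1, toward the sad direction subtracts 1.
VALENCE = {"surprise": 1, "happiness": 0, "neutral": -1, "sadness": -2, "anger": -3}

def emotion_change(per_frame_emotions):
    """Return the frame-to-frame emotion change values between adjacent key frames."""
    values = [VALENCE[e] for e in per_frame_emotions]
    return [b - a for a, b in zip(values, values[1:])]

# First frame happy, second sad, third angry, as in the example in S207
changes = emotion_change(["happiness", "sadness", "anger"])
print(changes)  # [-2, -1]
```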
The emotion change feature mainly describes the emotion change relationship of the user between two successive frames. According to the defined rules, a trained neural network model (such as a mini-xception model) can be used to extract the in-class emotion matrix and the emotion change matrix corresponding to the facial images.
mini-xception is a kind of improvement to Inception v3, which Google proposed after Inception; it mainly replaces the convolution operations in the original Inception v3 with depthwise separable convolutions. Fig. 5 shows the structure of the mini-xception model, in which sparsableConv denotes a depthwise separable convolution. In addition, the blocks are connected by residual connections (the plus signs in the figure) rather than the concat operation of the original Inception.
A depthwise separable convolution splits a traditional convolution operation into two steps. Suppose the original operation is a 3*3 convolution. The depthwise separable convolution first convolves the M input feature maps with M 3*3 convolution kernels one-to-one, without summing, generating M results; it then applies N ordinary 1*1 convolution kernels to the M results generated before, sums over them, and finally produces N results.
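The two-step operation described above can be sketched in NumPy (valid padding, no strides; a didactic loop implementation rather than an optimized one):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_kernels):
    """Depthwise separable convolution as described in the text.

    x:          (H, W, M) input with M feature maps.
    dw_kernels: (M, 3, 3) one 3*3 kernel per input channel (no summation).
    pw_kernels: (N, M)    N 1*1 kernels that sum over the M intermediate maps.
    Returns an (H-2, W-2, N) output (valid padding).
    """
    h, w, m = x.shape
    # Depthwise step: convolve each feature map with its own 3*3 kernel
    mid = np.zeros((h - 2, w - 2, m))
    for c in range(m):
        for i in range(h - 2):
            for j in range(w - 2):
                mid[i, j, c] = np.sum(x[i:i + 3, j:j + 3, c] * dw_kernels[c])
    # Pointwise step: N ordinary 1*1 convolutions that sum across the M channels
    return mid @ pw_kernels.T

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 8, 4))            # M = 4 input feature maps
out = depthwise_separable_conv(x, rng.normal(size=(4, 3, 3)), rng.normal(size=(6, 4)))
print(out.shape)                           # (6, 6, 6) -> N = 6 results
```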
The mini-xception model is trained with samples in advance. The identified facial images are input into the trained mini-xception model, and the in-class emotion matrix corresponding to the user's in-class emotion feature and the emotion change matrix corresponding to the user's emotion change feature can be obtained.
S208: normalize and regularize the facial action feature, the in-class emotion feature and the emotion change feature to generate an emotional state matrix;
Normalization is a dimensionless processing technique that turns the absolute values of a physical system into relative values; it can simplify calculation and reduce magnitudes. For example, after each frequency value in a filter is normalized by the cutoff frequency, every frequency becomes a dimensionless value relative to the cutoff frequency; after impedances are normalized by the internal resistance of the source, each impedance becomes a relative impedance value and the dimension "ohm" disappears.
Regularization: in linear algebra, an ill-posed problem is usually defined by a set of linear algebraic equations, and this set of equations is typically derived from an ill-posed inverse problem with a very large condition number. A large condition number means that rounding errors or other errors can severely affect the result of the problem.
Regularization adds constraints to the minimum empirical error function; such constraints can be interpreted as prior knowledge (a regularization parameter is equivalent to introducing a prior distribution over the parameters). The constraints have a guiding effect: when the error function is optimized, directions that reduce the gradient while satisfying the constraints tend to be selected, so that the final solution tends to conform to the prior knowledge (for example, an l1-norm prior, which indicates that the original problem is more likely to be simple; such optimization tends to produce solutions whose parameter values are small in magnitude, generally corresponding to smooth solutions with sparse parameters).
At the same time, regularization resolves the ill-posedness of the inverse problem: the generated solution exists, is unique and depends on the data; the influence of noise through the ill-posedness is weak, so the solution does not overfit; and if the prior (regularization) is appropriate, the solution tends toward the true solution, even if the mutually unrelated samples in the training set are few.
Suppose the facial action feature corresponds to matrix A, the in-class emotion feature corresponds to matrix B, and the emotion change feature corresponds to matrix C. A, B and C are combined, normalized and regularized to obtain the emotional state matrix D.
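One plausible reading of this combination step; the patent does not fix the exact operations, so min-max normalization of each matrix and unit-norm scaling of the result are assumptions made for illustration:

```python
import numpy as np

def emotional_state_matrix(A, B, C, eps=1e-8):
    """One plausible construction of the emotional state matrix D from S208.

    A, B, C: equally shaped matrices for the facial action feature, the
    in-class emotion feature and the emotion change feature. Each is
    min-max normalized to a dimensionless [0, 1] range, then the combined
    matrix is scaled to unit L2 norm as a simple regularizing step.
    (Both choices are assumptions; the text does not specify them.)
    """
    def normalize(m):
        m = np.asarray(m, dtype=float)
        return (m - m.min()) / (m.max() - m.min() + eps)

    D = np.hstack([normalize(A), normalize(B), normalize(C)])
    return D / (np.linalg.norm(D) + eps)

rng = np.random.default_rng(3)
A, B, C = (rng.normal(size=(10, 10)) for _ in range(3))
D = emotional_state_matrix(A, B, C)
print(D.shape)  # (10, 30)
```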
S209: classify the emotional state matrix using a gradient boosting algorithm, determine the target state category corresponding to the emotional state matrix in a state category set, and take the target state category as the class state of the user.
Specifically, the gradient boosting algorithm is one kind of classification algorithm. Classification algorithms solve classification learning problems. Common classification algorithms fall into single classification algorithms and ensemble learning algorithms that combine single classification methods. Single classification algorithms mainly include decision trees (e.g., Gradient Boosting Decision Tree, GBDT), Bayes, artificial neural networks, K-nearest neighbors, support vector machines and classification based on association rules; ensemble learning algorithms that combine single classification methods include Bagging, Boosting and XGBoost.
Taking XGBoost as an example, its basic idea is to first train a decision tree with an initial value. The leaves of the decision tree give predicted values, and the predictions leave residuals; each subsequent decision tree is then trained on the residuals of the preceding trees, until the residual between the predicted value and the true value is zero. The final predicted value for a test sample is simply the sum of the predicted values of all the preceding decision trees.
D is then used as the input of a pre-trained classifier (such as a gradient boosting algorithm (XGBoost)) to score the user's class state, and the class state of the user can then be determined based on preset threshold ranges.
The class state may include multiple states such as excellent, good, qualified and unqualified. Each state can be identified by a threshold range. For example, 0-59 points is unqualified, 60-70 is qualified, 71-85 is good, and 86-100 is excellent.
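The mapping from the classifier's score to a class state can be sketched as follows; the band boundaries follow the example above, and treating them as inclusive integer bands is an assumption:

```python
# Threshold ranges from the example above (assumed inclusive integer bands):
# 0-59 unqualified, 60-70 qualified, 71-85 good, 86-100 excellent.
STATE_BANDS = [(0, 59, "unqualified"), (60, 70, "qualified"),
               (71, 85, "good"), (86, 100, "excellent")]

def class_state(score):
    """Map the classifier's score for matrix D to a class state category."""
    for low, high, state in STATE_BANDS:
        if low <= score <= high:
            return state
    raise ValueError(f"score {score} outside 0-100")

print(class_state(78))  # good
```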
For example, as shown in Fig. 6, the emotional state matrix D is input into the classifier, and the output result is: class state = good.
It should be noted that an in-class video file can usually be regarded as containing only one teacher, i.e., each frame image contains only one facial image, and it is the same facial image throughout. Of course, if the in-class video file contains multiple facial images, these images need to be classified according to each facial image, and the class state of the user corresponding to each face is then determined separately in the above manner.
In one or more embodiments of the application, the key frame images in an in-class video file are obtained and parsed to obtain the facial images and facial action features in the key frame images; the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user are then obtained; and the class state of the user can be determined based on the facial action feature, the in-class emotion feature and the emotion change feature. Based on the identification and extraction of the facial action features, in-class emotion features and emotion change features of the facial images in the in-class video file, the class state of the user corresponding to the facial images can be determined without manually selecting indicators for calculation, which reduces the workload of manual screening; using a neural network combined with machine learning improves the accuracy of assessing the teaching state of teachers in online classes. Using a lightweight neural network and an efficient classification model can also increase the rate at which online class videos are processed, making it possible to handle large-scale classroom assessment tasks.
The following are apparatus embodiments of the application, which can be used to execute the method embodiments of the application. For details not disclosed in the apparatus embodiments of the application, please refer to the method embodiments of the application.
Refer to Fig. 7, which shows a schematic structural diagram of a class state detection apparatus provided by an exemplary embodiment of the application. The class state detection apparatus can be implemented as all or part of a terminal by software, hardware or a combination of both. The apparatus 1 includes a key frame image obtaining module 10, a first feature obtaining module 20, a second feature obtaining module 30 and a class state determining module 40.
The key frame image obtaining module 10 is used to obtain the key frame images in an in-class video file;
the first feature obtaining module 20 is used to parse the key frame images to obtain the facial images and facial action features in the key frame images;
the second feature obtaining module 30 is used to obtain the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user;
the class state determining module 40 is used to determine the class state of the user based on the facial action feature, the in-class emotion feature and the emotion change feature.
Optionally, as shown in Fig. 8, the key frame image obtaining module 10 includes:
a quality obtaining unit 101, used to obtain an in-class video file and obtain the image quality of each frame image in the in-class video file;
an image filtering unit 102, used to filter out the images whose image quality is less than a preset quality threshold in the in-class video file, to generate the key frame images.
Optionally, as shown in Fig. 9, the apparatus further includes:
a dimension reduction module 50, used to perform dimension reduction on the key frame images.
Optionally, as shown in Fig. 10, the first feature obtaining module 20 includes:
a face recognition unit 201, used to identify the facial images in the key frame images using a face recognition algorithm;
an action feature obtaining unit 202, used to parse the facial image to obtain the facial action feature corresponding to the facial image.
Optionally, as shown in Fig. 10, the first feature obtaining module 20 further includes:
a size adjusting unit 203, used to adjust the facial image to an image of a preset size.
Optionally, the second feature obtaining module 30 is specifically used to:
extract, using a mini-xception model, the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user.
Optionally, as shown in Fig. 11, the class state determining module 40 includes:
a matrix generating unit 401, used to normalize and regularize the facial action feature, the in-class emotion feature and the emotion change feature, to generate an emotional state matrix;
a matrix classifying unit 402, used to classify the emotional state matrix using a gradient boosting algorithm, determine the target state category corresponding to the emotional state matrix in a state category set, and take the target state category as the class state of the user.
It should be noted that when the class state detection apparatus provided by the above embodiment executes the class state detection method, the division into the above functional modules is only used as an example; in practical applications, the above functions can be allocated to different functional modules as needed, i.e., the internal structure of the device is divided into different functional modules to complete all or part of the functions described above. In addition, the class state detection apparatus provided by the above embodiment belongs to the same concept as the class state detection method embodiments; its implementation process is detailed in the method embodiments and is not repeated here.
The serial numbers of the above embodiments of the application are for description only and do not represent the merits of the embodiments.
The beneficial effects brought by the technical solutions provided by some embodiments of the application include at least the following:
In one or more embodiments of the application, the key frame images in an in-class video file are obtained and parsed to obtain the facial images and facial action features in the key frame images; the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user are then obtained; and the class state of the user can be determined based on the facial action feature, the in-class emotion feature and the emotion change feature. Based on the identification and extraction of the facial action features, in-class emotion features and emotion change features of the facial images in the in-class video file, the class state of the user corresponding to the facial images can be determined without manually selecting indicators for calculation, which reduces the workload of manual screening; using a neural network combined with machine learning improves the accuracy of assessing the teaching state of teachers in online classes. Using a lightweight neural network and an efficient classification model can also increase the rate at which online class videos are processed, making it possible to handle large-scale classroom assessment tasks.
An embodiment of the application also provides a computer storage medium. The computer storage medium can store a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the method steps of the embodiments shown in Figs. 1-6; for the specific execution process, refer to the description of the embodiments shown in Figs. 1-6, which is not repeated here.
Refer to Fig. 12, which provides a schematic structural diagram of an electronic device according to an embodiment of the application. As shown in Fig. 12, the electronic device 1000 may include: at least one processor 1001, at least one network interface 1004, a user interface 1003, a memory 1005 and at least one communication bus 1002.
The communication bus 1002 is used to realize connection and communication between these components.
The user interface 1003 may include a display screen (Display) and a camera (Camera); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
The processor 1001 may include one or more processing cores. The processor 1001 connects the various parts of the entire electronic device 1000 using various interfaces and lines, and performs the various functions of the electronic device 1000 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 1005 and calling the data stored in the memory 1005. Optionally, the processor 1001 can be realized in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA). The processor 1001 may integrate a combination of one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem and the like. The CPU mainly handles the operating system, the user interface, applications and the like; the GPU is responsible for rendering and drawing the content to be displayed on the display screen; the modem is used to handle wireless communication. It can be understood that the above modem may also not be integrated into the processor 1001 but be realized separately by a single chip.
The memory 1005 may include Random Access Memory (RAM) and may also include Read-Only Memory (ROM). Optionally, the memory 1005 includes a non-transitory computer-readable storage medium. The memory 1005 can be used to store instructions, programs, code, code sets or instruction sets. The memory 1005 may include a program storage area and a data storage area, where the program storage area can store instructions for realizing the operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for realizing the above method embodiments, etc.; the data storage area can store the data involved in the above method embodiments, etc. Optionally, the memory 1005 may also be at least one storage device located remotely from the aforementioned processor 1001. As shown in Fig. 12, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and a class state detection application.
In the electronic device 1000 shown in Fig. 12, the user interface 1003 is mainly used to provide an input interface for the user and obtain the data input by the user; and the processor 1001 can be used to call the class state detection application stored in the memory 1005 and specifically perform the following operations:
obtaining the key frame images in an in-class video file;
parsing the key frame images to obtain the facial images and facial action features in the key frame images;
obtaining the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user;
determining the class state of the user based on the facial action feature, the in-class emotion feature and the emotion change feature.
In one embodiment, when obtaining the key frame images in an in-class video file, the processor 1001 specifically performs the following operations:
obtaining the image quality of each frame image in the in-class video file;
filtering out the images whose image quality is less than a preset quality threshold in the in-class video file, to generate the key frame images.
In one embodiment, after obtaining the key frame images in the in-class video file and before parsing the key frame images, the processor 1001 also performs the following operation:
performing dimension reduction on the key frame images.
In one embodiment, when parsing the key frame images to obtain the facial images and facial action features in the key frame images, the processor 1001 specifically performs the following operations:
identifying the facial images in the key frame images using a face recognition algorithm;
parsing the facial image to obtain the facial action feature corresponding to the facial image.
In one embodiment, after identifying the facial images in the key frame images using the face recognition algorithm and before parsing the facial image, the processor 1001 also performs the following operation:
adjusting the facial image to an image of a preset size.
In one embodiment, when obtaining the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user, the processor 1001 specifically performs the following operation:
extracting, using a trained neural network model, the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user.
In one embodiment, when determining the class state of the user based on the facial action feature, the in-class emotion feature and the emotion change feature, the processor 1001 specifically performs the following operations:
normalizing and regularizing the facial action feature, the in-class emotion feature and the emotion change feature, to generate an emotional state matrix;
classifying the emotional state matrix using a gradient boosting algorithm, determining the target state category corresponding to the emotional state matrix in a state category set, and taking the target state category as the class state of the user.
The beneficial effects brought by the technical solutions provided by some embodiments of the application include at least the following:
In one or more embodiments of the application, the key frame images in an in-class video file are obtained and parsed to obtain the facial images and facial action features in the key frame images; the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user are then obtained; and the class state of the user can be determined based on the facial action feature, the in-class emotion feature and the emotion change feature. Based on the identification and extraction of the facial action features, in-class emotion features and emotion change features of the facial images in the in-class video file, the class state of the user corresponding to the facial images can be determined without manually selecting indicators for calculation, which reduces the workload of manual screening; using a neural network combined with machine learning improves the accuracy of assessing the teaching state of teachers in online classes. Using a lightweight neural network and an efficient classification model can also increase the rate at which online class videos are processed, making it possible to handle large-scale classroom assessment tasks.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. The storage medium can be a magnetic disk, an optical disk, a read-only memory, a random access memory, or the like.
What is disclosed above is only a preferred embodiment of the application and certainly cannot limit the scope of the claims of the application; therefore, equivalent changes made according to the claims of the application still fall within the scope covered by the application.
Claims (10)
1. A class state detection method, characterized in that the method includes:
obtaining the key frame images in an in-class video file;
parsing the key frame images to obtain the facial images and facial action features in the key frame images;
obtaining the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user;
determining the class state of the user based on the facial action feature, the in-class emotion feature and the emotion change feature.
2. The method according to claim 1, characterized in that obtaining the key frame images in an in-class video file includes:
obtaining the image quality of each frame image in the in-class video file;
filtering out the images whose image quality is less than a preset quality threshold in the in-class video file, to generate the key frame images.
3. The method according to claim 1, characterized in that after obtaining the key frame images in the in-class video file and before parsing the key frame images, the method further includes:
performing dimension reduction on the key frame images.
4. The method according to claim 1, characterized in that parsing the key frame images to obtain the facial images and facial action features in the key frame images includes:
identifying the facial images in the key frame images using a face recognition algorithm;
parsing the facial image to obtain the facial action feature corresponding to the facial image.
5. The method according to claim 4, characterized in that after identifying the facial images in the key frame images using the face recognition algorithm and before parsing the facial image, the method further includes:
adjusting the facial image to an image of a preset size.
6. The method according to claim 1, characterized in that obtaining the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user includes:
extracting, using a trained neural network model, the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user.
7. The method according to claim 1, characterized in that determining the class state of the user based on the facial action feature, the in-class emotion feature and the emotion change feature includes:
normalizing and regularizing the facial action feature, the in-class emotion feature and the emotion change feature, to generate an emotional state matrix;
classifying the emotional state matrix using a gradient boosting algorithm, determining the target state category corresponding to the emotional state matrix in a state category set, and taking the target state category as the class state of the user.
8. A class state detection apparatus, characterized in that the apparatus includes:
a key frame image obtaining module, used to obtain the key frame images in an in-class video file;
a first feature obtaining module, used to parse the key frame images to obtain the facial images and facial action features in the key frame images;
a second feature obtaining module, used to obtain the in-class emotion feature of the user corresponding to the facial image and the emotion change feature of the user;
a class state determining module, used to determine the class state of the user based on the facial action feature, the in-class emotion feature and the emotion change feature.
9. A computer storage medium, characterized in that the computer storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor to execute the method steps of any one of claims 1-7.
10. An electronic device, characterized in that it includes a processor and a memory, wherein the memory stores a computer program, and the computer program is suitable for being loaded by the processor to execute the method steps of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910495719.XA CN110363084A (en) | 2019-06-10 | 2019-06-10 | A kind of class state detection method, device, storage medium and electronics |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910495719.XA CN110363084A (en) | 2019-06-10 | 2019-06-10 | A kind of class state detection method, device, storage medium and electronics |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110363084A true CN110363084A (en) | 2019-10-22 |
Family
ID=68216852
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910495719.XA Pending CN110363084A (en) | 2019-06-10 | 2019-06-10 | A kind of class state detection method, device, storage medium and electronics |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110363084A (en) |
History
- 2019-06-10: CN application CN201910495719.XA filed; published as CN110363084A (status: Pending)
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101807198A (en) * | 2010-01-08 | 2010-08-18 | 中国科学院软件研究所 | Video abstraction generating method based on sketch |
CN102750383A (en) * | 2012-06-28 | 2012-10-24 | 中国科学院软件研究所 | Spiral abstract generation method oriented to video content |
CN107430780A (en) * | 2015-02-23 | 2017-12-01 | 柯达阿拉里斯股份有限公司 | The method created for the output based on video content characteristic |
CN106447184A (en) * | 2016-09-21 | 2017-02-22 | 中国人民解放军国防科学技术大学 | Unmanned aerial vehicle operator state evaluation method based on multi-sensor measurement and neural network learning |
CN107330420A (en) * | 2017-07-14 | 2017-11-07 | 河北工业大学 | The facial expression recognizing method of rotation information is carried based on deep learning |
CN107292289A (en) * | 2017-07-17 | 2017-10-24 | 东北大学 | Facial expression recognizing method based on video time sequence |
CN108563978A (en) * | 2017-12-18 | 2018-09-21 | 深圳英飞拓科技股份有限公司 | Mood detection method and device |
CN109657529A (en) * | 2018-07-26 | 2019-04-19 | 台州学院 | Classroom teaching effect evaluation system based on human facial expression recognition |
CN109784153A (en) * | 2018-12-10 | 2019-05-21 | 平安科技(深圳)有限公司 | Emotion identification method, apparatus, computer equipment and storage medium |
CN109784312A (en) * | 2019-02-18 | 2019-05-21 | 深圳锐取信息技术股份有限公司 | Teaching Management Method and device |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111666820B (en) * | 2020-05-11 | 2023-06-20 | 北京中广上洋科技股份有限公司 | Speaking state recognition method and device, storage medium and terminal |
CN111666820A (en) * | 2020-05-11 | 2020-09-15 | 北京中广上洋科技股份有限公司 | Speaking state recognition method and device, storage medium and terminal |
CN111967346A (en) * | 2020-07-28 | 2020-11-20 | 北京大米科技有限公司 | Video data processing method and device and electronic equipment |
CN112418068A (en) * | 2020-11-19 | 2021-02-26 | 中国平安人寿保险股份有限公司 | On-line training effect evaluation method, device and equipment based on emotion recognition |
CN112907408A (en) * | 2021-03-01 | 2021-06-04 | 北京安博创赢教育科技有限责任公司 | Method, device, medium and electronic equipment for evaluating learning effect of students |
CN113052088A (en) * | 2021-03-29 | 2021-06-29 | 北京大米科技有限公司 | Image processing method and device, readable storage medium and electronic equipment |
CN113744445A (en) * | 2021-09-06 | 2021-12-03 | 北京雷石天地电子技术有限公司 | Match voting method, device, computer equipment and storage medium |
CN113762156A (en) * | 2021-09-08 | 2021-12-07 | 北京优酷科技有限公司 | Viewing data processing method, device and storage medium |
CN113762156B (en) * | 2021-09-08 | 2023-10-24 | 北京优酷科技有限公司 | Video data processing method, device and storage medium |
CN114007105A (en) * | 2021-10-20 | 2022-02-01 | 浙江绿城育华教育科技有限公司 | Online course interaction method, device, equipment and storage medium |
CN114757499A (en) * | 2022-03-24 | 2022-07-15 | 慧之安信息技术股份有限公司 | Working quality analysis method based on deep learning |
CN114677751A (en) * | 2022-05-26 | 2022-06-28 | 深圳市中文路教育科技有限公司 | Learning state monitoring method, monitoring device and storage medium |
CN115019374A (en) * | 2022-07-18 | 2022-09-06 | 北京师范大学 | Intelligent classroom student concentration degree low-consumption detection method and system based on artificial intelligence |
CN115019374B (en) * | 2022-07-18 | 2022-10-11 | 北京师范大学 | Intelligent classroom student concentration degree low-consumption detection method and system based on artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110363084A (en) | Class state detection method and device, storage medium and electronic device | |
US20190087686A1 (en) | Method and apparatus for detecting human face | |
CN105426356B (en) | Target information recognition method and device | |
CN112346567B (en) | Virtual interaction model generation method and device based on AI (Artificial Intelligence) and computer equipment | |
CN109034069B (en) | Method and apparatus for generating information | |
CN107391760A (en) | User interest recognition methods, device and computer-readable recording medium | |
CN109919244B (en) | Method and apparatus for generating a scene recognition model | |
CN106611015B (en) | Label processing method and device | |
CN112863683B (en) | Medical record quality control method and device based on artificial intelligence, computer equipment and storage medium | |
CN111062389A (en) | Character recognition method and device, computer readable medium and electronic equipment | |
CN108924381B (en) | Image processing method, image processing apparatus, and computer readable medium | |
CN116229530A (en) | Image processing method, device, storage medium and electronic equipment | |
CN110717407A (en) | Human face recognition method, device and storage medium based on lip language password | |
CN111062440A (en) | Sample selection method, device, equipment and storage medium | |
CN113033912A (en) | Problem solving person recommendation method and device | |
CN112995690A (en) | Live content item identification method and device, electronic equipment and readable storage medium | |
CN115906861B (en) | Sentence emotion analysis method and device based on interaction aspect information fusion | |
CN111859933A (en) | Training method, recognition method, device and equipment of Malay recognition model | |
CN109657710B (en) | Data screening method and device, server and storage medium | |
CN110738261A (en) | Image classification and model training method and device, electronic equipment and storage medium | |
CN114170484B (en) | Picture attribute prediction method and device, electronic equipment and storage medium | |
CN115423031A (en) | Model training method and related device | |
CN112732908B (en) | Test question novelty evaluation method and device, electronic equipment and storage medium | |
CN111767710B (en) | Indonesia emotion classification method, device, equipment and medium | |
CN113947798A (en) | Background replacing method, device and equipment of application program and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2019-10-22 |