CN108376305A - Training quality appraisal procedure, device, equipment and medium - Google Patents

Training quality assessment method, apparatus, device, and medium

Info

Publication number
CN108376305A
CN108376305A (application CN201711486994.2A)
Authority
CN
China
Prior art keywords
student
class
training
attending class
attending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711486994.2A
Other languages
Chinese (zh)
Inventor
王希
陈捷
李金地
严东萍
姚文海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Group Fujian Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Group Fujian Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Group Fujian Co Ltd
Priority to CN201711486994.2A
Publication of CN108376305A
Legal status: Pending (current)


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management, e.g. organising, planning, scheduling or allocating time, human or machine resources; Enterprise planning; Organisational models
    • G06Q10/063Operations research or analysis
    • G06Q10/0639Performance analysis
    • G06Q10/06395Quality analysis or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62Methods or arrangements for recognition using electronic means
    • G06K9/6217Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06K9/6262Validation, performance evaluation or active pattern learning techniques
    • G06K9/6265Validation, performance evaluation or active pattern learning techniques based on a specific statistical test
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62Methods or arrangements for recognition using electronic means
    • G06K9/6288Fusion techniques, i.e. combining data from various sources, e.g. sensor fusion
    • G06K9/629Fusion techniques, i.e. combining data from various sources, e.g. sensor fusion of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06QDATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance

Abstract

Embodiments of the present invention provide a training quality assessment method, apparatus, device, and medium. The method includes: collecting facial features of students during class and obtaining the students' in-class attention level from the facial features; collecting acoustic features of the classroom during class and obtaining the students' in-class participation level from the acoustic features; collecting physiological parameters of students during class and obtaining the students' in-class interaction level from the physiological parameters; and inputting the obtained in-class attention, participation, and interaction levels into a pre-trained neural network to assess the training quality and obtain a training quality score.

Description

Training quality assessment method, apparatus, device, and medium
Technical field
The present invention relates to the field of computer technology, and in particular to a training quality assessment method, apparatus, device, and medium.
Background art
Classroom teaching remains the most basic and important form of teaching organization in higher education, and it is also the most important link in achieving talent-cultivation goals and in guaranteeing and improving education quality. Traditional methods of evaluating teaching quality generally include student evaluation, peer review, and expert evaluation. In student evaluation, it is common for students to score with emotion: a teacher who manages the class loosely and is not demanding of students, but whose examinations are easy to pass with good grades, tends to receive high scores, while teachers may in turn relax their requirements, or even curry favor with students, in order to be scored well. In expert evaluation, the experts of an evaluation center, in order to lighten their own workload, often judge an entire course from a teacher's performance in a single class, and they also tend to avoid offending anyone. Department leaders combine the roles of administrator, instructor, and researcher; they spend most of their time and energy on their own teaching, research, and management work, so teaching evaluation often becomes a mere formality.
In summary, there is a need in the related art for an intelligent and precise teaching quality assessment and feedback system.
Summary of the invention
Embodiments of the present invention provide a training quality assessment method, apparatus, device, and medium, so as to score training quality objectively.
In a first aspect, an embodiment of the present invention provides a training quality assessment method. The method includes:
collecting facial features of students during class and obtaining the students' in-class attention level from the facial features; and/or
collecting acoustic features of the classroom during class and obtaining the students' in-class participation level from the acoustic features; and/or
collecting physiological parameters of students during class and obtaining the students' in-class interaction level from the physiological parameters;
inputting at least one of the obtained in-class attention level, in-class participation level, and in-class interaction level into a pre-trained neural network to assess the training quality and obtain a training quality score.
Preferably, the step of collecting the facial features of students during class and obtaining the students' in-class attention level from the facial features specifically includes:
collecting video of students in class, extracting facial features, and, using a pre-trained first model, recognizing the number of times and the duration for which the students' eyes focus on the teacher or the blackboard;
obtaining the students' in-class attention level according to the recognized number of times and duration of eye focus on the teacher or the blackboard.
Further preferably, the first model is generated as follows:
collecting a certain amount of video of students in class;
cutting each segment of video, based on the time axis and according to a predetermined scheme, into small pictures of different sizes to obtain image information;
extracting, from the image information, features describing the number of times and the duration for which the students' eyes focus on the teacher or the blackboard;
performing offline training with the in-class videos and the number of times and duration of eye focus obtained from those videos as samples, to obtain the first model.
Further preferably, the features describing the number of times and duration for which the students' eyes focus on the teacher or the blackboard in the image information include at least one of: local binary pattern (LBP) features, scale-invariant feature transform (SIFT) features, and histogram of oriented gradients (HOG) features.
Still further preferably, the first model is a classifier.
Preferably, the step of collecting the acoustic features of the classroom during class and obtaining the students' in-class participation level from the acoustic features specifically includes:
collecting the sound of students during class, extracting sound features, and, using a pre-trained second model, determining the number and duration of language interactions between students and the teacher;
obtaining the students' in-class participation level according to the determined number and duration of student-teacher language interactions.
Still further preferably, the second model is generated as follows:
collecting the sound features of each student attending class, and mixing the students' sound features to obtain each student's sound features and a plurality of mixed-audio features;
performing MFCC cepstrogram training with each student's sound features and the mixed-audio features, and with the number and duration of that student's language interactions with the teacher as sample labels, to obtain the second model.
Still further preferably, the second model includes a Markov model and/or a multinomial classifier model.
Preferably, the step of collecting the physiological parameters of students during class and obtaining the students' in-class interaction level from the physiological parameters specifically includes:
collecting students' physiological parameter features during class, and obtaining the students' interaction frequency and duration through a pre-trained third model;
obtaining the students' in-class interaction level according to the obtained interaction frequency and duration.
Still further preferably, the third model is a classifier.
In a second aspect, an embodiment of the present invention provides a training quality assessment apparatus, which includes:
an in-class attention acquisition unit, configured to collect facial features of students during class and obtain the students' in-class attention level from the facial features; and/or
an in-class participation acquisition unit, configured to collect acoustic features of the classroom during class and obtain the students' in-class participation level from the acoustic features; and/or
an in-class interaction acquisition unit, configured to collect physiological parameters of students during class and obtain the students' in-class interaction level from the physiological parameters; and/or
a training quality score acquisition module, configured to input at least one of the obtained in-class attention level, in-class participation level, and in-class interaction level into a pre-trained neural network to assess the training quality and obtain a training quality score.
In a third aspect, an embodiment of the present invention provides a training quality assessment device, including at least one processor, at least one memory, and computer program instructions stored in the memory; when the computer program instructions are executed by the processor, the method of the first aspect in the above embodiments is implemented.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which computer program instructions are stored; when the computer program instructions are executed by a processor, the method of the first aspect in the above embodiments is implemented. The training quality assessment method, apparatus, device, and medium provided by the embodiments of the present invention collect facial features, acoustic features, and physiological parameters of students during class to obtain, respectively, the students' in-class attention, participation, and interaction levels, and then use a pre-trained neural network to output a training quality score directly from these levels, so that training quality is scored more objectively and accurately.
Description of the drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly introduced below. Those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a training quality assessment method according to one embodiment of the present invention;
Fig. 2 is a flowchart of a training quality assessment method according to another embodiment of the present invention;
Fig. 3 is a schematic structural diagram of a training quality assessment apparatus according to another embodiment of the present invention;
Fig. 4 is a schematic hardware diagram of a training quality assessment device according to another embodiment of the present invention.
Detailed description of embodiments
The features and exemplary embodiments of various aspects of the present invention are described in detail below. In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present invention, not to limit it. To those skilled in the art, the present invention may be practiced without some of these specific details. The following description of the embodiments is provided merely to give a better understanding of the present invention by showing examples of it.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes that element.
As shown in Fig. 1, one embodiment of the present invention provides a training quality assessment method, which includes the following steps:
S01: collect facial features of students during class and obtain the students' in-class attention level from the facial features.
S02: collect acoustic features of the classroom during class and obtain the students' in-class participation level from the acoustic features.
S03: collect physiological parameters of students during class and obtain the students' in-class interaction level from the physiological parameters.
S04: input the obtained in-class attention, participation, and interaction levels into a pre-trained neural network to assess the training quality and obtain a training quality score.
In this embodiment, the facial features, acoustic features, and physiological parameters collected from students during class yield, respectively, the students' in-class attention, participation, and interaction levels; a pre-trained neural network then outputs a training quality score directly from these levels, so that training quality is scored more objectively and accurately.
The order of steps S01, S02, and S03 is not fixed; they may be performed in any order.
The method of this embodiment may also include only any one or any two of steps S01, S02, and S03; in that case, in step S04, the corresponding one or two of the in-class attention, participation, and interaction levels are input into the pre-trained neural network to assess the training quality and obtain the training quality score.
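For illustration only, the following Python sketch shows how the indicators produced by steps S01-S03 might be assembled and passed to a pre-trained scoring model in step S04. The function and parameter names (assess_training_quality, score_model, and so on) are hypothetical and not part of this disclosure; any regression model with a predict() method, such as the RBF network sketched later in this description, could play the role of the neural network.

```python
import numpy as np

def assess_training_quality(score_model, attention=0.0, participation=0.0, interaction=0.0):
    """Step S04 (sketch): combine the in-class indicators and score them.

    Any subset of the three indicators may be supplied (the method allows
    using only one or two of steps S01-S03); missing ones default to 0.0.
    score_model is assumed to be a pre-trained regressor with predict().
    """
    features = np.array([[attention, participation, interaction]], dtype=float)
    return float(score_model.predict(features)[0])
```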
To make the training quality assessment method of this embodiment clearer, it is described in detail below with reference to the specific embodiment 2.
As shown in Fig. 2, another embodiment of the present invention provides a training quality assessment method, which includes the following steps:
S11: collect video of students in class, extract facial features, and, using a pre-trained first model, recognize the number of times and the duration for which the students' eyes focus on the teacher or the blackboard; obtain the students' in-class attention level from the recognized number of times and duration.
The in-class video of students may be captured with a camera.
The first model in this step is obtained through three stages: data collection, target feature representation, and target model construction. First, data collection gathers a certain amount of information for building the target model: based on the time axis, each complete video segment is divided according to a certain strategy and cut into small pictures of different sizes, so as to obtain enough image information within a time window. Second, target feature representation is the premise and basis for establishing the target model; the main methods include local binary pattern (LBP) features, the scale-invariant feature transform (SIFT), and histogram of oriented gradients (HOG) features. Finally, the target model is established on the basis of the above processing steps.
Specifically, the first model may be generated by the following steps:
collect a certain amount of in-class video of students, that is, obtain a certain amount of data;
based on the time axis, cut each segment of video into small pictures of different sizes according to a predetermined scheme, to obtain image information;
extract, from the image information, features describing the number of times and the duration for which the students' eyes focus on the teacher or the blackboard, i.e., the target feature representation; this step preferably uses LBP features, SIFT, and HOG features;
perform offline training with the in-class videos and the number of times and duration of eye focus obtained from those videos as samples, to obtain the first model. Preferably, the first model is a classifier, although a neural network may also be used.
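As a concrete illustration of the kind of first model described above, the sketch below assumes that grayscale face crops have already been cut from the in-class video and labeled as "eyes on the teacher/blackboard" or not. It uses HOG features from scikit-image and a linear SVM from scikit-learn; these libraries, the helper names, and the aggregation into an attention score are assumptions for illustration, not requirements of this embodiment.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def frame_features(frames):
    """HOG descriptors for a list of equally sized grayscale face crops."""
    return np.array([hog(f, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for f in frames])

def train_first_model(frames, labels):
    """Offline training of the 'first model'; label 1 = eyes on teacher/blackboard."""
    clf = LinearSVC()
    clf.fit(frame_features(frames), labels)
    return clf

def attention_level(clf, frames, frame_interval_s):
    """Aggregate per-frame predictions into the number of gaze episodes,
    their total duration, and a simple attention ratio in [0, 1]."""
    preds = clf.predict(frame_features(frames))
    episodes = int(np.sum(np.diff(np.concatenate(([0], preds))) == 1))
    duration_s = float(preds.sum() * frame_interval_s)
    total_s = len(frames) * frame_interval_s
    return episodes, duration_s, duration_s / max(total_s, 1e-9)
```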
S12: collect the sound of students during class, extract sound features, and, using a pre-trained second model, determine the number and duration of language interactions between students and the teacher; obtain the students' in-class participation level from the determined number and duration.
The sound of students during class may be captured by a classroom sound-pickup device.
This step preferably uses speaker recognition or voiceprint recognition technology. The second model may be generated as follows:
collect the sound features of each student attending class, and mix the students' sound features to obtain each student's sound features and a plurality of mixed-audio features;
perform MFCC cepstrogram training with each student's sound features and the mixed-audio features, and with the number and duration of that student's language interactions with the teacher as sample labels, to obtain the second model. Preferably, the second model includes a Markov model and a multinomial classifier model.
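For illustration, the sketch below builds a simple stand-in for the second model: MFCC statistics are computed for short audio clips (per-student or mixed), and a classifier flags clips containing student-teacher speech interaction, from which a count and duration follow. librosa and scikit-learn are assumed, a logistic-regression classifier stands in for the Markov/multinomial models named above, and the file paths and function names are hypothetical.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def mfcc_features(wav_path, sr=16000, n_mfcc=13):
    """Summarize a short clip by the mean and std of its MFCCs."""
    y, sr = librosa.load(wav_path, sr=sr)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

def train_second_model(clip_paths, is_interaction):
    """Train a clip-level detector of student-teacher speech interaction."""
    X = np.array([mfcc_features(p) for p in clip_paths])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, is_interaction)
    return clf

def participation_level(clf, clip_paths, clip_len_s):
    """Number of interaction clips and their total duration for one lesson."""
    preds = clf.predict(np.array([mfcc_features(p) for p in clip_paths]))
    return int(preds.sum()), float(preds.sum() * clip_len_s)
```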
S13: collect students' physiological parameter features during class and obtain the students' interaction frequency and duration through a pre-trained third model; obtain the students' in-class interaction level from the obtained interaction frequency and duration.
In this step, the physiological parameters may include electrocardiographic (ECG) features, and the ECG features may include indicators such as heartbeat and blood pressure. Students' ECG features during class can be captured with a student wristband, and the students' interaction frequency and duration are then determined from the ECG data through the pre-trained third model.
In this step, the third model may be a classifier, which may be generated as follows:
collect the ECG data of a certain number of students during class, and obtain, as samples, the interaction level of each student corresponding to that ECG data;
perform sample training on the students' ECG data and the corresponding interaction levels to obtain the classifier.
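A minimal sketch of a third-model classifier of the kind described above, assuming the wristband delivers per-window heart-rate and blood-pressure series and that each training window is labeled as interacting or not. The feature choices, the random-forest classifier, and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(heart_rate_bpm, systolic_bp):
    """Simple statistics of the wristband readings within one time window."""
    hr = np.asarray(heart_rate_bpm, dtype=float)
    bp = np.asarray(systolic_bp, dtype=float)
    slope = np.polyfit(np.arange(len(hr)), hr, 1)[0]  # heart-rate trend
    return np.array([hr.mean(), hr.std(), np.ptp(hr), bp.mean(), bp.std(), slope])

def train_third_model(windows, interacting):
    """windows: list of (heart_rate_series, blood_pressure_series) pairs;
    interacting: 1 if the student interacted during the window, else 0."""
    X = np.array([window_features(hr, bp) for hr, bp in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, interacting)
    return clf

def interaction_level(clf, windows, window_len_s):
    """Interaction frequency (flagged windows) and total interaction duration."""
    preds = clf.predict(np.array([window_features(hr, bp) for hr, bp in windows]))
    return int(preds.sum()), float(preds.sum() * window_len_s)
```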
S14: input the obtained in-class attention, participation, and interaction levels into a pre-trained neural network to assess the training quality and obtain a training quality score.
In this step, the information produced by the eye recognition, acoustic recognition, and ECG recognition of the above steps is vectorized to form base data, which is combined with questionnaire results serving as labeled prior data to train the neural network.
The neural network used in this embodiment is a radial basis function (RBF) neural network. Such a network is a single-hidden-layer feedforward neural network that uses radial basis functions as the activation functions of its hidden neurons, while the output layer is a linear combination of the hidden-neuron outputs. Assuming the input is a $d$-dimensional vector $x$ and the output is a real value, the RBF network can be expressed as
$$f(x) = \sum_{i=1}^{q} w_i \, \rho(x, c_i)$$
where $q$ is the number of hidden neurons, $c_i$ and $w_i$ are the center and weight corresponding to the $i$-th hidden neuron, and $\rho(x, c_i)$ is the radial basis function; a Gaussian radial basis function is used here, as follows:
$$\rho(x, c_i) = \exp\!\left(-\beta_i \lVert x - c_i \rVert^2\right)$$
where $\beta_i$ is a width parameter of the $i$-th hidden neuron.
Finally, the real value output by the neural network of this embodiment is the classroom teaching quality score, which is used to assess classroom quality.
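For concreteness, the sketch below implements an RBF network of exactly the form given above with NumPy and scikit-learn. Choosing the centers $c_i$ by k-means, fitting the weights $w_i$ by least squares, and sharing a single width beta across neurons are illustrative assumptions, not requirements of this embodiment; the input rows are [attention, participation, interaction] vectors and the targets are questionnaire-based quality scores used as labeled prior data.

```python
import numpy as np
from sklearn.cluster import KMeans

class RBFNetwork:
    """f(x) = sum_i w_i * exp(-beta * ||x - c_i||^2), as in the formula above."""

    def __init__(self, q=10, beta=1.0):
        self.q, self.beta = q, beta

    def _hidden(self, X):
        # Gaussian radial basis activations of the q hidden neurons.
        d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-self.beta * d2)

    def fit(self, X, y):
        X = np.asarray(X, dtype=float)
        # Centers c_i from k-means on the base data, weights w_i by least squares.
        self.centers = KMeans(n_clusters=self.q, n_init=10,
                              random_state=0).fit(X).cluster_centers_
        self.w, *_ = np.linalg.lstsq(self._hidden(X), np.asarray(y, dtype=float),
                                     rcond=None)
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        return self._hidden(X) @ self.w

# Usage sketch (hypothetical data): each row of X is one lesson's
# [attention, participation, interaction] vector, y its questionnaire score.
# net = RBFNetwork(q=8).fit(X_train, y_train)
# quality_score = float(net.predict([[0.8, 0.6, 0.7]])[0])
```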
In this embodiment, the facial features, acoustic features, and physiological parameters collected from students during class yield, respectively, the students' in-class attention, participation, and interaction levels; the pre-trained neural network then outputs the training quality score directly from these levels, so that training quality is scored more objectively and accurately.
As shown in Fig. 3, another embodiment of the present invention provides a training quality assessment apparatus, which includes an in-class attention acquisition unit, an in-class participation acquisition unit, an in-class interaction acquisition unit, and a training quality score acquisition module. The in-class attention acquisition unit is configured to collect facial features of students during class and obtain the students' in-class attention level from the facial features; and/or the in-class participation acquisition unit is configured to collect acoustic features of the classroom during class and obtain the students' in-class participation level from the acoustic features; and/or the in-class interaction acquisition unit is configured to collect physiological parameters of students during class and obtain the students' in-class interaction level from the physiological parameters. The training quality score acquisition module is configured to input at least one of the obtained in-class attention, participation, and interaction levels into a pre-trained neural network to assess the training quality and obtain a training quality score.
The in-class attention acquisition unit includes a first collection module 301 and an in-class attention obtaining module 304.
Specifically, the first collection module 301 is configured to collect video of students during class and obtain the students' facial features; the first collection module 301 is preferably a camera. The in-class attention obtaining module 304 is configured to obtain the students' in-class attention level from the facial features collected by the first collection module 301.
The in-class participation acquisition unit includes a second collection module 302 and an in-class participation obtaining module 305.
Specifically, the second collection module 302 is configured to collect the acoustic features of the classroom during class; the second collection module 302 is preferably a sound-pickup device. The in-class participation obtaining module 305 is configured to obtain the students' in-class participation level from the acoustic features collected by the second collection module 302.
The in-class interaction acquisition unit includes a third collection module 303 and an in-class interaction obtaining module 306.
Specifically, the third collection module 303 is configured to collect the physiological parameters of students during class; the third collection module 303 is preferably a smart wristband, and the physiological parameters are preferably ECG features, such as heartbeat and blood pressure. The in-class interaction obtaining module 306 is configured to obtain the students' in-class interaction level from the physiological parameters collected by the third collection module 303.
The training quality score acquisition module 307 is configured to input the obtained in-class attention, participation, and interaction levels into a pre-trained neural network to assess the training quality and obtain a training quality score.
Preferably, the training quality assessment apparatus in this embodiment further includes a first model training module, configured to: collect a certain amount of in-class video of students, that is, a certain amount of data; based on the time axis, cut each segment of video into small pictures of different sizes according to a predetermined scheme to obtain image information; extract, from the image information, features describing the number of times and the duration for which the students' eyes focus on the teacher or the blackboard, i.e., the target feature representation, preferably using LBP features, SIFT, and HOG features; and perform offline training with the in-class videos and the number of times and duration of eye focus obtained from those videos as samples, to obtain the first model. Preferably, the first model is a classifier, although a neural network may also be used.
Preferably, the training quality assessment apparatus in this embodiment further includes a second model training module, configured to: collect the sound features of each student attending class; mix the students' sound features to obtain each student's sound features and a plurality of mixed-audio features; and perform MFCC cepstrogram training with each student's sound features and the mixed-audio features, and with the number and duration of that student's language interactions with the teacher as sample labels, to obtain the second model. Preferably, the second model includes a Markov model and a multinomial classifier model.
Preferably, the training quality assessment apparatus in this embodiment further includes a third model training module. The third model may be a classifier, generated as follows: collect the ECG data of a certain number of students during class, and obtain, as samples, the interaction level of each student corresponding to that ECG data; then perform sample training on the students' ECG data and the corresponding interaction levels to obtain the classifier.
Preferably, the training quality assessment apparatus in this embodiment further includes a neural network generation module, configured to vectorize the information produced by eye recognition, acoustic recognition, and ECG recognition to form base data, and to train the neural network in combination with questionnaire results serving as labeled prior data.
The neural network is a radial basis function (RBF) neural network. Such a network is a single-hidden-layer feedforward neural network that uses radial basis functions as the activation functions of its hidden neurons, while the output layer is a linear combination of the hidden-neuron outputs. Assuming the input is a $d$-dimensional vector $x$ and the output is a real value, the RBF network can be expressed as
$$f(x) = \sum_{i=1}^{q} w_i \, \rho(x, c_i)$$
where $q$ is the number of hidden neurons, $c_i$ and $w_i$ are the center and weight corresponding to the $i$-th hidden neuron, and $\rho(x, c_i)$ is the radial basis function; a Gaussian radial basis function is used here:
$$\rho(x, c_i) = \exp\!\left(-\beta_i \lVert x - c_i \rVert^2\right)$$
where $\beta_i$ is a width parameter of the $i$-th hidden neuron.
Finally, the real value output by the neural network of this embodiment is the classroom teaching quality score, which is used to assess classroom quality.
In this embodiment, the training quality assessment apparatus can collect facial features, acoustic features, and physiological parameters of students during class to obtain, respectively, the students' in-class attention, participation, and interaction levels, and then use a pre-trained neural network to output the training quality score directly from these levels, so that training quality is scored more objectively and accurately.
Another embodiment of the present invention provides a training quality assessment device; the training quality assessment method of embodiment 1 or 2 can be implemented by this device. Fig. 4 shows a schematic hardware structure diagram of the training quality assessment device provided by an embodiment of the present invention.
The training quality assessment device may include a processor 401 and a memory 402 storing computer program instructions.
Specifically, the processor 401 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present invention.
The memory 402 may include mass storage for data or instructions. By way of example and not limitation, the memory 402 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disc, a magneto-optical disc, magnetic tape, a Universal Serial Bus (USB) drive, or a combination of two or more of these. Where appropriate, the memory 402 may include removable or non-removable (or fixed) media. Where appropriate, the memory 402 may be internal or external to the data processing apparatus. In a particular embodiment, the memory 402 is a non-volatile solid-state memory. In a particular embodiment, the memory 402 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), flash memory, or a combination of two or more of these.
The processor 401 reads and executes the computer program instructions stored in the memory 402 to implement the training quality assessment method of any of the above embodiments.
In one example, the training quality assessment device may further include a communication interface 403 and a bus 410. As shown in Fig. 4, the processor 401, the memory 402, and the communication interface 403 are connected via the bus 410 and communicate with one another.
The communication interface 403 is mainly used to implement communication among the modules, apparatuses, units, and/or devices in the embodiments of the present invention.
The bus 410 includes hardware, software, or both, and couples the components of the training quality assessment device to one another. By way of example and not limitation, the bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a front-side bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCI-X) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), another suitable bus, or a combination of two or more of these. Where appropriate, the bus 410 may include one or more buses. Although specific buses have been described and illustrated in the embodiments of the present invention, the present invention contemplates any suitable bus or interconnect.
Another embodiment of the present invention provides a computer-readable storage medium on which computer program instructions are stored; when executed by a processor, the computer program instructions implement the training quality assessment method of embodiment 1 or 2 above.
It should be understood that the present invention is not limited to the specific configurations and processing described above and shown in the figures. For brevity, detailed descriptions of known methods are omitted here. In the above embodiments, several specific steps are described and illustrated as examples, but the method and process of the present invention are not limited to those specific steps; those skilled in the art may make various changes, modifications, and additions, or change the order of the steps, after understanding the spirit of the present invention.
The functional blocks shown in the structural block diagrams described above may be implemented as hardware, software, firmware, or a combination thereof. When implemented in hardware, they may be, for example, electronic circuits, application-specific integrated circuits (ASICs), appropriate firmware, plug-ins, function cards, and so on. When implemented in software, the elements of the present invention are programs or code segments used to perform the required tasks. The programs or code segments may be stored in a machine-readable medium, or transmitted over a transmission medium or communication link by a data signal carried in a carrier wave. A "machine-readable medium" may include any medium capable of storing or transmitting information. Examples of machine-readable media include electronic circuits, semiconductor memory devices, ROM, flash memory, erasable ROM (EROM), floppy disks, CD-ROMs, optical discs, hard disks, optical fiber media, radio-frequency (RF) links, and so on. The code segments may be downloaded via computer networks such as the Internet or an intranet.
It should also be noted that the exemplary embodiments mentioned in the present invention describe some methods or systems based on a series of steps or apparatuses. However, the present invention is not limited to the order of the above steps; the steps may be performed in the order mentioned in the embodiments, in a different order, or several steps may be performed simultaneously.
The above is only a specific embodiment of the present invention. It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the systems, modules, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. It should be understood that the protection scope of the present invention is not limited thereto; any person skilled in the art can easily conceive of various equivalent modifications or substitutions within the technical scope disclosed by the present invention, and these modifications or substitutions shall fall within the protection scope of the present invention.

Claims (13)

1. A training quality assessment method, characterized in that the method comprises:
collecting facial features of students during class and obtaining the students' in-class attention level from the facial features; and/or
collecting acoustic features of the classroom during class and obtaining the students' in-class participation level from the acoustic features; and/or
collecting physiological parameters of students during class and obtaining the students' in-class interaction level from the physiological parameters;
inputting at least one of the obtained in-class attention level, in-class participation level, and in-class interaction level into a pre-trained neural network to assess the training quality and obtain a training quality score.
2. The method according to claim 1, characterized in that the step of collecting the facial features of students during class and obtaining the students' in-class attention level from the facial features specifically comprises:
collecting video of students in class, extracting facial features, and, using a pre-trained first model, recognizing the number of times and the duration for which the students' eyes focus on the teacher or the blackboard;
obtaining the students' in-class attention level according to the recognized number of times and duration of eye focus on the teacher or the blackboard.
3. The method according to claim 2, characterized in that the first model is generated as follows:
collecting a certain amount of video of students in class;
cutting each segment of video, based on the time axis and according to a predetermined scheme, into small pictures of different sizes to obtain image information;
extracting, from the image information, features describing the number of times and the duration for which the students' eyes focus on the teacher or the blackboard;
performing offline training with the in-class videos and the number of times and duration of eye focus obtained from those videos as samples, to obtain the first model.
4. The method according to claim 3, characterized in that the features describing the number of times and duration for which the students' eyes focus on the teacher or the blackboard in the image information comprise:
at least one of local binary pattern features, scale-invariant feature transform features, and histogram of oriented gradients features.
5. The method according to claim 3, characterized in that the first model is a classifier.
6. The method according to claim 1, characterized in that the step of collecting the acoustic features of the classroom during class and obtaining the students' in-class participation level from the acoustic features specifically comprises:
collecting the sound of students during class, extracting sound features, and, using a pre-trained second model, determining the number and duration of language interactions between students and the teacher;
obtaining the students' in-class participation level according to the determined number and duration of student-teacher language interactions.
7. The method according to claim 6, characterized in that the second model is generated as follows:
collecting the sound features of each student attending class, and mixing the students' sound features to obtain each student's sound features and a plurality of mixed-audio features;
performing MFCC cepstrogram training with each student's sound features and the mixed-audio features, and with the number and duration of that student's language interactions with the teacher as sample labels, to obtain the second model.
8. The method according to claim 7, characterized in that the second model comprises a Markov model and/or a multinomial classifier model.
9. The method according to claim 1, characterized in that the step of collecting the physiological parameters of students during class and obtaining the students' in-class interaction level from the physiological parameters specifically comprises:
collecting students' physiological parameter features during class, and obtaining the students' interaction frequency and duration through a pre-trained third model;
obtaining the students' in-class interaction level according to the obtained interaction frequency and duration.
10. The method according to claim 9, characterized in that the third model is a classifier.
11. A training quality assessment apparatus, characterized by comprising:
an in-class attention acquisition unit, configured to collect facial features of students during class and obtain the students' in-class attention level from the facial features; and/or
an in-class participation acquisition unit, configured to collect acoustic features of the classroom during class and obtain the students' in-class participation level from the acoustic features; and/or
an in-class interaction acquisition unit, configured to collect physiological parameters of students during class and obtain the students' in-class interaction level from the physiological parameters;
a training quality score acquisition module, configured to input at least one of the obtained in-class attention level, in-class participation level, and in-class interaction level into a pre-trained neural network to assess the training quality and obtain a training quality score.
12. A training quality assessment device, characterized by comprising: at least one processor, at least one memory, and computer program instructions stored in the memory, wherein the method according to any one of claims 1-10 is implemented when the computer program instructions are executed by the processor.
13. A computer-readable storage medium on which computer program instructions are stored, characterized in that the method according to any one of claims 1-10 is implemented when the computer program instructions are executed by a processor.
CN201711486994.2A 2017-12-30 2017-12-30 Training quality appraisal procedure, device, equipment and medium Pending CN108376305A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711486994.2A CN108376305A (en) 2017-12-30 2017-12-30 Training quality appraisal procedure, device, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711486994.2A CN108376305A (en) 2017-12-30 2017-12-30 Training quality appraisal procedure, device, equipment and medium

Publications (1)

Publication Number Publication Date
CN108376305A true CN108376305A (en) 2018-08-07

Family

ID=63016353

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711486994.2A Pending CN108376305A (en) 2017-12-30 2017-12-30 Training quality appraisal procedure, device, equipment and medium

Country Status (1)

Country Link
CN (1) CN108376305A (en)


Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120264095A1 (en) * 2006-12-29 2012-10-18 Industrial Technology Research Institute Emotion abreaction device and using method of emotion abreaction device
CN103959299A (en) * 2011-09-28 2014-07-30 谷歌公司 Login to a computing device based on facial recognition
CN104008678A (en) * 2014-06-06 2014-08-27 杨安康 Intelligent network terminal for collecting and encrypting classroom teaching multimedia information in real time and working method
CN104408781A (en) * 2014-12-04 2015-03-11 重庆晋才富熙科技有限公司 Concentration attendance system
CN106878677A (en) * 2017-01-23 2017-06-20 西安电子科技大学 Student classroom Grasping level assessment system and method based on multisensor
CN106851216A (en) * 2017-03-10 2017-06-13 山东师范大学 A kind of classroom behavior monitoring system and method based on face and speech recognition
CN107122789A (en) * 2017-03-14 2017-09-01 华南理工大学 The study focus analysis method of multimodal information fusion based on depth camera
CN107316257A (en) * 2017-06-06 2017-11-03 南京信息工程大学 A kind of Method of Teaching Quality Evaluation analyzed based on classroom students ' behavior and system
CN107316261A (en) * 2017-07-10 2017-11-03 湖北科技学院 A kind of Evaluation System for Teaching Quality based on human face analysis
CN107527159A (en) * 2017-09-20 2017-12-29 江苏经贸职业技术学院 One kind teaching quantitative estimation method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109359613A (en) * 2018-10-29 2019-02-19 四川文轩教育科技有限公司 A kind of teaching process analysis method based on artificial intelligence
CN110443122A (en) * 2019-06-26 2019-11-12 深圳市天彦通信股份有限公司 Information processing method and Related product
CN110334810A (en) * 2019-07-10 2019-10-15 福州大学 MOOC corpse course recognition methods based on machine learning
CN112115756A (en) * 2020-03-22 2020-12-22 张冬梅 Block chain management platform for content analysis

Similar Documents

Publication Publication Date Title
CN108376305A (en) Training quality appraisal procedure, device, equipment and medium
US9262941B2 (en) Systems and methods for assessment of non-native speech using vowel space characteristics
US20150262496A1 (en) Multimedia educational content delivery with identity authentication and related compensation model
Kerkeni et al. Automatic speech emotion recognition using an optimal combination of features based on EMD-TKEO
Jain et al. Computational diagnosis of learning disability
Benba et al. Voiceprint analysis using Perceptual Linear Prediction and Support Vector Machines for detecting persons with Parkinson’s disease
Massaro et al. Bayes factor of model selection validates FLMP
Shu et al. An item response theory analysis of problem-solving processes in scenario-based tasks
Sanaullah et al. Distinguishing deceptive speech from truthful speech using MFCC
Mittelmann Personal knowledge management as basis for successful organizational knowledge management in the digital age
Schwella Knowledge based governance, governance as learning: the leadership implications
CN110457432A (en) Interview methods of marking, device, equipment and storage medium
CN110197658A (en) Method of speech processing, device and electronic equipment
CN109697982A (en) A kind of speaker speech recognition system in instruction scene
Barouch-Gilbert A policy discourse analysis of academic probation
Ramachandran et al. Vertical Integration of Biometrics Across the Curriculum: Case Study of Speaker, Face and Iris Recognition
Doleck et al. Examining diagnosis paths: a process mining approach
Drigas et al. ICTs as a Distinct Detection Approach for Dyslexia Screening: A Contemporary View
CN109791616A (en) Automatic speech recognition
Schlotterbeck et al. What classroom audio tells about teaching: a cost-effective approach for detection of teaching practices using spectral audio features
Pepino et al. Detecting Distrust Towards the Skills of a Virtual Assistant Using Speech
Kisker Key Resources for student affairs professionals in learning‐centered community colleges
Sedelmaier et al. Systematic evolution of a learning setting for requirements engineering education based on competence-oriented didactics
Samonte et al. Assistive Mobile App for Children with Hearing & Speech Impairment Using Character and Speech Recognition
Atkinson et al. Measuring Up: Benefits and Trends in Performance Measurement Technologies

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination