CN110808066B - Teaching environment safety analysis method - Google Patents
- Publication number
- CN110808066B (application CN201911061807.5A)
- Authority
- CN
- China
- Prior art keywords
- curve
- preset
- teaching
- abnormal
- student
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
- G06Q10/063—Operations research, analysis or management
- G06Q10/0635—Risk analysis of enterprise or organisation activities
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
Abstract
The invention provides a teaching environment safety analysis method, comprising the following steps: acquiring classroom video and audio data of a first teaching site within a first preset duration; performing voice feature extraction on the classroom teaching video and audio data; judging, according to the extraction result, whether restrictive language exists in the classroom teaching video and audio data; generating first prompt information when restrictive language exists; acquiring emotion information according to the class attendance video and audio data; acquiring the number of abnormal student individuals according to the emotion information; acquiring environmental information parameters; and judging, according to the number of abnormal student individuals and the corresponding environmental information parameters, whether the environmental information parameters influence that number. The method thereby detects verbal violence in the teaching environment and the environmental conditions of the teaching site, and analyzes the safety of the teaching environment, so that adverse conditions can be improved and the teaching effect enhanced.
Description
Technical Field
The invention relates to the field of data processing, in particular to a teaching environment safety analysis method.
Background
In recent years, when students or other persons in a school intentionally abuse language, physical force, etc. during educational activities and commit invasive acts against the physiology, psychology, reputation, rights or property of teachers and students, such acts are all counted as campus violence.
Campus violence greatly affects students' mental health and the normal teaching order. If students are exposed to soft or hard violence in the teaching environment for a long time, they may suffer low mood and psychological depression, and may even drop out of school.
However, during teaching, campus violence is usually discovered subjectively by teachers or family members: for example, a teacher observes that a student is unfocused and not listening in class, feeds this back to the parents, and the parents communicate with the student to find out the reason.
Disclosure of Invention
The embodiments of the invention aim to provide a teaching environment safety analysis method, to solve the problem in the prior art that whether a teaching environment has problems is determined through people's subjective observations during teaching, which leads to one-sided analysis.
In order to solve the above problem, in a first aspect, the present invention provides a teaching environment safety analysis method, including:
acquiring classroom video and audio data of a first teaching site within a first preset time length; the classroom video and audio data comprise classroom teaching video and audio data and classroom listening video and audio data;
carrying out voice feature extraction on the classroom teaching video-audio data;
judging whether restrictive language exists in the classroom teaching video-audio data or not according to the extraction result;
when the restrictive language exists, generating first prompt information;
acquiring emotion information according to the class attendance video and audio data;
acquiring the number of abnormal student individuals according to the emotion information;
acquiring an environment information parameter;
and judging the influence of the environmental information parameters on the number of the abnormal student individuals according to the number of the abnormal student individuals and the corresponding environmental information parameters.
In a possible implementation manner, the determining whether a restrictive language exists in the classroom teaching video-audio data according to the extraction result specifically includes:
preprocessing a voice signal in the classroom teaching video-audio data to obtain a first digital signal;
performing feature extraction on the first digital signal to obtain feature parameters;
scoring, through an acoustic model, a language model and a pronunciation dictionary, the similarity between the characteristic parameters and reference templates in a pre-built restrictive language model library;
and determining, according to the score, whether restrictive language exists in the classroom teaching video and audio data.
In a possible implementation manner, the obtaining the number of abnormal student individuals according to the emotion information specifically includes:
acquiring emotion information of each student when a first subject is taught at the first teaching site within a second preset duration, the second preset duration being a period within the first preset duration;
setting a score for each emotion according to the emotion information to obtain a first curve of the change of the emotion of each student along with time;
calculating a first number of times that the change rate of the first curve is greater than a preset first change rate threshold;
when the first number of times is greater than a preset first number-of-times threshold, determining that the student is an abnormal student individual;
and obtaining the number of the abnormal student individuals in the first teaching place in the first preset time according to the number of the abnormal student individuals in the first subject in the second preset time.
In a possible implementation manner, the obtaining the number of abnormal student individuals according to the emotion information specifically includes:
acquiring the duration and the number of times for which the emotion information of a student individual is an abnormal emotion;
and when the duration is greater than a preset first duration threshold and the number of times is greater than a preset first number-of-times threshold, determining that the student is an abnormal student individual.
In one possible implementation, the abnormal emotion includes: sadness, crying, and anger.
In one possible implementation, after generating the first prompt message when the restrictive language exists, the method further includes:
calculating the number of the first prompt messages;
and when the number of the first prompt messages is larger than a preset first number threshold value, generating first early warning messages.
In a possible implementation manner, after generating the first prompt information when the restrictive language exists, the method further includes:
generating a first keyword according to the first prompt message;
and adding the first keyword into the classroom video and audio data according to the timestamp of the first prompt message.
In one possible implementation, the method further includes:
acquiring classroom behavior information of individual students;
according to the classroom behavior information, calculating classroom interaction participation of the first teaching site in each subject;
comparing the classroom interaction participation of each subject with a preset participation threshold;
and determining whether the first teaching site has teaching environment defects or not according to the comparison result.
In a possible implementation manner, the environmental information parameters include decibels, temperature, humidity and light intensity of the first teaching place, and the determining the influence of the environmental information parameters on the number of the abnormal student individuals specifically includes, according to the number of the abnormal student individuals and the corresponding environmental information parameters:
when, within the first preset duration, the difference between the decibel level and a preset decibel threshold, the difference between the temperature and a preset temperature threshold, the difference between the humidity and a preset humidity threshold and the difference between the light intensity and a preset light intensity threshold at each moment are all within a preset range, and the number of abnormal student individuals jumps compared with the previous moment, determining that the environmental information parameters have no influence on the number of abnormal student individuals, and generating second prompt information indicating that the environmental information parameters have no influence on the number of abnormal student individuals;
when, within the first preset duration, the number of abnormal student individuals at the current moment jumps compared with the previous moment, if a first number of the difference between the decibel level and the preset decibel threshold, the difference between the temperature and the preset temperature threshold, the difference between the humidity and the preset humidity threshold and the difference between the light intensity and the preset light intensity threshold fall outside the preset range, determining that the environmental information parameters influence the number of abnormal student individuals, and generating third prompt information indicating that the environmental information parameters influence the number of abnormal student individuals.
In a possible implementation manner, the environmental information parameters include decibels, temperature, humidity and light intensity of the first teaching place, and the determining the influence of the environmental information parameters on the number of the abnormal student individuals specifically includes, according to the number of the abnormal student individuals and the corresponding environmental information parameters:
generating a decibel curve, a temperature curve, a humidity curve and a light intensity curve under the same coordinate system according to the decibel, the temperature, the humidity and the light intensity of the first preset time;
obtaining a difference curve of the decibel curve, a difference curve of the temperature curve, a difference curve of the humidity curve and a difference curve of the light intensity curve according to the differences of the decibel curve, the temperature curve, the humidity curve and the light intensity curve from a preset decibel curve, a preset temperature curve, a preset humidity curve and a preset light intensity curve respectively;
when the number of abnormal student individuals at the current moment jumps compared with the previous moment, judging whether the difference curve of the decibel curve, the difference curve of the temperature curve, the difference curve of the humidity curve or the difference curve of the light intensity curve has a mutation;
when no mutation exists, determining that the environmental information parameters have no influence on the number of abnormal individual students;
and when the sudden change exists and the sudden change is consistent with the jump of the number of the abnormal student individuals, determining that the environmental information parameters have influence on the number of the abnormal student individuals.
In a second aspect, the invention provides an apparatus comprising a memory for storing a program and a processor for performing the method of any of the first aspects.
In a third aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method according to any one of the first aspect.
In a fourth aspect, the invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of the first aspects.
By applying the teaching environment safety analysis method provided by the embodiments of the invention, verbal violence in the teaching environment and the environmental conditions of the teaching site are detected and the safety of the teaching environment is analyzed, so that adverse environments can be improved and the teaching effect enhanced.
Drawings
Fig. 1 is a schematic flow chart of a teaching environment safety analysis method according to an embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The terms "first", "second", "third", "fourth", etc. are used only for distinction and carry no other meaning.
Fig. 1 is a schematic flow chart of a teaching environment safety analysis method according to an embodiment of the present invention. The method is applied to a teaching scenario; teaching environment safety analysis mainly comprises analysis of soft violence in teaching, environmental parameters, etc., to ensure the safety of the teaching environment. The method is executed by a device with processing capability, such as a server, a processor or a terminal. As shown in fig. 1, the method comprises the following steps:
Step 101, acquiring classroom video and audio data of a first teaching site within a first preset duration.

The classroom video and audio data comprise classroom teaching video and audio data and class attendance video and audio data. For subsequent horizontal and vertical comparisons, the lecture site may be set as the first teaching site. The first teaching site can be a classroom A of a certain school, and the school can be any kind of school such as a primary school, middle school, university or vocational school. The subjects taught at the first teaching site include, but are not limited to, conventional subjects such as Chinese, mathematics and English, or professional subjects such as analog electronics and computer fundamentals. The first preset duration may be three months, one month, one week, one day, etc., which is not limited in this application.
Specifically, in the classroom, video and audio data can be obtained through the recording-and-broadcasting system, including classroom teaching video and audio data of the teacher giving the lesson and class attendance video and audio data of the students listening. The classroom teaching video and audio data comprise lecturing video and audio data, interactive video and audio data, and question-answering video and audio data.
The classroom is equipped with a recording-and-broadcasting system that tracks, records and broadcasts targets during teaching. For example, classroom A is provided with five cameras, with sound pickup equipment arranged within a preset distance of each camera. The five cameras respectively shoot the teacher, the blackboard, the students, the front panorama and the rear panorama. While the teacher is lecturing, the first camera shoots the lecturing video and audio data. During interaction, when the teacher asks a question, the first camera shoots the teacher asking; when a student answers, the view switches to the picture of the answering student shot by the second camera. The switching can be performed using artificial intelligence (AI), intelligent recognition technology and big data technology, so that recording is targeted and switching is seamless. During switching, prompt information can be added to each video segment; for example, keywords such as "first lecturing video and audio data", "first interactive video and audio data" and "first question-answering video and audio data" can be added to the corresponding segments as prompts.
The class attendance video and audio data comprise panoramic video and audio data of the students in class; the students' class attendance state can be recorded panoramically by a camera to obtain the class attendance video and audio data. Subsequently, through intelligent recognition technology, specific students can be identified and compared with student image information in a database, so that specific student individuals are determined and the emotion information of each student individual during a subject can be analyzed.
Subsequently, by way of example and not limitation, the lecturing video and audio data and the class attendance video and audio data may be merged into a single output stream and stored on the server; during playback they may be displayed in two sub-windows on one page of a terminal.
Step 102, carrying out voice feature extraction on the classroom teaching video and audio data.
In the classroom, to prevent classroom soft violence, i.e. verbal violence by the teacher, speech recognition can be applied to the teacher's speech to judge whether verbal violence exists. The specific implementation is as follows:
firstly, preprocessing a voice signal in video and audio data for classroom teaching to obtain a first digital signal; then, carrying out feature extraction on the first digital signal to obtain feature parameters; then, the similarity of the characteristic parameters and a reference template in a pre-built restrictive language model library is scored through an acoustic model, a language model and a pronunciation dictionary; and finally, determining whether the limiting language exists in the classroom teaching video-audio data or not according to the score.
Specifically, the first digital signal is obtained after the speech signal is sequentially sampled, quantized, pre-emphasized, framed and windowed.
Sampling divides the analog audio waveform in time, and quantization stores each sampled amplitude as an integer value. Pre-emphasis is applied to the speech signal to emphasize its high-frequency part, remove the influence of lip radiation and increase the high-frequency resolution of the speech. Pre-emphasis is typically realized by a first-order FIR high-pass digital filter with transfer function H(z) = 1 - a*z^-1, where a is the pre-emphasis coefficient, 0.9 < a < 1.0. Let x(n) be the speech sample value at time n; the result after pre-emphasis is y(n) = x(n) - a*x(n-1), where a = 0.98. The speech signal is short-time stationary (it can be considered approximately unchanged within 10-30 ms), so it can be divided into short segments for processing, i.e. framing; framing is realized by weighting with a movable window of finite length. Typically there are about 33-100 frames per second, as the case may be. The usual framing method is overlapped segmentation: the overlap between adjacent frames is called the frame shift, and the ratio of frame shift to frame length is generally 0-0.5. Windowing is typically done by applying a Hamming window or a rectangular window, which increases the attenuation of the high-frequency sidelobes.
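As an illustration only (not part of the patent text), the preprocessing chain above can be sketched in Python. The 25 ms frame length and 10 ms frame shift are assumed values, chosen to respect the 0-0.5 shift-to-length ratio mentioned above; the signal is assumed to be at least one frame long.

```python
import numpy as np

def preprocess(x, fs=16000, a=0.98, frame_ms=25, shift_ms=10):
    """Pre-emphasize, frame, and window a speech signal (sketch)."""
    # Pre-emphasis: y(n) = x(n) - a*x(n-1), boosting high frequencies.
    y = np.append(x[0], x[1:] - a * x[:-1])
    frame_len = int(fs * frame_ms / 1000)
    shift = int(fs * shift_ms / 1000)
    n_frames = 1 + (len(y) - frame_len) // shift
    # Overlapped segmentation, each frame weighted by a Hamming window.
    win = np.hamming(frame_len)
    frames = np.stack([y[i * shift:i * shift + frame_len] * win
                       for i in range(n_frames)])
    return frames  # shape: (n_frames, frame_len)
```

With these assumed parameters, one second of 16 kHz audio yields 98 frames of 400 samples each.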
Different characteristic parameters can be extracted depending on the purpose. The main characteristic parameters include linear prediction cepstral coefficients (LPCC), perceptual linear prediction (PLP) coefficients and Mel-frequency cepstral coefficients (MFCC).
Taking MFCC as an example, a Fast Fourier Transform (FFT) may be used to convert the first digital signal from a time-domain signal to a frequency-domain signal; then the frequency-domain signal is filtered by a bank of triangular filters distributed on the Mel scale; next, a Discrete Cosine Transform (DCT) is applied to the vector formed by the outputs of the triangular filters; finally, the first N DCT coefficients are taken as the characteristic parameters of the first digital signal.
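A minimal sketch of these MFCC steps (FFT, Mel-scale triangular filter bank, log, DCT) might look as follows. The filter count and the number of kept coefficients are illustrative assumptions; scipy is used only for the DCT.

```python
import numpy as np
from scipy.fft import dct

def mfcc(frames, fs=16000, n_filters=26, n_ceps=13):
    """Compute MFCCs for windowed frames (illustrative sketch)."""
    n_fft = frames.shape[1]
    # Power spectrum of each frame via the real FFT.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Triangular filters evenly spaced on the Mel scale.
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    # Filter-bank energies -> log -> DCT; keep the first n_ceps coefficients.
    energies = np.maximum(power @ fbank.T, 1e-10)
    return dct(np.log(energies), type=2, norm='ortho')[:, :n_ceps]
```

Applied to the windowed frames from the previous step, this yields one 13-dimensional feature vector per frame.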
The similarity between the characteristic parameters and the reference templates in the pre-built restrictive language model library is scored separately by the acoustic model, the language model and the pronunciation dictionary, giving a first score for the acoustic model, a second score for the language model and a third score for the pronunciation dictionary; the three scores are then fused by weighting to obtain a final score, and whether the language is restrictive is judged from this score. For example, if insult templates such as "idiot" exist in the pre-built restrictive language model library and the score after scoring is 98, it can be determined that restrictive language exists.
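The weighted score fusion can be sketched as follows; the weights and the decision threshold are illustrative assumptions, since the text does not fix concrete values.

```python
def fuse_scores(acoustic, language, lexicon,
                weights=(0.5, 0.3, 0.2), threshold=90):
    """Weighted fusion of the three similarity scores (sketch).

    The weights and threshold are hypothetical presets, not patent values.
    """
    final = (weights[0] * acoustic
             + weights[1] * language
             + weights[2] * lexicon)
    return final, final >= threshold  # True => restrictive language found

# Scores near the "98" case from the text trigger a detection.
score, restrictive = fuse_scores(98, 97, 99)
```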
Step 103, judging, according to the extraction result, whether restrictive language exists in the classroom teaching video and audio data.
Step 104, generating first prompt information when restrictive language exists.
Specifically, when restrictive language exists, first prompt information may be generated, such as a prompt phrase like "mind your wording" or a beep-like prompt tone, and sent to the terminal of the teacher to give the teacher a notice.
Further, if the teacher's behavior still does not change after being prompted, the matter can be reported to a higher department.
Specifically, after step 104, the method further includes:
generating a first keyword according to the first prompt information;
and adding the first keyword into the classroom video and audio data according to the timestamp of the first prompt information. In this way, the first prompt information is embedded in the classroom video and audio data, so that when the educational administration center or the education supervision department searches for the first keyword, the video in which the teacher used restrictive language can be retrieved in time.
Still further, step 104 is followed by:
calculating the number of the first prompt messages;
and when the number of first prompt messages is greater than a preset first number threshold, generating first early-warning information. The first early-warning information can be sent to a first terminal, which may be the terminal of the department that evaluates teaching, so that teachers can be assessed in teaching evaluation; this constrains teachers and helps avoid classroom soft violence.
Step 105, acquiring emotion information according to the class attendance video and audio data.
The class attendance video and audio data contain emotion information, including but not limited to: normal, happy, sad, surprised, angry, blushing, over-excited, grimacing and crying.
Step 106, acquiring the number of abnormal student individuals according to the emotion information.
Specifically, in one example, emotion information of each student during teaching of a first subject at the first teaching site within a second preset duration is first acquired, where the second preset duration is a period within the first preset duration. Then, a score is set for each emotion according to the emotion information to obtain a first curve of each student's emotion over time, and a first number of times that the change rate of the first curve exceeds a preset first change rate threshold is calculated. Finally, when the first number of times is greater than a preset first number-of-times threshold, the student is determined to be an abnormal student individual, and the number of abnormal student individuals at the first teaching site within the first preset duration is obtained from the number of abnormal student individuals for the first subject within the second preset duration.
In particular, a score may be assigned to each emotion: for example, a score of 5 for normal and a score of 10 for crying, thereby forming a first curve of each student's emotion over time. The number of abnormal student individuals in each class in the first teaching place is calculated, and the numbers for all classes are added to obtain the number of abnormal student individuals in the first teaching place within the first preset duration.
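The scoring-and-rate-of-change rule above can be sketched as follows. The particular scores, the change-rate threshold, the count threshold and the sampling interval are illustrative assumptions, not values fixed by the description.

```python
# Illustrative emotion scores; the text only fixes "normal" = 5 and "crying" = 10.
EMOTION_SCORES = {"normal": 5, "happy": 6, "sad": 8, "angry": 9, "crying": 10}

def is_abnormal_individual(emotions, rate_threshold=2.0, count_threshold=3, dt=1.0):
    """emotions: chronological list of emotion labels for one student,
    sampled every dt time units.

    Builds the 'first curve' of emotion score over time, counts the moments
    at which its rate of change exceeds rate_threshold, and flags the student
    as an abnormal individual when that count exceeds count_threshold."""
    scores = [EMOTION_SCORES.get(e, 5) for e in emotions]
    exceed_count = sum(
        1 for a, b in zip(scores, scores[1:]) if abs(b - a) / dt > rate_threshold
    )
    return exceed_count > count_threshold
```

Summing these per-class flags over all classes then yields the number of abnormal student individuals in the first teaching place, as described above.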
In another example, the duration and the number of times for which a student individual's emotion information indicates abnormal emotion are acquired; when the duration is greater than a preset first duration threshold and the number of times is greater than a preset first count threshold, the student is determined to be an abnormal student individual.
Specifically, the duration and frequency of abnormal emotion of each student in a class are calculated on a per-student basis to determine the abnormal student individuals, and the abnormal student individuals of all classes within the first preset duration are added to obtain the total number of abnormal student individuals.
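A minimal sketch of this duration-and-count rule is given below. The set of abnormal labels, both thresholds, and the treatment of one contiguous run of abnormal samples as a single "episode" are assumptions made for illustration.

```python
from itertools import groupby

def is_abnormal_by_episodes(emotions, abnormal=frozenset({"sad", "crying", "angry"}),
                            duration_threshold=3.0, count_threshold=2, dt=1.0):
    """emotions: chronological emotion labels sampled every dt time units.

    A student is flagged as an abnormal individual when the total time spent
    in abnormal emotion exceeds duration_threshold AND the number of distinct
    abnormal episodes exceeds count_threshold."""
    # Group consecutive samples; keep the lengths of the abnormal runs only.
    episodes = [len(list(g)) for is_abn, g in
                groupby(emotions, key=lambda e: e in abnormal) if is_abn]
    total_duration = sum(episodes) * dt
    return total_duration > duration_threshold and len(episodes) > count_threshold
```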
Abnormal emotions may include negative emotions such as sadness, crying and anger, and may also include excessive positive emotions such as over-excitement. An expression database stores models of these expressions, and the recording and broadcasting system can identify abnormal emotions through AI recognition.
And step 107, acquiring the environment information parameters.
The environmental information parameters can be acquired in real time during teaching; they include the current indoor temperature, the current indoor humidity, the current indoor light intensity, and the noise level in decibels.
Specifically, the execution body of this application, such as a processor, can acquire the indoor temperature measured by a temperature sensor in the classroom, the light intensity collected by a light sensor, the noise level measured by a decibel meter, and the indoor humidity measured by a humidity sensor. It can thereby be judged whether the teaching environment of the first teaching place is problematic, so that the objective teaching environment is conveniently associated with the students' emotions and analyzed comprehensively when emotional problems arise.
And step 108, judging the influence of the environmental information parameters on the number of the abnormal student individuals according to the number of the abnormal student individuals and the corresponding environmental information parameters.
Specifically, in one example, step 108 includes:
when, within the first preset duration, the difference between the decibel level and a preset decibel threshold, the difference between the temperature and a preset temperature threshold, the difference between the humidity and a preset humidity threshold, and the difference between the light intensity and a preset light intensity threshold are all within a preset range at each moment, yet the number of abnormal student individuals jumps compared with the previous moment, determining that the environmental information parameters have no influence on the number of abnormal student individuals, and generating second prompt information indicating that the environmental information parameters have no influence on the number of abnormal student individuals;
when, within the first preset duration, the number of abnormal student individuals at the current moment jumps compared with the previous moment, and a first number of the differences between the decibel level, temperature, humidity and light intensity and their respective preset thresholds fall outside the preset range, determining that the environmental information parameters influence the number of abnormal student individuals, and generating third prompt information indicating that the environmental information parameters influence the number of abnormal student individuals.
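The two branches above can be sketched as a single check. The threshold values, the allowed ranges and the dictionary keys are illustrative assumptions; the description only fixes the logic (all-in-range plus a jump rules the environment out, a first number out-of-range plus a jump rules it in).

```python
def judge_environment_influence(params, thresholds, allowed_range, count_jumped,
                                first_number=1):
    """params / thresholds / allowed_range: dicts keyed by 'decibel',
    'temperature', 'humidity', 'light'. count_jumped: whether the number of
    abnormal student individuals jumped versus the previous moment.

    Returns the second prompt, the third prompt, or None when no jump occurred."""
    out_of_range = sum(
        1 for k in thresholds
        if abs(params[k] - thresholds[k]) > allowed_range[k]
    )
    if not count_jumped:
        return None
    if out_of_range == 0:
        return "second prompt: environmental parameters have no influence"
    if out_of_range >= first_number:
        return "third prompt: environmental parameters influence the abnormal count"
```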
Curves of the temperature, humidity, light intensity and decibel level of the first teaching place over the first preset duration may be generated and placed in the same coordinate system. When the number of abnormal student individuals increases suddenly, it is judged whether any of the environmental-parameter curves exhibits a sudden change; when the number of environmental-parameter curves exhibiting a sudden change reaches a certain number, such as the first number, the jump in the number of abnormal student individuals is judged to be caused by the change in environmental information.
In another example, a decibel curve, a temperature curve, a humidity curve and a light intensity curve in the same coordinate system are first generated from the decibel level, temperature, humidity and light intensity over the first preset duration. Then, difference curves of the decibel curve, the temperature curve, the humidity curve and the light intensity curve are obtained from their differences with a preset decibel curve, a preset temperature curve, a preset humidity curve and a preset light intensity curve, respectively. Finally, when the number of abnormal student individuals at the current moment jumps compared with the previous moment, it is judged whether any of the difference curves exhibits a sudden change: when there is no sudden change, it is determined that the environmental information parameters have no influence on the number of abnormal student individuals; when there is a sudden change consistent with the jump in the number of abnormal student individuals, it is determined that the environmental information parameters influence that number. The emotion information of the student individuals is thus analyzed in combination with the teaching environment, improving the accuracy of the emotion analysis.
The consistency here can mean a synchronous jump, or that the jump in the environmental information precedes the jump in the number of abnormal student individuals, thereby accounting for the delay with which emotions change.
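The difference-curve example above can be sketched as follows. NumPy is assumed for the array arithmetic; the mutation threshold and the `lead` window (how far an environmental jump may precede the emotional jump and still count as consistent) are illustrative assumptions.

```python
import numpy as np

def env_causes_jump(measured, preset, jump_index, mutation_threshold=5.0, lead=2):
    """measured / preset: dicts of equal-length sample arrays keyed by
    'decibel', 'temperature', 'humidity', 'light'. jump_index: sample index
    at which the abnormal-student count jumped versus the previous sample.

    A mutation in any difference curve is 'consistent' with the jump when it
    occurs at the same index or up to `lead` samples earlier (emotion may
    react with a delay). Returns True when the environment influences the
    abnormal-student count, False otherwise."""
    for key in measured:
        diff = np.asarray(measured[key], float) - np.asarray(preset[key], float)
        steps = np.abs(np.diff(diff))          # step-to-step change of the difference curve
        window = steps[max(0, jump_index - 1 - lead): jump_index]
        if window.size and window.max() > mutation_threshold:
            return True
    return False
```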
It can be understood that the emotion information corresponding to moments at which restrictive language is present may be treated separately: for example, when the environment is normal and a jump in the number of abnormal student individuals occurs exactly at the moment restrictive language appears, the jump may be considered to be caused by the restrictive language rather than by the environment.
Further, this application still includes:
Firstly, classroom behavior information of student individuals is acquired; then, the classroom interaction participation of the first teaching place in each subject is calculated from the classroom behavior information; next, the classroom interaction participation of each subject is compared with a preset participation threshold; finally, whether the first teaching place has a teaching environment defect is determined according to the comparison result.
Specifically, the classroom behavior information includes but is not limited to hand-raising actions, and the classroom interaction participation of each subject can be calculated from the proportion of students raising their hands. For example, if teacher A asks questions 3 times in a class and the hand-raising rates are 50%, 60% and 70% respectively, the average value can be taken as the classroom participation, and the preset participation threshold may be 65%. If the classroom participation of several subjects is smaller than the preset threshold within the first preset duration, it can be determined that the first teaching place has a teaching environment defect.
Furthermore, the classroom participation of the same teacher teaching the same subject to the same class in a first teaching place and a second teaching place can be analyzed over a long period. If the long-term analysis shows that the classroom participation in the first teaching place is consistently smaller than that in the second teaching place, it can be determined that the first teaching place has a teaching environment defect, and the teaching environment of the first teaching place, namely the environmental information parameters, can subsequently be inspected so that the unfavourable environment is adjusted.
By applying the teaching environment safety analysis method provided by the embodiment of the invention, language violence in the teaching environment and the environmental conditions of the teaching place are both detected, and the safety of the teaching environment is analyzed, so that adverse environments can be improved and the teaching effect enhanced.
The second embodiment of the invention provides equipment which comprises a memory and a processor, wherein the memory is used for storing programs, and the memory can be connected with the processor through a bus. The memory may be a non-volatile memory such as a hard disk drive and a flash memory, in which a software program and a device driver are stored. The software program is capable of performing various functions of the above-described methods provided by embodiments of the present invention; the device drivers may be network and interface drivers. The processor is used for executing a software program, and the software program can realize the method provided by the first embodiment of the invention when being executed.
A third embodiment of the present invention provides a computer program product including instructions, which, when the computer program product runs on a computer, causes the computer to execute the method provided in the first embodiment of the present invention.
The fourth embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method provided in the first embodiment of the present invention is implemented.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the components and steps of the various examples have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, a software module executed by a processor, or a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (7)
1. A teaching environment safety analysis method, characterized by comprising the following steps:
acquiring classroom video and audio data of a first teaching site within a first preset time length; the classroom video and audio data comprise classroom teaching video and audio data and classroom listening video and audio data;
carrying out voice feature extraction on the classroom teaching video-audio data;
judging whether restrictive language exists in the classroom teaching video-audio data or not according to the extraction result;
when the restrictive language exists, generating first prompt information;
acquiring emotion information according to the class attendance video and audio data;
acquiring the number of abnormal individual students according to the emotion information;
acquiring an environment information parameter;
judging the influence of the environmental information parameters on the number of the abnormal student individuals according to the number of the abnormal student individuals and the corresponding environmental information parameters;
wherein the environmental information parameters include the decibel level, temperature, humidity and light intensity of the first teaching place, and the judging of the influence of the environmental information parameters on the number of abnormal student individuals according to the number of abnormal student individuals and the corresponding environmental information parameters specifically comprises:
generating a decibel curve, a temperature curve, a humidity curve and a light intensity curve under the same coordinate system according to the decibel, the temperature, the humidity and the light intensity of the first preset time;
obtaining a difference curve of the decibel curve, a difference curve of the temperature curve, a difference curve of the humidity curve and a difference curve of the light intensity curve according to the differences of the decibel curve, the temperature curve, the humidity curve and the light intensity curve with a preset decibel curve, a preset temperature curve, a preset humidity curve and a preset light intensity curve, respectively;
when the number of abnormal student individuals at the current moment jumps compared with the number of abnormal student individuals at the previous moment, judging whether the difference curve of the decibel curve, the difference curve of the temperature curve, the difference curve of the humidity curve or the difference curve of the light intensity curve has a sudden change;
when no mutation exists, determining that the environmental information parameters have no influence on the number of abnormal individual students;
when the sudden change exists and the sudden change is consistent with the jump of the number of the abnormal individual students, determining that the environmental information parameters have influence on the number of the abnormal individual students;
wherein the method further comprises:
acquiring classroom behavior information of individual students;
according to the classroom behavior information, calculating classroom interaction participation of the first teaching site in each subject;
comparing the classroom interaction participation of each subject with a preset participation threshold;
determining whether the first teaching field has a teaching environment defect or not according to the comparison result;
the environmental information parameters include the decibel level, temperature, humidity and light intensity of the first teaching place, and the judging of the influence of the environmental information parameters on the number of abnormal student individuals according to the number of abnormal student individuals and the corresponding environmental information parameters specifically comprises:
when, within the first preset duration, the difference between the decibel level and a preset decibel threshold, the difference between the temperature and a preset temperature threshold, the difference between the humidity and a preset humidity threshold, and the difference between the light intensity and a preset light intensity threshold are all within a preset range at each moment, yet the number of abnormal student individuals jumps compared with the previous moment, determining that the environmental information parameters have no influence on the number of abnormal student individuals, and generating second prompt information indicating that the environmental information parameters have no influence on the number of abnormal student individuals;
when, within the first preset duration, the number of abnormal student individuals at the current moment jumps compared with the previous moment, and a first number of the differences between the decibel level and the preset decibel threshold, between the temperature and the preset temperature threshold, between the humidity and the preset humidity threshold, and between the light intensity and the preset light intensity threshold fall outside the preset range, determining that the environmental information parameters influence the number of abnormal student individuals, and generating third prompt information indicating that the environmental information parameters influence the number of abnormal student individuals.
2. The method according to claim 1, wherein the determining whether the limiting language exists in the classroom teaching video-audio data according to the extracted result specifically comprises:
preprocessing a voice signal in the classroom teaching video-audio data to obtain a first digital signal;
performing feature extraction on the first digital signal to obtain feature parameters;
scoring, by means of an acoustic model, a language model and a pronunciation dictionary, the similarity between the characteristic parameters and reference templates in a pre-built restrictive language model library;
and determining whether the limiting language exists in the video and audio data for classroom teaching according to the score.
3. The method according to claim 1, wherein the obtaining of the number of abnormal student individuals according to the emotion information specifically comprises:
acquiring emotion information of each student when a first subject gives lessons in a first teaching place within a second preset time; the second preset time length is a period of time length in the first preset time length;
setting a score for each emotion according to the emotion information to obtain a first curve of the change of the emotion of each student along with time;
calculating a first count of the moments at which the rate of change of the first curve is greater than a preset first change-rate threshold;
when the first count is larger than a preset first count threshold, determining that the student is an abnormal individual;
and obtaining the number of the abnormal student individuals in the first teaching place in the first preset time according to the number of the abnormal student individuals in the first subject in the second preset time.
4. The method according to claim 1, wherein the obtaining of the number of abnormal student individuals according to the emotion information specifically comprises:
acquiring the duration and the number of times for which the emotion information of the student individual indicates abnormal emotion;
and when the duration is greater than a preset first duration threshold and the number of times is greater than a preset first count threshold, determining that the student is an abnormal student individual.
5. The method of claim 4, wherein the abnormal emotion comprises: sadness, crying and anger.
6. The method of claim 1, wherein after generating the first prompt message when the restrictive language is present, the method further comprises:
calculating the number of the first prompt messages;
and when the number of the first prompt messages is larger than a preset first number threshold value, generating first early warning messages.
7. The method of claim 1, further comprising, after generating the first prompt message when the restrictive language is present:
generating a first keyword according to the first prompt message;
and adding the first keyword into the classroom video and audio data according to the timestamp of the first prompt message.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911061807.5A CN110808066B (en) | 2019-11-01 | 2019-11-01 | Teaching environment safety analysis method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110808066A CN110808066A (en) | 2020-02-18 |
CN110808066B true CN110808066B (en) | 2022-06-14 |
Family
ID=69500924
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911061807.5A Active CN110808066B (en) | 2019-11-01 | 2019-11-01 | Teaching environment safety analysis method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110808066B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118629417A (en) * | 2024-08-13 | 2024-09-10 | 华中师范大学 | Multi-mode classroom teacher teaching language behavior analysis method and system |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050053908A1 (en) * | 2003-09-04 | 2005-03-10 | Eazy Softech Private Limited | Education management system, method and computer program therefor |
JP5957933B2 (en) * | 2012-02-13 | 2016-07-27 | 富士通株式会社 | Class evaluation calculation method, class evaluation calculation program, and class evaluation calculation apparatus |
CN107146177B (en) * | 2017-04-21 | 2021-02-12 | 阔地教育科技有限公司 | Teaching system and method based on artificial intelligence technology |
CN108108903A (en) * | 2017-12-26 | 2018-06-01 | 重庆大争科技有限公司 | Classroom teaching quality assessment system |
CN108109445B (en) * | 2017-12-26 | 2019-12-10 | 重庆大争科技有限公司 | Teaching course condition monitoring method |
CN108281052B (en) * | 2018-02-09 | 2019-11-01 | 郑州市第十一中学 | A kind of on-line teaching system and online teaching method |
CN108648757B (en) * | 2018-06-14 | 2020-10-16 | 北京中庆现代技术股份有限公司 | Analysis method based on multi-dimensional classroom information |
CN109035089A (en) * | 2018-07-25 | 2018-12-18 | 重庆科技学院 | A kind of Online class atmosphere assessment system and method |
CN109859078A (en) * | 2018-12-24 | 2019-06-07 | 山东大学 | A kind of student's Learning behavior analyzing interference method, apparatus and system |
CN109727167B (en) * | 2019-01-07 | 2023-05-26 | 北京汉博信息技术有限公司 | Teaching auxiliary system |
CN110059614A (en) * | 2019-04-16 | 2019-07-26 | 广州大学 | A kind of intelligent assistant teaching method and system based on face Emotion identification |
- 2019-11-01: CN CN201911061807.5A patent granted as CN110808066B (en), status Active
Also Published As
Publication number | Publication date |
---|---|
CN110808066A (en) | 2020-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Kartushina et al. | The effect of phonetic production training with visual feedback on the perception and production of foreign speech sounds | |
Cucchiarini et al. | Oral proficiency training in Dutch L2: The contribution of ASR-based corrective feedback | |
Lynch et al. | Listening | |
Cen et al. | A real-time speech emotion recognition system and its application in online learning | |
CN109697976B (en) | Pronunciation recognition method and device | |
CN110930781B (en) | Recording and broadcasting system | |
CN111612352A (en) | Student expression ability assessment method and device | |
McGuire | A brief primer on experimental designs for speech perception research | |
US20220015687A1 (en) | Method for Screening Psychiatric Disorder Based On Conversation and Apparatus Therefor | |
CN110808066B (en) | Teaching environment safety analysis method | |
CN110808075B (en) | Intelligent recording and broadcasting method | |
DE102020134752B4 (en) | METHOD OF EVALUATING THE QUALITY OF READING A TEXT, COMPUTER PROGRAM PRODUCT, COMPUTER READABLE MEDIA AND EVALUATION DEVICE | |
Lavechin et al. | Statistical learning models of early phonetic acquisition struggle with child-centered audio data | |
CN110826796A (en) | Score prediction method | |
Diaz | Towards Improving Japanese EFL Learners' Pronunciation: The Impact of Teaching Suprasegmentals on Intelligibility | |
Palo et al. | Effect of phonetic onset on acoustic and articulatory speech reaction times studied with tongue ultrasound | |
Alhinti et al. | The Dysarthric expressed emotional database (DEED): An audio-visual database in British English | |
Altalmas et al. | Lips tracking identification of a correct Quranic letters pronunciation for Tajweed teaching and learning | |
Osborne | The L2 perception of initial English/h/and/ɹ/by Brazilian Portuguese learners of English | |
Liu et al. | Design of Voice Style Detection of Lecture Archives | |
CN110853428A (en) | Recording and broadcasting control method and system based on Internet of things | |
Zhang | Application of Speech Recognition in English PronunciationCorrection | |
CN116705070B (en) | Method and system for correcting speech pronunciation and nasal sound after cleft lip and palate operation | |
ŠIMÁČKOVÁ | Czech accent in English: Linguistics and biometric speech technologies | |
Tutul et al. | Sound Recognition with a Humanoid Robot for a Quiz Game in an Educational Environment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||