CN105608960A - Spoken language formative teaching method and system based on multi-parameter analysis - Google Patents

Spoken language formative teaching method and system based on multi-parameter analysis

Info

Publication number
CN105608960A
CN105608960A (application CN201610057339.4A)
Authority
CN
China
Prior art keywords
pronunciation
voice
evaluation
accuracy
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610057339.4A
Other languages
Chinese (zh)
Inventor
李心广
王桂珍
陈君宇
王泽铿
陈伟峰
徐集优
张胜斌
李升恒
王晓杰
邱嘉敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Foreign Studies
Original Assignee
Guangdong University of Foreign Studies
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Foreign Studies filed Critical Guangdong University of Foreign Studies
Priority to CN201610057339.4A priority Critical patent/CN105608960A/en
Publication of CN105608960A publication Critical patent/CN105608960A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/01 Assessment or evaluation of speech recognition systems
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00 Teaching not covered by other main groups of this subclass
    • G09B19/06 Foreign languages
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/04 Segmentation; Word boundary detection
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
    • G10L15/142 Hidden Markov Models [HMMs]
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L15/18 Speech classification or search using natural language modelling
    • G10L15/183 Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L15/187 Phonemic context, e.g. pronunciation rules, phonotactical constraints or phoneme n-grams
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/03 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/221 Announcement of recognition results

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Artificial Intelligence (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a spoken language formative teaching method and system based on multi-parameter analysis. The method comprises the following steps: obtaining spoken language test speech and preprocessing it; extracting feature parameters from the preprocessed speech; performing multi-parameter evaluation on the feature parameters to obtain a multi-parameter evaluation result; performing graded processing on the multi-parameter evaluation result to obtain comprehensive pronunciation guidance; and forming personal learning information, class learning information and paragraph learning information from the comprehensive pronunciation guidance according to preset rules. The method and system provided by the embodiments of the invention test spoken language against multiple parameters and process the test results, and can thereby meet students' learning needs and teachers' teaching needs.

Description

A spoken-language formative teaching method and system based on multi-parameter analysis
Technical field
The present invention relates to the technical field of speech recognition and evaluation, and in particular to a spoken-language formative teaching method and system based on multi-parameter analysis.
Background technology
Research on computer-aided instruction systems is currently a hot topic. The College English Curriculum Teaching Requirements issued by the Ministry of Education point out that "the new teaching model should be supported by modern information technology, network technology in particular; while making full use of modern information technology, it should reasonably inherit the strengths of the traditional teaching model and bring the advantages of traditional classroom teaching into play, with corresponding face-to-face tutoring to guarantee the learning effect." Formative assessment, also called process evaluation, is the immediate, dynamic and repeated evaluation of students during the teaching process; it focuses on timely feedback in order to strengthen and improve students' learning.
Current teaching evaluation systems do not make good use of the formative-assessment idea of "understanding students' learning situation through appropriate teaching means, so that teachers and students can adjust their teaching or learning methods after carefully analysing that situation". Most systems merely report students' learning situation to the teacher and do not feed results back to the students, so students' own initiative is not brought into play. In addition, when presenting the learning situation to teachers, most existing systems only produce simple descriptive statistics of test scores; such statistics can only reflect the overall profile of the tested group, fail to drill down to individual students and individual test questions, and cannot show the trend of a student's level over a period of time.
Summary of the invention
The object of the embodiments of the present invention is to provide a spoken-language formative teaching method and system based on multi-parameter analysis, which tests spoken language against multiple parameters and processes the test results, so as to meet the needs of students' learning and teachers' teaching.
To achieve this object, in one aspect, an embodiment of the present invention provides a spoken-language formative teaching method based on multi-parameter analysis, comprising:
obtaining spoken-language test speech and preprocessing the spoken-language test speech;
extracting feature parameters from the preprocessed spoken-language test speech;
performing multi-parameter evaluation on the feature parameters to obtain a multi-parameter evaluation result;
performing graded processing on the multi-parameter evaluation result to obtain comprehensive pronunciation guidance;
forming personal learning information, class learning information and paragraph learning information according to the comprehensive pronunciation guidance and preset rules.
Further, the preprocessing comprises pre-emphasis, framing, windowing and endpoint detection.
Further, extracting feature parameters from the preprocessed spoken-language test speech comprises:
performing word segmentation on the preprocessed spoken-language test speech and cutting the spoken-language test speech into speech segments; and extracting the speech keywords of the speech segments.
Further, the parameters of the multi-parameter evaluation comprise pronunciation accuracy, emotion, stress, speech rate, rhythm and intonation;
performing multi-parameter evaluation on the feature parameters to obtain a multi-parameter evaluation result comprises:
obtaining a pronunciation accuracy evaluation of the speech keywords according to the accuracy of pronunciation recognition of the speech keywords;
obtaining a pronunciation emotion accuracy evaluation of the speech keywords according to the accuracy of the pronunciation emotion of the speech keywords;
obtaining a pronunciation stress accuracy evaluation of the speech keywords according to the accuracy of the pronunciation stress of the speech keywords;
obtaining a pronunciation speech-rate evaluation of the speech keywords according to the speed at which the speech keywords are pronounced;
obtaining a pronunciation rhythm evaluation of the speech keywords according to the rhythm with which the speech keywords are pronounced;
obtaining a pronunciation intonation accuracy evaluation of the speech keywords according to the accuracy of the pronunciation intonation of the speech keywords.
Further, performing graded processing on the multi-parameter evaluation result to obtain comprehensive pronunciation guidance comprises:
forming personal learning information, class learning information and paragraph learning information according to the evaluation results for pronunciation accuracy, emotion, stress, speech rate, rhythm and intonation.
Further, the comprehensive pronunciation guidance comprises individual spoken-language scores, class score distribution information and paragraph scores;
forming personal learning information, class learning information and paragraph learning information according to the comprehensive pronunciation guidance and preset rules comprises:
forming individual overall-score information and individual average-score information based on the multiple parameters according to the individual spoken-language scores and a first preset rule;
forming class overall-score information, class students' overall scores and/or class average-score information based on the multiple parameters according to the class score distribution information and a second preset rule;
forming paragraph average-score information according to the paragraph scores and a third preset rule.
To achieve the above object, in another aspect, an embodiment of the present invention provides a spoken-language formative teaching system based on multi-parameter analysis, comprising a speech acquisition unit, a speech preprocessing unit, a speech feature extraction unit, a multi-parameter analysis unit, a comprehensive evaluation unit, an evaluation information forming unit and a standard model library;
the speech acquisition unit is used for obtaining spoken-language test speech;
the speech preprocessing unit is used for preprocessing the spoken-language test speech;
the speech feature extraction unit is used for extracting feature parameters from the preprocessed spoken-language test speech;
the multi-parameter analysis unit is used for performing multi-parameter evaluation on the feature parameters to obtain a multi-parameter evaluation result;
the comprehensive evaluation unit is used for performing graded processing on the multi-parameter evaluation result to obtain comprehensive pronunciation guidance;
the evaluation information forming unit is used for forming personal learning information, class learning information and paragraph learning information according to the comprehensive pronunciation guidance and preset rules;
the standard model library is used for storing standard speech sentences and the speech feature parameters of the standard speech sentences.
Further, the speech feature extraction unit comprises:
a word segmentation unit, used for performing word segmentation on the preprocessed spoken-language test speech and cutting the spoken-language test speech into speech segments;
a keyword extraction unit, used for extracting the speech keywords of the speech segments.
Further, the multi-parameter evaluation unit comprises:
a pronunciation accuracy evaluation unit, used for obtaining a pronunciation accuracy evaluation of the speech keywords according to the accuracy of pronunciation recognition of the speech keywords;
an emotion accuracy evaluation unit, used for obtaining a pronunciation emotion accuracy evaluation of the speech keywords according to the accuracy of the pronunciation emotion of the speech keywords;
a stress accuracy evaluation unit, used for obtaining a pronunciation stress accuracy evaluation of the speech keywords according to the accuracy of the pronunciation stress of the speech keywords;
a speech-rate evaluation unit, used for obtaining a pronunciation speech-rate evaluation of the speech keywords according to the speed at which the speech keywords are pronounced;
a rhythm evaluation unit, used for obtaining a pronunciation rhythm evaluation of the speech keywords according to the rhythm with which the speech keywords are pronounced;
an intonation accuracy evaluation unit, used for obtaining a pronunciation intonation accuracy evaluation of the speech keywords according to the accuracy of the pronunciation intonation of the speech keywords.
Further, the evaluation information forming unit comprises:
a personal evaluation information forming unit, used for forming individual overall-score information and individual average-score information based on the multiple parameters according to the individual spoken-language scores and a first preset rule;
a class evaluation information forming unit, used for forming class overall-score information, class students' overall scores and/or class average-score information based on the multiple parameters according to the class score distribution information and a second preset rule;
a paragraph evaluation information forming unit, used for forming paragraph average-score information according to the paragraph scores and a third preset rule.
In the spoken-language formative teaching method and system based on multi-parameter analysis provided by the embodiments of the present invention, the feature parameters of the spoken-language test speech are extracted and, according to these feature parameters, a pre-established hidden Markov model is used to recognize speech keywords; a multi-parameter evaluation result is obtained from the speech keywords, the multi-parameter evaluation result is processed to obtain comprehensive pronunciation guidance, and the comprehensive pronunciation guidance is displayed intuitively to teachers and students in the form of charts. This provides an online formative assessment tool for teachers' everyday spoken-language teaching and online formative guidance for students' everyday spoken-language learning. Compared with the prior art, the present invention on the one hand performs an objective multi-parameter evaluation of spoken pronunciation and feeds the scores back to the students in time, giving full play to the students' initiative; on the other hand it analyses the learning situation of individual students, individual paragraphs and individual classes in depth, providing a formative assessment basis for teachers' teaching.
Brief description of the drawings
Fig. 1 is a method flow chart of an embodiment of the spoken-language formative teaching method based on multi-parameter analysis provided by the present invention;
Fig. 2 is a diagram of individual overall-score information provided by the present invention;
Fig. 3 is a diagram of individual average-score information provided by the present invention;
Fig. 4 is a diagram of class overall-score information and class average-score information based on the multiple parameters provided by the present invention;
Fig. 5 is a distribution chart of class students' overall scores provided by the present invention;
Fig. 6 is a diagram of paragraph average-score information provided by the present invention;
Fig. 7 is a system structure diagram of an embodiment of the spoken-language formative teaching system based on multi-parameter analysis provided by the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative work fall within the protection scope of the present invention. The labels before the steps in the embodiments are only used to identify the steps more clearly and do not impose a necessary order between them. Although the embodiments of the present invention take the evaluation of English pronunciation as an example, those skilled in the art should appreciate that the present invention can also be applied to speech processing of other languages.
Referring to Fig. 1, it is a method flow chart of an embodiment of the spoken-language formative teaching method based on multi-parameter analysis provided by the present invention.
As shown in Fig. 1, the spoken-language formative teaching method based on multi-parameter analysis comprises the following steps:
S11, obtaining spoken-language test speech and preprocessing the spoken-language test speech;
The preprocessing includes, but is not limited to, pre-emphasis, framing, windowing and endpoint detection.
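As an illustration of this preprocessing chain, the following is a minimal NumPy sketch of pre-emphasis, framing and Hamming windowing. The 0.97 pre-emphasis coefficient, the 25 ms frame length and the 10 ms frame shift are common defaults assumed for the example; the disclosure does not fix specific values.

```python
import numpy as np

def preprocess(signal, sample_rate, pre_emphasis=0.97,
               frame_len_s=0.025, frame_shift_s=0.010):
    """Pre-emphasis, framing and Hamming windowing of a 1-D speech signal."""
    # Pre-emphasis: y[n] = x[n] - a * x[n-1], boosts the high-frequency band
    emphasized = np.append(signal[0], signal[1:] - pre_emphasis * signal[:-1])

    frame_len = int(round(frame_len_s * sample_rate))
    frame_shift = int(round(frame_shift_s * sample_rate))
    num_frames = 1 + int(np.ceil(max(0, len(emphasized) - frame_len) / frame_shift))

    # Zero-pad so the last frame is complete, then slice overlapping frames
    pad_len = (num_frames - 1) * frame_shift + frame_len - len(emphasized)
    padded = np.append(emphasized, np.zeros(pad_len))
    frames = np.stack([padded[i * frame_shift: i * frame_shift + frame_len]
                       for i in range(num_frames)])

    # Hamming window reduces spectral leakage at the frame edges
    return frames * np.hamming(frame_len)
```

The windowed frames produced here are the input assumed by the segmentation and feature-extraction sketches that follow.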
S12, extracting feature parameters from the preprocessed spoken-language test speech;
This step specifically comprises: performing word segmentation on the preprocessed spoken-language test speech and cutting the spoken-language test speech into speech segments; extracting the feature parameters of the speech segments; and extracting the speech keywords of the speech segments according to the feature parameters.
In a specific implementation, the word segmentation of the preprocessed spoken-language test speech preferably uses the double-threshold method: according to whether the short-time energy and the short-time zero-crossing rate reach preset thresholds, the spoken-language test speech is cut into several speech segments. The feature parameters include, but are not limited to, MFCC (Mel-Frequency Cepstral Coefficients) feature parameters.
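A minimal sketch of this idea is given below: frames are treated as active while short-time energy or zero-crossing rate stays above a preset threshold. The threshold values and the minimum-segment length are illustrative assumptions, and a full double-threshold detector would additionally use a lower threshold to extend the segment boundaries found by this coarse pass.

```python
import numpy as np

def short_time_energy(frames):
    return np.sum(frames ** 2, axis=1)

def zero_crossing_rate(frames):
    signs = np.sign(frames)
    signs[signs == 0] = 1
    return np.mean(np.abs(np.diff(signs, axis=1)) / 2, axis=1)

def double_threshold_segments(frames, energy_thresh, zcr_thresh, min_frames=5):
    """Return (start, end) frame indices of segments where speech is active."""
    energy = short_time_energy(frames)
    zcr = zero_crossing_rate(frames)
    active = (energy > energy_thresh) | (zcr > zcr_thresh)

    segments, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i
        elif not is_active and start is not None:
            if i - start >= min_frames:      # discard very short bursts
                segments.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_frames:
        segments.append((start, len(active)))
    return segments
```

With the frames from the preprocessing sketch, a call such as `double_threshold_segments(frames, 0.1, 0.3)` would yield candidate word or segment boundaries; the numeric thresholds here are placeholders to be tuned on real recordings.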
According to the feature parameters, a pre-established hidden Markov model (HMM) is used to perform keyword speech recognition on the speech segments, and the keywords used in the spoken-language test speech are extracted to obtain the spoken keywords. Hidden Markov models can be established in advance from the answer keywords of the standard speech sentences and the synonyms of those answer keywords, and stored in the standard model library to be called whenever keyword speech recognition is needed. During keyword recognition, the Mel cepstral coefficients of the spoken-language test speech can be matched against the pre-established hidden Markov models to identify the keywords used in the spoken-language test speech.
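This matching step could be sketched as follows, using the `librosa` and `hmmlearn` packages as stand-ins for MFCC extraction and hidden Markov modelling: one Gaussian HMM is trained per reference keyword (including its synonym recordings), and the model with the highest log-likelihood on a segment is taken as the recognized keyword. The five-state topology and diagonal covariances are assumptions made for the example, not the exact model configuration of the disclosure.

```python
import numpy as np
import librosa
from hmmlearn import hmm

def mfcc_features(wav_path, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=None)
    # Transpose so hmmlearn sees one observation (frame) per row
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_keyword_models(training_clips):
    """training_clips: {keyword: [wav_path, ...]} of standard pronunciations."""
    models = {}
    for keyword, paths in training_clips.items():
        feats = [mfcc_features(p) for p in paths]
        lengths = [len(f) for f in feats]
        model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=50)
        model.fit(np.vstack(feats), lengths)
        models[keyword] = model
    return models

def recognize_keyword(models, segment_wav):
    """Score one speech segment against every keyword model, return the best match."""
    feats = mfcc_features(segment_wav)
    scores = {kw: m.score(feats) for kw, m in models.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

The log-likelihood returned for the best model can also serve as a raw pronunciation-accuracy score before normalisation.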
S13, performing multi-parameter evaluation on the feature parameters to obtain a multi-parameter evaluation result;
The parameters of the multi-parameter evaluation comprise pronunciation accuracy, emotion, stress, speech rate, rhythm and intonation.
Performing multi-parameter evaluation on the feature parameters to obtain a multi-parameter evaluation result specifically comprises:
obtaining a pronunciation accuracy evaluation of the speech keywords according to the accuracy of pronunciation recognition of the speech keywords;
obtaining a pronunciation emotion accuracy evaluation of the speech keywords according to the accuracy of the pronunciation emotion of the speech keywords;
obtaining a pronunciation stress accuracy evaluation of the speech keywords according to the accuracy of the pronunciation stress of the speech keywords;
obtaining a pronunciation speech-rate evaluation of the speech keywords according to the speed at which the speech keywords are pronounced;
obtaining a pronunciation rhythm evaluation of the speech keywords according to the rhythm with which the speech keywords are pronounced;
obtaining a pronunciation intonation accuracy evaluation of the speech keywords according to the accuracy of the pronunciation intonation of the speech keywords (a sketch of the speech-rate and intonation measurements follows this list).
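As a concrete illustration of two of these measurements, the sketch below estimates speech rate as keywords per second of detected speech and derives a simple intonation score by correlating the pitch contour of the test utterance with that of a reference recording. The use of `librosa.pyin`, the truncation-based alignment and the correlation score are assumptions made for the example; the disclosure does not specify these exact measures.

```python
import numpy as np
import librosa

def speech_rate(num_keywords, segments, frame_shift_s=0.010):
    """Keywords per second of detected speech (segments are frame-index pairs)."""
    voiced_seconds = sum(end - start for start, end in segments) * frame_shift_s
    return num_keywords / voiced_seconds if voiced_seconds > 0 else 0.0

def intonation_similarity(test_wav, reference_wav):
    """Correlate the pitch contours of a test utterance and a reference utterance."""
    def pitch_contour(path):
        y, sr = librosa.load(path, sr=None)
        f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                fmax=librosa.note_to_hz("C6"))
        return f0[~np.isnan(f0)]              # keep voiced frames only

    test, ref = pitch_contour(test_wav), pitch_contour(reference_wav)
    n = min(len(test), len(ref))
    if n < 2:
        return 0.0
    # Crude alignment by truncation; a real system would time-align the contours
    return float(np.corrcoef(test[:n], ref[:n])[0, 1])
```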
S14, performing graded processing on the multi-parameter evaluation result to obtain comprehensive pronunciation guidance;
This step specifically comprises:
forming individual spoken-language scores, class score distribution information and paragraph scores from the pronunciation accuracy evaluation, the pronunciation emotion accuracy evaluation, the pronunciation stress accuracy evaluation, the pronunciation speech-rate evaluation, the pronunciation rhythm evaluation and the pronunciation intonation accuracy evaluation.
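One way to realize this grading step is to combine the six parameter scores into a single spoken-language score and then average over class and paragraph groupings. The equal weights, the column names (`student_id`, `class_id`, `paragraph_id`) and the pandas-based grouping below are an assumed sketch; the disclosure does not fix a specific aggregation formula.

```python
import pandas as pd

# Assumed equal weights for the six evaluated parameters
WEIGHTS = {"accuracy": 1, "emotion": 1, "stress": 1,
           "speed": 1, "rhythm": 1, "intonation": 1}

def overall_score(row):
    total_weight = sum(WEIGHTS.values())
    return sum(row[p] * w for p, w in WEIGHTS.items()) / total_weight

def grade(results):
    """results: one record per answer with the six parameter scores plus
    student_id, class_id and paragraph_id fields (hypothetical schema)."""
    df = pd.DataFrame(results)
    df["overall"] = df.apply(overall_score, axis=1)
    individual = df.groupby("student_id")["overall"].mean()       # personal scores
    class_dist = df.groupby("class_id")["overall"].mean()         # class distribution
    paragraph_avg = df.groupby("paragraph_id")["overall"].mean()  # per-paragraph averages
    return individual, class_dist, paragraph_avg
```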
S15, forming personal learning information, class learning information and paragraph learning information according to the comprehensive pronunciation guidance and preset rules.
This step specifically comprises:
S151, forming individual overall-score information and individual average-score information based on the multiple parameters according to the individual spoken-language scores and a first preset rule;
The individual spoken-language scores come from the test results submitted by the Android phone client; after each spoken-language test is completed, the server saves each overall individual grade and each parameter score. The first preset rule may be a rule that forms a pie chart from discrete data, or a rule that forms a radar chart from discrete data. In this embodiment, the first preset rule for forming the individual overall-score information is a rule that forms a pie chart from discrete data, and the first preset rule for forming the individual average-score information based on the multiple parameters is a rule that forms a radar chart from discrete data. Correspondingly, the individual overall-score information is presented as a pie chart as shown in Fig. 2, and the individual average-score information based on the multiple parameters is presented as a radar chart as shown in Fig. 3. This makes it easy for students to keep track of their own spoken-language learning in time and gives full play to the students' initiative.
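The first preset rule can be illustrated with matplotlib: a pie chart over grade bands for the individual's overall scores and a radar (polar) plot over the six parameter averages. The grade-band labels and the input dictionaries are illustrative assumptions, not formats prescribed by the disclosure.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_individual(overall_by_band, param_averages):
    """overall_by_band: {'excellent': 3, 'good': 5, ...} counts of tests per band.
    param_averages: {'accuracy': 82, 'emotion': 75, ...} average per parameter."""
    fig = plt.figure(figsize=(10, 4))

    # Pie chart of the individual's overall-score distribution (Fig. 2 style)
    ax1 = fig.add_subplot(1, 2, 1)
    ax1.pie(list(overall_by_band.values()), labels=list(overall_by_band.keys()),
            autopct="%1.0f%%")
    ax1.set_title("Overall score distribution")

    # Radar chart of the per-parameter averages (Fig. 3 style)
    labels = list(param_averages.keys())
    values = list(param_averages.values())
    angles = np.linspace(0, 2 * np.pi, len(labels), endpoint=False).tolist()
    values += values[:1]          # close the polygon
    angles += angles[:1]
    ax2 = fig.add_subplot(1, 2, 2, polar=True)
    ax2.plot(angles, values)
    ax2.fill(angles, values, alpha=0.25)
    ax2.set_xticks(angles[:-1])
    ax2.set_xticklabels(labels)
    ax2.set_title("Average score per parameter")
    plt.show()
```

The line chart of the second preset rule and the bar chart of the third preset rule described below can be produced analogously with `ax.plot` and `ax.bar`.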
S152, forming class overall-score information, class students' overall scores and/or class average-score information based on the multiple parameters according to the class score distribution information and a second preset rule;
After each spoken-language test is completed, the server saves the score distribution information of each class. The second preset rule may be a rule that forms a line chart from discrete data, or a rule that forms a pie chart from discrete data. Correspondingly, the class overall-score information and the class average-score information based on the multiple parameters are presented as line charts as shown in Fig. 4, and the class students' overall scores are presented, for each class a teacher teaches, as a pie chart as shown in Fig. 5. This makes it convenient for teachers to revise their teaching plans according to the situation of different classes.
S153, forming paragraph average-score information according to the paragraph scores and a third preset rule.
After each spoken-language test is completed, the server saves the information of each paragraph. The third preset rule is a rule that forms a bar chart from discrete data. Correspondingly, the paragraph average-score information is presented as a bar chart as shown in Fig. 6, which makes it convenient for teachers to teach in a targeted way according to how well students have mastered different sentences.
Through step S15, the comprehensive pronunciation guidance is displayed intuitively to teachers and students in the form of charts, thereby providing an online formative assessment tool for teachers' everyday spoken-language teaching and online formative guidance for students' everyday spoken-language learning.
In the spoken-language formative teaching method based on multi-parameter analysis provided by this embodiment of the present invention, the feature parameters of the spoken-language test speech are extracted and, according to these feature parameters, a pre-established hidden Markov model is used to recognize speech keywords; a multi-parameter evaluation result is obtained from the speech keywords, the multi-parameter evaluation result is processed to obtain comprehensive pronunciation guidance, and the comprehensive pronunciation guidance is displayed intuitively to teachers and students in the form of charts. This provides an online formative assessment tool for teachers' everyday spoken-language teaching and online formative guidance for students' everyday spoken-language learning. Compared with the prior art, the present invention on the one hand performs an objective multi-parameter evaluation of spoken pronunciation and feeds the scores back to the students in time, giving full play to the students' initiative; on the other hand it analyses the learning situation of individual students, individual paragraphs and individual classes in depth, providing a formative assessment basis for teachers' teaching.
Referring to Fig. 7, it is a system structure diagram of an embodiment of the spoken-language formative teaching system based on multi-parameter analysis provided by the present invention. The substance of the spoken-language formative teaching system based on multi-parameter analysis corresponds to the embodiment of the spoken-language formative teaching method based on multi-parameter analysis shown in Fig. 1; for details not described in this embodiment, reference may be made to the relevant description of the embodiment shown in Fig. 1.
As shown in Fig. 7, the spoken-language evaluation system based on multi-parameter analysis comprises a speech acquisition unit 21, a speech preprocessing unit 22, a speech feature extraction unit 23, a multi-parameter analysis unit 24, a comprehensive evaluation unit 25, an evaluation information forming unit 26 and a standard model library 27.
The speech acquisition unit 21 is installed on a client, such as a mobile phone, used by the spoken-language learner; the speech preprocessing unit 22, the speech feature extraction unit 23, the multi-parameter analysis unit 24, the comprehensive evaluation unit 25, the evaluation information forming unit 26 and the standard model library 27 are installed on a server. Recordings are submitted online so that the server evaluates and processes the spoken-language learner's pronunciation, and the spoken-English teacher can view a variety of visual statistical charts through the system, such as a tester's individual results, the overall class results and the paragraph results, so that the statistics serve as a basis for formative evaluation and students can be taught in a targeted way.
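The online submission path could be realized, for example, with a small HTTP endpoint on the server that accepts a recorded clip from the phone client and returns the multi-parameter scores. The Flask framework, the route name, the field names and the `score_utterance` placeholder below are assumptions for illustration only, not components named by the disclosure.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def score_utterance(wav_path):
    """Placeholder for the full pipeline (preprocess, MFCC, HMM, six evaluations)."""
    return {"accuracy": 0, "emotion": 0, "stress": 0,
            "speed": 0, "rhythm": 0, "intonation": 0}

@app.route("/evaluate", methods=["POST"])
def evaluate():
    # The phone client uploads one recorded answer as a WAV file
    audio = request.files["audio"]
    student_id = request.form.get("student_id")
    wav_path = f"/tmp/{student_id}.wav"   # hypothetical storage location
    audio.save(wav_path)

    scores = score_utterance(wav_path)
    return jsonify({"student_id": student_id, "scores": scores})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```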
The speech acquisition unit 21 is used for obtaining spoken-language test speech;
the speech preprocessing unit 22 is used for preprocessing the spoken-language test speech;
the speech feature extraction unit 23 is used for extracting feature parameters from the preprocessed spoken-language test speech;
the multi-parameter analysis unit 24 is used for performing multi-parameter evaluation on the feature parameters to obtain a multi-parameter evaluation result;
the comprehensive evaluation unit 25 is used for performing graded processing on the multi-parameter evaluation result to obtain comprehensive pronunciation guidance;
the evaluation information forming unit 26 is used for forming personal learning information, class learning information and paragraph learning information according to the comprehensive pronunciation guidance and preset rules;
the standard model library 27 is used for storing standard speech sentences and the speech feature parameters of the standard speech sentences.
Further, the speech feature extraction unit 23 comprises:
a word segmentation unit, used for performing word segmentation on the preprocessed spoken-language test speech and cutting the spoken-language test speech into speech segments;
a keyword extraction unit, used for extracting the speech keywords of the speech segments.
Further, the multi-parameter analysis unit 24 comprises:
a pronunciation accuracy evaluation unit, used for obtaining a pronunciation accuracy evaluation of the speech keywords according to the accuracy of pronunciation recognition of the speech keywords;
an emotion accuracy evaluation unit, used for obtaining a pronunciation emotion accuracy evaluation of the speech keywords according to the accuracy of the pronunciation emotion of the speech keywords;
a stress accuracy evaluation unit, used for obtaining a pronunciation stress accuracy evaluation of the speech keywords according to the accuracy of the pronunciation stress of the speech keywords;
a speech-rate evaluation unit, used for obtaining a pronunciation speech-rate evaluation of the speech keywords according to the speed at which the speech keywords are pronounced;
a rhythm evaluation unit, used for obtaining a pronunciation rhythm evaluation of the speech keywords according to the rhythm with which the speech keywords are pronounced;
an intonation accuracy evaluation unit, used for obtaining a pronunciation intonation accuracy evaluation of the speech keywords according to the accuracy of the pronunciation intonation of the speech keywords.
Further, the evaluation information forming unit 26 comprises:
a personal evaluation information forming unit, used for forming individual overall-score information and individual average-score information based on the multiple parameters according to the individual spoken-language scores and a first preset rule;
a class evaluation information forming unit, used for forming class overall-score information, class students' overall scores and/or class average-score information based on the multiple parameters according to the class score distribution information and a second preset rule;
a paragraph evaluation information forming unit, used for forming paragraph average-score information according to the paragraph scores and a third preset rule.
In summary, in the spoken-language formative teaching method and system based on multi-parameter analysis provided by the embodiments of the present invention, the feature parameters of the spoken-language test speech are extracted and, according to these feature parameters, a pre-established hidden Markov model is used to recognize speech keywords; a multi-parameter evaluation result is obtained from the speech keywords, the multi-parameter evaluation result is processed to obtain comprehensive pronunciation guidance, and the comprehensive pronunciation guidance is displayed intuitively to teachers and students in the form of charts, thereby providing an online formative assessment tool for teachers' everyday spoken-language teaching and online formative guidance for students' everyday spoken-language learning. Compared with the prior art, the present invention on the one hand performs an objective multi-parameter evaluation of spoken pronunciation and feeds the scores back to the students in time, giving full play to the students' initiative; on the other hand it analyses the learning situation of individual students, individual paragraphs and individual classes in depth, providing a formative assessment basis for teachers' teaching.
Through the description of the above embodiments, those skilled in the art can clearly understand that the present invention may be implemented by software plus the necessary general-purpose hardware, and of course also by dedicated hardware including application-specific integrated circuits, dedicated CPUs, dedicated memories, dedicated components and the like. The part of the technical solution of the present invention that in essence contributes to the prior art may be embodied in the form of a software product, which is stored in a readable storage medium such as a computer floppy disk, a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or replacement that can easily be conceived by those familiar with the technical field, within the technical scope disclosed by the present invention, shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A spoken-language formative teaching method based on multi-parameter analysis, characterized by comprising:
obtaining spoken-language test speech and preprocessing the spoken-language test speech;
extracting feature parameters from the preprocessed spoken-language test speech;
performing multi-parameter evaluation on the feature parameters to obtain a multi-parameter evaluation result;
performing graded processing on the multi-parameter evaluation result to obtain comprehensive pronunciation guidance;
forming personal learning information, class learning information and paragraph learning information according to the comprehensive pronunciation guidance and preset rules.
2. The spoken-language formative teaching method based on multi-parameter analysis according to claim 1, characterized in that the preprocessing comprises pre-emphasis, framing, windowing and endpoint detection.
3. The spoken-language formative teaching method based on multi-parameter analysis according to claim 1, characterized in that extracting feature parameters from the preprocessed spoken-language test speech comprises:
performing word segmentation on the preprocessed spoken-language test speech and cutting the spoken-language test speech into speech segments; extracting the feature parameters of the speech segments; and extracting the speech keywords of the speech segments according to the feature parameters.
4. The spoken-language formative teaching method based on multi-parameter analysis according to claim 3, characterized in that
the parameters of the multi-parameter evaluation comprise pronunciation accuracy, emotion, stress, speech rate, rhythm and intonation;
performing multi-parameter evaluation on the feature parameters to obtain a multi-parameter evaluation result comprises:
obtaining a pronunciation accuracy evaluation of the speech keywords according to the accuracy of pronunciation recognition of the speech keywords;
obtaining a pronunciation emotion accuracy evaluation of the speech keywords according to the accuracy of the pronunciation emotion of the speech keywords;
obtaining a pronunciation stress accuracy evaluation of the speech keywords according to the accuracy of the pronunciation stress of the speech keywords;
obtaining a pronunciation speech-rate evaluation of the speech keywords according to the speed at which the speech keywords are pronounced;
obtaining a pronunciation rhythm evaluation of the speech keywords according to the rhythm with which the speech keywords are pronounced;
obtaining a pronunciation intonation accuracy evaluation of the speech keywords according to the accuracy of the pronunciation intonation of the speech keywords.
5. The spoken-language formative teaching method based on multi-parameter analysis according to claim 4, characterized in that performing graded processing on the multi-parameter evaluation result to obtain comprehensive pronunciation guidance comprises:
forming individual spoken-language scores, class score distribution information and paragraph scores from the pronunciation accuracy evaluation, the pronunciation emotion accuracy evaluation, the pronunciation stress accuracy evaluation, the pronunciation speech-rate evaluation, the pronunciation rhythm evaluation and the pronunciation intonation accuracy evaluation.
6. The spoken-language formative teaching method based on multi-parameter analysis according to claim 5, characterized in that
forming personal learning information, class learning information and paragraph learning information according to the comprehensive pronunciation guidance and preset rules comprises:
forming individual overall-score information and individual average-score information based on the multiple parameters according to the individual spoken-language scores and a first preset rule;
forming class overall-score information, class students' overall scores and/or class average-score information based on the multiple parameters according to the class score distribution information and a second preset rule;
forming paragraph average-score information according to the paragraph scores and a third preset rule.
7. A spoken-language formative teaching system based on multi-parameter analysis, characterized by comprising a speech acquisition unit, a speech preprocessing unit, a speech feature extraction unit, a multi-parameter analysis unit, a comprehensive evaluation unit, an evaluation information forming unit and a standard model library;
the speech acquisition unit is used for obtaining spoken-language test speech;
the speech preprocessing unit is used for preprocessing the spoken-language test speech;
the speech feature extraction unit is used for extracting feature parameters from the preprocessed spoken-language test speech;
the multi-parameter analysis unit is used for performing multi-parameter evaluation on the feature parameters to obtain a multi-parameter evaluation result;
the comprehensive evaluation unit is used for performing graded processing on the multi-parameter evaluation result to obtain comprehensive pronunciation guidance;
the evaluation information forming unit is used for forming personal learning information, class learning information and paragraph learning information according to the comprehensive pronunciation guidance and preset rules;
the standard model library is used for storing standard speech sentences and the speech feature parameters of the standard speech sentences.
8. The spoken-language formative teaching system based on multi-parameter analysis according to claim 7, characterized in that the speech feature extraction unit comprises:
a word segmentation unit, used for performing word segmentation on the preprocessed spoken-language test speech and cutting the spoken-language test speech into speech segments;
a keyword extraction unit, used for extracting the speech keywords of the speech segments.
9. The spoken-language formative teaching system based on multi-parameter analysis according to claim 8, characterized in that the multi-parameter evaluation unit comprises:
a pronunciation accuracy evaluation unit, used for obtaining a pronunciation accuracy evaluation of the speech keywords according to the accuracy of pronunciation recognition of the speech keywords;
an emotion accuracy evaluation unit, used for obtaining a pronunciation emotion accuracy evaluation of the speech keywords according to the accuracy of the pronunciation emotion of the speech keywords;
a stress accuracy evaluation unit, used for obtaining a pronunciation stress accuracy evaluation of the speech keywords according to the accuracy of the pronunciation stress of the speech keywords;
a speech-rate evaluation unit, used for obtaining a pronunciation speech-rate evaluation of the speech keywords according to the speed at which the speech keywords are pronounced;
a rhythm evaluation unit, used for obtaining a pronunciation rhythm evaluation of the speech keywords according to the rhythm with which the speech keywords are pronounced;
an intonation accuracy evaluation unit, used for obtaining a pronunciation intonation accuracy evaluation of the speech keywords according to the accuracy of the pronunciation intonation of the speech keywords.
10. The spoken-language formative teaching system based on multi-parameter analysis according to claim 8, characterized in that the evaluation information forming unit comprises:
a personal evaluation information forming unit, used for forming individual overall-score information and individual average-score information based on the multiple parameters according to the individual spoken-language scores and a first preset rule;
a class evaluation information forming unit, used for forming class overall-score information, class students' overall scores and/or class average-score information based on the multiple parameters according to the class score distribution information and a second preset rule;
a paragraph evaluation information forming unit, used for forming paragraph average-score information according to the paragraph scores and a third preset rule.
CN201610057339.4A 2016-01-27 2016-01-27 Spoken language formative teaching method and system based on multi-parameter analysis Pending CN105608960A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610057339.4A CN105608960A (en) 2016-01-27 2016-01-27 Spoken language formative teaching method and system based on multi-parameter analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610057339.4A CN105608960A (en) 2016-01-27 2016-01-27 Spoken language formative teaching method and system based on multi-parameter analysis

Publications (1)

Publication Number Publication Date
CN105608960A 2016-05-25

Family

ID=55988862

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610057339.4A Pending CN105608960A (en) 2016-01-27 2016-01-27 Spoken language formative teaching method and system based on multi-parameter analysis

Country Status (1)

Country Link
CN (1) CN105608960A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107704996A (en) * 2017-09-12 2018-02-16 青岛大学 A kind of Teacher Evaluation System based on sentiment analysis
CN108074203A (en) * 2016-11-10 2018-05-25 中国移动通信集团公司 A kind of teaching readjustment method and apparatus
CN108648526A (en) * 2018-05-15 2018-10-12 刘光荣 A kind of Americanese phonetic symbol pronunciation training system
CN111915940A (en) * 2020-06-29 2020-11-10 厦门快商通科技股份有限公司 Method, system, terminal and storage medium for evaluating and teaching spoken language pronunciation
CN112767961A (en) * 2021-02-07 2021-05-07 哈尔滨琦音科技有限公司 Mouth sound correction method based on cloud computing
CN113516410A (en) * 2021-07-31 2021-10-19 北京翰雅科技有限公司 Language teaching system and method
CN113793238A (en) * 2021-04-26 2021-12-14 王晶 Education system and method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002072860A (en) * 2001-09-12 2002-03-12 Yasuhiko Nagasaka Multiple language learning supporting server device, terminal device and multiple language learning support system using these devices and multiple language learning supporting program
US20040214145A1 (en) * 2003-04-23 2004-10-28 Say-Ling Wen Sentence-conversation teaching system with environment and role selections and method of the same
CN103617799A (en) * 2013-11-28 2014-03-05 广东外语外贸大学 Method for detecting English statement pronunciation quality suitable for mobile device
CN103928023A (en) * 2014-04-29 2014-07-16 广东外语外贸大学 Voice scoring method and system
CN104050965A (en) * 2013-09-02 2014-09-17 广东外语外贸大学 English phonetic pronunciation quality evaluation system with emotion recognition function and method thereof
CN104732977A (en) * 2015-03-09 2015-06-24 广东外语外贸大学 On-line spoken language pronunciation quality evaluation method and system
CN104810017A (en) * 2015-04-08 2015-07-29 广东外语外贸大学 Semantic analysis-based oral language evaluating method and system

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002072860A (en) * 2001-09-12 2002-03-12 Yasuhiko Nagasaka Multiple language learning supporting server device, terminal device and multiple language learning support system using these devices and multiple language learning supporting program
US20040214145A1 (en) * 2003-04-23 2004-10-28 Say-Ling Wen Sentence-conversation teaching system with environment and role selections and method of the same
CN104050965A (en) * 2013-09-02 2014-09-17 广东外语外贸大学 English phonetic pronunciation quality evaluation system with emotion recognition function and method thereof
CN103617799A (en) * 2013-11-28 2014-03-05 广东外语外贸大学 Method for detecting English statement pronunciation quality suitable for mobile device
CN103928023A (en) * 2014-04-29 2014-07-16 广东外语外贸大学 Voice scoring method and system
CN104732977A (en) * 2015-03-09 2015-06-24 广东外语外贸大学 On-line spoken language pronunciation quality evaluation method and system
CN104810017A (en) * 2015-04-08 2015-07-29 广东外语外贸大学 Semantic analysis-based oral language evaluating method and system

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108074203A (en) * 2016-11-10 2018-05-25 中国移动通信集团公司 A kind of teaching readjustment method and apparatus
CN107704996A (en) * 2017-09-12 2018-02-16 青岛大学 A kind of Teacher Evaluation System based on sentiment analysis
CN107704996B (en) * 2017-09-12 2021-07-02 青岛大学 Teacher evaluation system based on emotion analysis
CN108648526A (en) * 2018-05-15 2018-10-12 刘光荣 A kind of Americanese phonetic symbol pronunciation training system
CN111915940A (en) * 2020-06-29 2020-11-10 厦门快商通科技股份有限公司 Method, system, terminal and storage medium for evaluating and teaching spoken language pronunciation
CN112767961A (en) * 2021-02-07 2021-05-07 哈尔滨琦音科技有限公司 Mouth sound correction method based on cloud computing
CN112767961B (en) * 2021-02-07 2022-06-03 哈尔滨琦音科技有限公司 Accent correction method based on cloud computing
CN113793238A (en) * 2021-04-26 2021-12-14 王晶 Education system and method
CN113516410A (en) * 2021-07-31 2021-10-19 北京翰雅科技有限公司 Language teaching system and method

Similar Documents

Publication Publication Date Title
CN105608960A (en) Spoken language formative teaching method and system based on multi-parameter analysis
CN104732977B (en) A kind of online spoken language pronunciation quality evaluating method and system
Kang Impact of rater characteristics and prosodic features of speaker accentedness on ratings of international teaching assistants' oral performance
Koolagudi et al. IITKGP-SESC: speech database for emotion analysis
KR100733469B1 (en) Pronunciation Test System and Method of Foreign Language
US9262941B2 (en) Systems and methods for assessment of non-native speech using vowel space characteristics
US9489864B2 (en) Systems and methods for an automated pronunciation assessment system for similar vowel pairs
Bolanos et al. Automatic assessment of expressive oral reading
Cheng Automatic assessment of prosody in high-stakes English tests.
Ahsiah et al. Tajweed checking system to support recitation
CN102723077A (en) Method and device for voice synthesis for Chinese teaching
Ghanem et al. Pronunciation features in rating criteria
Sztahó et al. A computer-assisted prosody pronunciation teaching system.
Tao et al. DNN Online with iVectors Acoustic Modeling and Doc2Vec Distributed Representations for Improving Automated Speech Scoring.
Loukina et al. Automated assessment of pronunciation in spontaneous speech
Sakamoto Investigation of factors behind foreign accent in the L2 acquisition of Japanese lexical pitch accent by adult English speakers
Zechner et al. Automatic scoring of children’s read-aloud text passages and word lists
Yarra et al. Automatic intonation classification using temporal patterns in utterance-level pitch contour and perceptually motivated pitch transformation
Kim et al. Automatic assessment of American English lexical stress using machine learning algorithms
Yamashita et al. Automatic scoring for prosodic proficiency of English sentences spoken by Japanese based on utterance comparison
Greenberg Deep Language Learning
Hussein et al. Mandarin tone perception and production by German learners
Septiyani et al. The Influence of English Song and Joox Application toward Students’ Pronunciation (A True Experimental Study at the Eighth Grade of SMPN 6 Kota Serang)
Li et al. English sentence pronunciation evaluation using rhythm and intonation
Santoz et al. Locutionary act in motivating students at SMPN 2 Wungu

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160525