CN110246514B - English word pronunciation learning system based on pattern recognition - Google Patents


Info

Publication number
CN110246514B
CN110246514B CN201910640573.3A
Authority
CN
China
Prior art keywords
standard
english
pronunciation
voice
english word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910640573.3A
Other languages
Chinese (zh)
Other versions
CN110246514A (en)
Inventor
孙烁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China University of Petroleum East China
Original Assignee
China University of Petroleum East China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China University of Petroleum East China
Priority to CN201910640573.3A
Publication of CN110246514A
Application granted
Publication of CN110246514B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00: Electrically-operated educational appliances
    • G09B5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065: Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00: Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208: Noise filtering
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination

Abstract

An English word pronunciation learning system based on pattern recognition, comprising a human-computer interaction system and a data processing system. The human-computer interaction system is used for acquiring the user's English word voice, sending interaction instructions to the user, and performing text interaction through the display screen; it comprises a voice input end, a voice output end and a display screen. Aiming at the characteristics of English pronunciation, the invention uses computer technology to extract the characteristic parameters of stress, tone and speech rate of word pronunciation, and analyzes the degree of fit between the actual human pronunciation and the system's built-in standard by methods such as normalization and pattern recognition. The invention has the advantages of a high word pronunciation feature recognition rate and high computer processing efficiency.

Description

English word pronunciation learning system based on pattern recognition
Technical Field
The invention relates to an English word pronunciation learning system based on pattern recognition, and belongs to the technical field of applied computer pattern recognition.
Background
With the internationalization of economic development, the role of English in economic life is becoming more and more important. Universities are important places for teaching professional English, and innovation in teaching methods and means is essential. English teaching is a systematic project, and words are one of its foundations; the vocabulary bases and learning abilities of different students differ to varying degrees. In actual word teaching practice in particular, problems such as students' irregular word pronunciation and the difficulty of correcting it persist. How to quantitatively evaluate students' word pronunciation level and learning ability has become one of the keys of English teaching.
The development of artificial intelligence technology has brought new opportunities to English teaching. For example, various apps now implement recognition and evaluation of user pronunciation, but in practice they show no prominent technical advantage in either the quality or the efficiency of the evaluation feedback.
Pattern recognition technology mainly comprises data acquisition, preprocessing, feature extraction and selection, classifier design and classification decision, and is mainly applied to image analysis and processing, speech recognition, sound classification, communication, computer-aided diagnosis, data mining and so on. How to effectively apply the advantages of pattern recognition in speech recognition, sound classification and data mining to the development of systems for evaluating English word pronunciation level and learning ability is therefore a new research direction in this field.
Collecting data on students' spoken pronunciation and quantitatively evaluating their English word pronunciation level and learning ability is of great significance for improving the level of English teaching and students' learning efficiency.
Disclosure of Invention
Aiming at the defects of the prior art, the invention discloses an English word pronunciation learning system based on pattern recognition.
By establishing and analyzing a large number of standard pronunciations, the invention can process the collected learner word pronunciations, determine the learner's pronunciation defects using pattern recognition technology, and finally quantify those defects technically. The quantification results can also provide a basic objective basis for subsequent teaching research.
Specifically, the invention aims to provide an English pronunciation learning system based on pattern recognition, which is used for collecting and analyzing the relevant data of the spoken pronunciation of a student and providing an effective tool for quantitatively evaluating the English spoken language level and learning ability of the student.
The technical scheme of the invention is as follows:
An English word pronunciation learning system based on pattern recognition, comprising: a human-computer interaction system and a data processing system;
The human-computer interaction system is used for collecting the user's English word voice and sending interaction instructions to the user, for example: "please repeat the word"; it is also used for text interaction through the display screen. The human-computer interaction system comprises: a voice input end, a voice output end and a display screen.
According to the invention, the preprocessing of the collected English word voice data in the data processing module preferably comprises a noise reduction step and a key parameter extraction step.
The noise reduction step filters environmental noise: according to the 80-1000 Hz frequency range of the human voice, the effective voice is extracted by filtering. The specific method is as follows:
1) convert the collected English word voice data into a sound wave time domain signal A(t), and perform a Fourier transform on A(t) to convert it into the frequency domain signal F(f), as shown in FIG. 4;
2) screen the effective human voice F(valid) according to f(low) ≤ f ≤ f(high), where f(low) = 80 Hz is the lower limit and f(high) = 1000 Hz the upper limit of the effective human voice frequency;
3) perform an inverse Fourier transform on F(valid) to obtain the time domain graph of the effective human voice signal, with time domain range t0 ~ tT, as shown in FIG. 3(b);
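As a minimal illustration (not the patent's implementation; the function name, `numpy` usage and sample rate are assumptions), steps 1)-3) amount to an FFT band-pass filter:

```python
import numpy as np

def extract_effective_voice(signal, sample_rate, f_low=80.0, f_high=1000.0):
    """Steps 1)-3): keep only the 80-1000 Hz band of the voice signal A(t)."""
    spectrum = np.fft.rfft(signal)                        # step 1: A(t) -> F(f)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= f_low) & (freqs <= f_high)           # step 2: screen F(valid)
    spectrum[~band] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))          # step 3: back to time domain
```

A production system would more likely use a windowed filter to avoid edge artifacts, but zeroing FFT bins is the most direct reading of the three steps.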
The key parameter extraction is as follows: on the basis of the effective human voice signal, i.e. the time domain graph A(valid) and the frequency domain graph F(valid), at least three key sound parameters (stress, tone and speech rate) are extracted, specifically:
4) stress, abbreviated ZY: define A(t) as the sound wave amplitude at time t. If there is an interval with A(t) < ΔA for t ∈ [t1, t2], where ΔA is selected according to the equipment error (typically less than 1 dB) and t2 - t1 > 0.01 s, then the stress is at time t2; otherwise, the stress of the English word voice is at time t = 0;
5) tone, abbreviated YD: the tone describes the intonation of the word pronunciation and is characterized by at least two parameters: the variation of the sound wave amplitude with time, Amax(t), and the frequency curve f(t) of the word pronunciation. First, the peak value at each time in the A(valid) time domain graph is taken to obtain Amax(t). Second, with Δt as the step length, the time domain graph is divided into segments, and the Fourier transform is used to obtain the average frequency f_i of each segment Δt_i; the sequence of f_i then forms the frequency curve f(t). Preferably, Δt = 10 ms. Amax(t) and f(t) are shown in FIGS. 5 and 6;
6) speech rate, abbreviated YS: according to the judgment in step 4), if the stress of the English word is at time t = 0, the speech rate is the duration tT of the effective human voice signal; if the stress is at time t = t2, there are two speech rate parameters: t1 - t0 and tT - t2.
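A short sketch of the stress and speech-rate rules in steps 4) and 6), operating on an amplitude envelope (the envelope input, threshold default and function name are illustrative assumptions; tone extraction is omitted):

```python
import numpy as np

def extract_accent_and_rate(envelope, sample_rate, delta_a=1.0, min_gap=0.01):
    """Find the quiet interval [t1, t2] where A(t) < dA; if it is longer than
    min_gap, the stress is at t2 and there are two speech-rate parameters,
    otherwise the stress is at t = 0 and the rate is the total duration tT."""
    t = np.arange(len(envelope)) / sample_rate
    t_total = t[-1]
    quiet = envelope < delta_a
    best_len, best_end, run = 0, 0, 0
    for i, q in enumerate(quiet):                  # longest run of quiet samples
        run = run + 1 if q else 0
        if run > best_len:
            best_len, best_end = run, i
    if best_len and t[best_end] - t[best_end - best_len + 1] > min_gap:
        t1, t2 = t[best_end - best_len + 1], t[best_end]
        return t2, (t1 - t[0], t_total - t2)       # step 6: t1 - t0 and tT - t2
    return 0.0, (t_total,)                         # stress at t = 0
```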
The parameter information corresponding to the standard English word sounds built into the data processing system likewise comprises: stress, pitch and speech rate.
According to a preferred embodiment of the invention, the pattern recognition method in the data processing module performs pattern recognition analysis on the parameter information extracted in steps 4)-6) and the parameter information of the system's built-in standard, finally determines the degree of fit between the user's pronunciation and the standard English word pronunciation, and gives an evaluation score. The specific method is as follows:
7) establish the system built-in standard: record standard English word pronunciation data, read by professional English teachers according to the CET-4, CET-6 and TEM-8 vocabularies; process the recorded standard English word pronunciation data and extract the key parameters according to steps 1)-6); establish a standard database, storing the English words, their standard pronunciations and the corresponding parameter information in the system database;
8) normalization treatment: the three parameters of stress, tone and speech rate are normalized, and the specific method is as follows:
i. normalization processing of accents:
ZY = t2 / tT (the accent time divided by the total duration of the effective voice signal),
obtaining the accent standard ZY_T;
ii. normalization processing of pitch:
Amax(t) and f(t) are each divided by their maximum values, so that the parameter values lie in [0, 1],
deriving the tone standard YD_T;
iii. normalization processing of speech rate:
YS_1 = (t1 - t0) / tT and YS_2 = (tT - t2) / tT (or YS = tT / tT = 1 when the stress is at t = 0),
obtaining the speech rate standard YS_T;
According to steps i-iii, the parameter information of the system built-in standard is normalized to obtain the standards ZY_T, YD_T and YS_T, which are stored in the system database together with the words, the original pronunciations and the original parameter information;
9) after the collected voice information is processed through steps 1)-6), the corresponding key parameter information is extracted and then normalized according to steps i-iii to obtain the corresponding ZY_R, YD_R and YS_R;
10) fit degree calculation: the deviations between the parameter information of the user's actual English word pronunciation and the standard English word pronunciation parameter information are calculated as follows:
for the stress parameter: this is a constant-value parameter; when t = 0, ZY_R = ZY_T must hold, i.e. the stress deviation E_ZY = 0; when t ≠ 0, the stress deviation E_ZY = |ZY_R - ZY_T|;
For the pitch parameter, dividing the curves shown in fig. 4 and 5 (the system-in standard curve and the actual voice curve) equally by using △ t as a step length of 10ms, and performing pitch deviation calculation for each infinitesimal, then the pitch total deviation is:
E_YD(Amax) = (1/n) Σ_{i=1..n} |Amax_R(t_i) - Amax_T(t_i)|
E_YD(f) = (1/n) Σ_{i=1..n} |f_R(t_i) - f_T(t_i)|, where n is the number of Δt segments;
for the speech rate parameter: this is a constant-value parameter; when t = 0, YS_R = YS_T = 1 must hold, i.e. the speech rate deviation E_YS = 0; when t ≠ 0, the speech rate deviation E_YS = max[|YS_1R - YS_1T|, |YS_2R - YS_2T|];
where YS_1R and YS_2R denote the speech rate normalization results YS_1 and YS_2 of the actual voice when the stress is at time t = t2, and YS_1T and YS_2T denote the corresponding results for the system built-in standard;
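A sketch of the deviation calculation in step 10). The stress and speech-rate deviations follow the stated formulas; for the pitch deviation, the patent's formula is only given as an image, so the mean absolute difference over the Δt segments is used here as an assumed stand-in:

```python
import numpy as np

def fit_deviations(actual, standard):
    """Deviations between normalized actual and standard parameters.
    Each argument is a dict: 'ZY' (scalar), 'Amax' and 'f' (curves sampled on a
    common delta-t grid), 'YS' (tuple of one or two normalized rate values)."""
    e_zy = abs(actual['ZY'] - standard['ZY'])
    # assumed pitch deviation: mean |difference| over the micro-segments
    e_amax = float(np.mean(np.abs(np.asarray(actual['Amax']) - np.asarray(standard['Amax']))))
    e_f = float(np.mean(np.abs(np.asarray(actual['f']) - np.asarray(standard['f']))))
    e_ys = max(abs(a - s) for a, s in zip(actual['YS'], standard['YS']))
    return e_zy, e_amax, e_f, e_ys
```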
11) evaluation judgment: the user's English word pronunciation is scored,
G = E_ZY + E_YD(Amax) + E_YD(f) + E_YS;
The system has preset evaluation levels corresponding to the English word difficulty level library; for example, the CET-4, CET-6 and TEM-8 word libraries of the invention correspond respectively as follows:
the CET-4 (College English Test Band 4) vocabulary corresponds to "difficulty level 1";
the CET-6 (College English Test Band 6) vocabulary corresponds to "difficulty level 2";
the TEM-8 (Test for English Majors Band 8) vocabulary corresponds to "difficulty level 3";
Of course, the invention is not limited to determining the difficulty level of English words in this way; for example, words may be classified by pronunciation habit or number of syllables. The classification method itself is not part of the content to be protected, but for word databases of any difficulty level, the system and method of the invention can be used to train the user's English word pronunciation efficiently.
If the evaluation score does not reach the corresponding preset evaluation level, the system automatically lowers the difficulty level and retests until the preset evaluation level of the system is reached.
According to a preferred embodiment of the present invention, the criteria of the preset evaluation level are as follows:
(The criteria table is reproduced only as an image in the original publication.)
the technical advantages of the invention are as follows:
1. Aiming at the characteristics of English pronunciation, the invention uses computer technology to extract the characteristic parameters of stress, tone and speech rate of word pronunciation, and analyzes the degree of fit between the actual human pronunciation and the system's built-in standard by methods such as normalization and pattern recognition. The invention has the advantages of a high word pronunciation feature recognition rate and high computer processing efficiency.
2. Using a human-computer interaction system, the invention realizes an English word pronunciation learning system integrating voice acquisition, real-time comparative analysis and evaluation judgment. It can efficiently evaluate the accuracy of students' English pronunciation and give quantitative evaluation results, helping students practice, improve and perfect their pronunciation in a targeted manner.
Drawings
FIG. 1 is a schematic diagram of the English word pronunciation learning system based on pattern recognition according to the present invention;
FIG. 2 is a schematic diagram of the working principle of the English word pronunciation learning system based on pattern recognition according to the present invention;
FIG. 3 is a time domain diagram of the original signal and the noise-reduced signal collected by the system of the present invention;
FIG. 4 is a frequency domain plot of the original signal of the present invention;
FIG. 5 is a graph of the amplitude of sound waves of the present invention over time;
FIG. 6 is a graph of the average frequency of the present invention over time;
FIG. 7 is a graph of the sound wave amplitude of the word "discovery" over time in the system of the present invention;
FIG. 8 is a graph of the average frequency of the word "discovery" over time in the system of the present invention.
Detailed Description
The invention is described in detail below with reference to the following examples and the accompanying drawings of the specification, but is not limited thereto.
As shown in fig. 1-8.
Example:
An English word pronunciation learning system based on pattern recognition, comprising: a human-computer interaction system and a data processing system. The human-computer interaction system is used for collecting the user's English word voice and sending interaction instructions to the user, for example: "please repeat the word"; it is also used for text interaction through the display screen. The human-computer interaction system comprises: a voice input end, a voice output end and a display screen.
The display screen prompts the words. Multiple sets of words of different standard difficulties are built into the system, and students taking the test for the first time can select the difficulty standard according to their own level.
The difficulty is classified into 3 levels (1, 2 and 3) in order from easy to difficult. Level 1 corresponds to CET-4 difficulty, with the selected words, phrases and sentences drawn from the CET-4 category; level 2 corresponds to CET-6 difficulty, with material drawn from the CET-6 category; level 3 corresponds to TEM-8 difficulty, with material drawn from the TEM-8 category.
Meanwhile, audio is input through the voice input end: the student clicks the corresponding word, phrase or sentence according to the display prompt and reads it aloud into the system microphone to complete the voice data input.
According to the frequency range of the human voice, a high-speed data acquisition card with a sampling frequency of 100 kHz is selected to acquire the time domain signal A(t) of the voice, as shown in FIG. 3(a).
Taking the word "discovery" as an example, difficulty level 1 is selected for testing; the collected signal, the processed signal and the corresponding phonetic transcription are analyzed as shown in FIG. 3.
Extracting key parameters:
① Stress:
Actual voice: t0 = 0, t1 = 325 ms, t2 = 894 ms, tT = 2216 ms. A(t1~t2) = 0.32 dB < ΔA = 1 dB, and t2 - t1 = 894 ms - 325 ms = 569 ms > 0.01 s, so ZY_R = 0.894 s.
② tone:
the actual human voice is shown in fig. 5 and 6.
③ speech rate:
Actual voice: t1 - t0 = 0.325 s, tT - t2 = 1.322 s.
Pattern recognition:
① System built-in standard: t0 = 0, t1 = 308 ms, t2 = 869 ms, tT = 2149 ms.
Stress: ZY_T = 0.869 s.
Tone: as shown in FIGS. 7 and 8.
Speech rate: t1 - t0 = 0.308 s, tT - t2 = 1.28 s.
② normalization processing:
the system built-in standard:
i. stress:
ZY_T = t2 / tT = 0.869 s / 2.149 s ≈ 0.404
ii. tone: the normalization uniformly converts the parameter values to the range [0, 1], i.e. the maximum value becomes 1; the conversion does not affect the shape of the curves, as shown in FIGS. 7 and 8;
iii. speech rate:
YS_1T = (t1 - t0) / tT = 0.308 / 2.149 ≈ 0.143; YS_2T = (tT - t2) / tT = 1.28 / 2.149 ≈ 0.596
actual human voice:
i. stress:
ZY_R = t2 / tT = 0.894 s / 2.216 s ≈ 0.403
ii. tone: the normalization uniformly converts the parameter values to the range [0, 1], i.e. the maximum value becomes 1; the conversion does not affect the shape of the curves, as shown in FIGS. 5 and 6;
iii. speech rate:
YS_1R = (t1 - t0) / tT = 0.325 / 2.216 ≈ 0.147; YS_2R = (tT - t2) / tT = 1.322 / 2.216 ≈ 0.597
③ Fit degree calculation
i. stress: deviation E_ZY = |ZY_R - ZY_T| = |0.403 - 0.404| = 0.001;
ii. tone: E_YD(Amax) = 0.028, E_YD(f) = 0.033;
iii. speech rate: deviation E_YS = max[|YS_1R - YS_1T|, |YS_2R - YS_2T|] = 0.003.
④ Evaluation judgment: evaluation score
G = E_ZY + E_YD(Amax) + E_YD(f) + E_YS = 0.001 + 0.028 + 0.033 + 0.003 = 0.065.
Result: pass.
Evaluation result: G < 0.1, meeting the evaluation criterion of the selected difficulty level 1. The system gives the comparison data between the key sound parameters of the student's pronunciation and the standard sound parameters, i.e. the deviations E_ZY = 0.001, E_YD(Amax) = 0.028, E_YD(f) = 0.033 and E_YS = 0.003. These data can be stored as reference material for subsequent research, providing data support and technical backing for future system upgrades; on this basis, more intuitive or quantitative practice-effect prompts can be designed, giving users deeper insight into their own pronunciation and helping them further improve and perfect it.
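The arithmetic of this example can be cross-checked in a few lines; the normalizations ZY = t2/tT and YS = ((t1 - t0)/tT, (tT - t2)/tT) are inferred here from the reported values (0.403, 0.404, 0.003), since the formula images are not reproduced in the text:

```python
# Worked example for the word "discovery" (times in seconds, from the text).
t1_r, t2_r, tT_r = 0.325, 0.894, 2.216   # actual voice
t1_s, t2_s, tT_s = 0.308, 0.869, 2.149   # system built-in standard

zy_r, zy_s = t2_r / tT_r, t2_s / tT_s    # inferred normalization: ZY = t2 / tT
e_zy = abs(zy_r - zy_s)

ys_r = (t1_r / tT_r, (tT_r - t2_r) / tT_r)   # inferred: YS = ((t1-t0)/tT, (tT-t2)/tT)
ys_s = (t1_s / tT_s, (tT_s - t2_s) / tT_s)
e_ys = max(abs(a - b) for a, b in zip(ys_r, ys_s))

print(round(zy_r, 3), round(zy_s, 3), round(e_zy, 3), round(e_ys, 3))
# -> 0.403 0.404 0.001 0.003, matching the deviations reported in the example
```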

Claims (4)

1. An English word pronunciation learning system based on pattern recognition, comprising: a human-computer interaction system and a data processing module;
the human-computer interaction system is used for acquiring the user's English word voice, sending interaction instructions to the user and performing text interaction through the display screen; the human-computer interaction system comprises: a voice input end, a voice output end and a display screen;
the method for preprocessing the collected English single-word voice data in the data processing module comprises a noise reduction processing step and a key parameter extraction step:
the method comprises the following steps of carrying out noise reduction treatment for filtering environmental noise, and carrying out noise reduction treatment in a filtering mode according to the frequency range of the voice of 80-1000Hz to extract effective voice, wherein the specific method comprises the following steps:
1) convert the collected English word voice data into a sound wave time domain signal A(t), and perform a Fourier transform on A(t) to convert it into the frequency domain signal F(f);
2) screen the effective human voice F(valid) according to f(low) ≤ f ≤ f(high), where f(low) = 80 Hz is the lower limit and f(high) = 1000 Hz the upper limit of the effective human voice frequency;
3) perform an inverse Fourier transform on F(valid) to obtain the time domain graph of the effective human voice signal, with time domain range t0 ~ tT;
the key parameter extraction is as follows: on the basis of the effective human voice signal, i.e. the time domain graph A(valid) and the frequency domain graph F(valid), at least three key sound parameters (stress, tone and speech rate) are extracted, specifically:
4) stress, abbreviated ZY: define A(t) as the sound wave amplitude at time t. If there is an interval with A(t) < ΔA for t ∈ [t1, t2], where ΔA is selected according to the equipment error and t2 - t1 > 0.01 s, then the stress is at time t2; otherwise, the stress of the English word voice is at time t = 0;
5) tone, abbreviated YD: characterized by at least two parameters: the variation of the sound wave amplitude with time, Amax(t), and the frequency curve f(t) of the word pronunciation. First, the peak value at each time in the A(valid) time domain graph is taken to obtain Amax(t). Second, with Δt as the step length, the time domain graph is divided into segments, and the Fourier transform is used to obtain the average frequency f_i of each segment Δt_i; the sequence of f_i then forms the frequency curve f(t);
6) speech rate, abbreviated YS: according to the judgment in step 4), if the stress of the English word is at time t = 0, the speech rate is the duration tT of the effective human voice signal; if the stress is at time t = t2, there are two speech rate parameters: t1 - t0 and tT - t2;
the parameter information corresponding to the standard English word sounds built into the data processing module likewise comprises: stress, pitch and speech rate.
2. The pattern-recognition-based English word pronunciation learning system according to claim 1, wherein the data processing module is loaded with a pattern recognition method: the parameter information extracted in steps 4)-6) and the parameter information of the system's built-in standard are subjected to pattern recognition analysis, finally determining the degree of fit between the user's pronunciation and the standard English word pronunciation and giving an evaluation score, specifically:
7) establish the system built-in standard: record standard English word pronunciation data, and process it and extract the key parameters according to steps 1)-6); establish a standard database, storing the English words, their standard pronunciations and the corresponding parameter information in the system database;
8) normalization treatment: the three parameters of stress, tone and speech rate are normalized, and the specific method is as follows:
i. normalization processing of accents:
ZY = t2 / tT (the accent time divided by the total duration of the effective voice signal),
obtaining the accent standard ZY_T;
ii. normalization processing of pitch:
Amax(t) and f(t) are each divided by their maximum values, so that the parameter values lie in [0, 1],
deriving the tone standard YD_T;
iii. normalization processing of speech rate:
YS_1 = (t1 - t0) / tT and YS_2 = (tT - t2) / tT (or YS = tT / tT = 1 when the stress is at t = 0),
obtaining the speech rate standard YS_T;
according to steps i-iii, the parameter information of the system built-in standard is normalized to obtain the standards ZY_T, YD_T and YS_T, which are stored in the system database together with the words, the original pronunciations and the original parameter information;
9) after the collected voice information is processed through steps 1)-6), the corresponding key parameter information is extracted and then normalized according to steps i-iii to obtain the corresponding ZY_R, YD_R and YS_R;
10) fit degree calculation: the deviations between the parameter information of the user's actual English word pronunciation and the standard English word pronunciation parameter information are calculated as follows:
for the stress parameter: this is a constant-value parameter; when t = 0, ZY_R = ZY_T must hold, i.e. the stress deviation E_ZY = 0; when t ≠ 0, the stress deviation E_ZY = |ZY_R - ZY_T|;
For the pitch parameter, using △ t-10 ms as a step, performing pitch deviation calculation for each element, and then the pitch total deviation:
E_YD(Amax) = (1/n) Σ_{i=1..n} |Amax_R(t_i) - Amax_T(t_i)|
E_YD(f) = (1/n) Σ_{i=1..n} |f_R(t_i) - f_T(t_i)|, where n is the number of Δt segments;
for the speech rate parameter: this is a constant-value parameter; when t = 0, YS_R = YS_T = 1 must hold, i.e. the speech rate deviation E_YS = 0; when t ≠ 0, the speech rate deviation E_YS = max[|YS_1R - YS_1T|, |YS_2R - YS_2T|];
where YS_1R and YS_2R denote the speech rate normalization results YS_1 and YS_2 of the actual voice when the stress is at time t = t2, and YS_1T and YS_2T denote the corresponding results for the system built-in standard;
11) evaluation judgment: the user's English word pronunciation is scored,
G = E_ZY + E_YD(Amax) + E_YD(f) + E_YS;
an evaluation level corresponding to the English word difficulty level library is preset in the system;
if the evaluation score does not reach the corresponding preset evaluation level, the system automatically lowers the difficulty level and retests until the preset evaluation level of the system is reached.
3. The pattern recognition-based english word pronunciation learning system according to claim 2, wherein the criteria of the preset evaluation level are as follows:
(The criteria table is reproduced only as an image in the original publication.)
4. The pattern-recognition-based English word pronunciation learning system according to claim 1, wherein in step 5), Δt = 10 ms.
CN201910640573.3A 2019-07-16 2019-07-16 English word pronunciation learning system based on pattern recognition Active CN110246514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910640573.3A CN110246514B (en) 2019-07-16 2019-07-16 English word pronunciation learning system based on pattern recognition

Publications (2)

Publication Number Publication Date
CN110246514A CN110246514A (en) 2019-09-17
CN110246514B true CN110246514B (en) 2020-06-16

Family

ID=67892370

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910640573.3A Active CN110246514B (en) 2019-07-16 2019-07-16 English word pronunciation learning system based on pattern recognition

Country Status (1)

Country Link
CN (1) CN110246514B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111739365A (en) * 2020-07-15 2020-10-02 郑州玛源网络科技有限公司 English learning platform based on big data
CN116109455B (en) * 2023-03-09 2023-06-30 电子科技大学成都学院 Language teaching auxiliary system based on artificial intelligence

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714727A (en) * 2012-10-06 2014-04-09 南京大五教育科技有限公司 Man-machine interaction-based foreign language learning system and method thereof
CN106205634A (en) * 2016-07-14 2016-12-07 东北电力大学 A kind of spoken English in college level study and test system and method
CN107221318B (en) * 2017-05-12 2020-03-31 广东外语外贸大学 English spoken language pronunciation scoring method and system
KR101943520B1 (en) * 2017-06-16 2019-01-29 한국외국어대학교 연구산학협력단 A new method for automatic evaluation of English speaking tests
CN107945625A (en) * 2017-11-20 2018-04-20 陕西学前师范学院 A kind of pronunciation of English test and evaluation system

Also Published As

Publication number Publication date
CN110246514A (en) 2019-09-17

Similar Documents

Publication Publication Date Title
US7280964B2 (en) Method of recognizing spoken language with recognition of language color
US7299188B2 (en) Method and apparatus for providing an interactive language tutor
Sroka et al. Human and machine consonant recognition
US7962327B2 (en) Pronunciation assessment method and system based on distinctive feature analysis
KR20180137207A (en) A new method for automatic evaluation of English speaking tests
CN109727608B (en) Chinese speech-based ill voice evaluation system
CN101751919A (en) Spoken Chinese stress automatic detection method
CN111612352A (en) Student expression ability assessment method and device
Ahsiah et al. Tajweed checking system to support recitation
US20060053012A1 (en) Speech mapping system and method
CN110246514B (en) English word pronunciation learning system based on pattern recognition
CN116206496B (en) Oral english practice analysis compares system based on artificial intelligence
CN112767961B (en) Accent correction method based on cloud computing
WO1999013446A1 (en) Interactive system for teaching speech pronunciation and reading
Tverdokhleb et al. Implementation of accent recognition methods subsystem for eLearning systems
Wang Detecting pronunciation errors in spoken English tests based on multifeature fusion algorithm
Wang A Machine Learning Assessment System for Spoken English Based on Linear Predictive Coding
Zheng An analysis and research on Chinese college students’ psychological barriers in oral English output from a cross-cultural perspective
Wang et al. Putonghua proficiency test and evaluation
Barczewska et al. Detection of disfluencies in speech signal
Sigurgeirsson et al. Manual speech synthesis data acquisition-from script design to recording speech
CN111210845A (en) Pathological voice detection device based on improved autocorrelation characteristics
Lin et al. Native Listeners' Shadowing of Non-native Utterances as Spoken Annotation Representing Comprehensibility of the Utterances.
CN111508523A (en) Voice training prompting method and system
Duan et al. An English pronunciation and intonation evaluation method based on the DTW algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant