CN101290720A - Visualized pronunciation teaching method and apparatus - Google Patents

Visualized pronunciation teaching method and apparatus

Info

Publication number
CN101290720A
Authority
CN
China
Prior art keywords
pronunciation
information
elementary cell
video
variation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CNA2008101151145A
Other languages
Chinese (zh)
Other versions
CN101290720B (en)
Inventor
李伟
刘凤楼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Xunfei Information Technology Co ltd
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN2008101151145A
Publication of CN101290720A
Application granted
Publication of CN101290720B
Legal status: Expired - Fee Related
Anticipated expiration

Landscapes

  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a visualized pronunciation teaching method. The method comprises the steps of: receiving basic pronunciation unit information input by a user; looking up the audio pronunciation information corresponding to the received basic unit information according to the mapping between each piece of basic unit information and its corresponding audio pronunciation information; looking up the first video pronunciation information corresponding to the received basic unit information according to the mapping between each piece of basic unit information and its corresponding first video pronunciation information; and synchronously playing the retrieved audio pronunciation information and first video pronunciation information. The invention also discloses a visualized pronunciation teaching device. With this scheme, learners not only hear the correct pronunciation of each basic pronunciation unit but also see intuitively, during correct pronunciation, the dynamic movement of each articulator and the changing strength of the airflow, effectively improving the scientific rigor, intuitiveness and interest of pronunciation teaching.

Description

Visualized pronunciation teaching method and device
Technical field
The present invention relates to the field of computer-aided instruction, and in particular to a visualized pronunciation teaching method and device.
Background technology
In recent years, standard Chinese education in bilingual programs in minority areas and in dialect areas, as well as Chinese learning by foreign students, has steadily grown, and teaching of Hanyu Pinyin is particularly important within it.
Current Pinyin teaching mainly works by showing a static articulation diagram of each basic pronunciation unit, or by showing a video recording of the mouth shape while each unit is pronounced, and explaining on that basis. But the articulation diagram of a basic unit shows only a single-frame picture of lip and tongue position, and the mouth-shape video shows only the dynamic trajectory of the lips; neither form of teaching dynamically shows the positional changes of every articulator throughout the whole pronunciation process, so neither provides vivid, visualized assistance for pronunciation teaching. Moreover, current pronunciation teaching depends on demonstration and explanation by Chinese-pronunciation teachers of varying experience, lacks standard-pronunciation demonstration and guidance, and offers no contrastive teaching between similar, easily confused pronunciations.
Summary of the invention
The invention provides a visualized pronunciation teaching method, to solve the prior-art problem that the pronunciation teaching process cannot dynamically show the positional changes of every articulator throughout pronunciation.
Accordingly, the invention also provides a visualized pronunciation teaching device.
The invention provides following technical scheme:
A visualized pronunciation teaching method comprises the steps of: receiving basic pronunciation unit information input by a user; according to the mapping between each piece of basic unit information and its corresponding audio pronunciation information, looking up the audio pronunciation information corresponding to the received basic unit information; according to the mapping between each piece of basic unit information and its corresponding first video pronunciation information, looking up the first video pronunciation information corresponding to the received basic unit information, wherein the first video pronunciation information presents the dynamic changes of each articulator while the basic unit contained in the unit information is pronounced correctly, and during that correct pronunciation the variation over time of the retrieved audio pronunciation information corresponds to the variation over time of the first video pronunciation information; and synchronously playing the retrieved audio pronunciation information and first video pronunciation information.
A visualized pronunciation teaching device comprises: a receiving unit for receiving basic pronunciation unit information input by a user; a storage unit storing the mapping between each piece of basic unit information and its corresponding audio pronunciation information, and the mapping between each piece of basic unit information and its corresponding first video pronunciation information; a first lookup unit for looking up, in the mappings stored by the storage unit, the audio pronunciation information corresponding to the basic unit information received by the receiving unit; a second lookup unit for looking up, in the stored mappings, the first video pronunciation information corresponding to the received basic unit information, wherein the first video pronunciation information presents the dynamic changes of each articulator during correct pronunciation of the basic unit contained in the unit information, and during that correct pronunciation the variation over time of the retrieved audio pronunciation information corresponds to the variation over time of the retrieved first video pronunciation information; and a first play unit for synchronously playing the audio pronunciation information found by the first lookup unit and the first video pronunciation information found by the second lookup unit.
The beneficial effects of the invention are as follows:
By receiving the basic pronunciation unit information input by a user, looking up the corresponding audio pronunciation information according to the audio mapping, looking up the corresponding first video pronunciation information according to the video mapping, and playing the two synchronously, the technical scheme lets the learner not only hear the correct pronunciation of each basic unit but also see intuitively, during correct pronunciation, the dynamic movement of each articulator and the changing strength of the airflow, effectively improving the scientific rigor, intuitiveness and interest of pronunciation teaching.
Description of drawings
Fig. 1 is a schematic flow chart of the visualized pronunciation teaching method in an embodiment of the invention;
Fig. 2 is a schematic structural diagram of the visualized pronunciation teaching device in an embodiment of the invention.
Embodiment
An embodiment of the invention proposes playing, in synchrony, the standard-pronunciation audio file and the correct-pronunciation animation file corresponding to each piece of basic pronunciation unit information, so that the learner not only hears the correct pronunciation of each basic unit but also sees intuitively the dynamic movement of each articulator and the changing strength of the airflow during correct pronunciation, improving the scientific rigor, intuitiveness and interest of pronunciation teaching.
The embodiment of the invention is described in detail below with reference to the drawings.
As shown in Fig. 1, the flow of the visualized pronunciation teaching method in this embodiment comprises the following steps:
Step 101: build the basic pronunciation unit information library.
A piece of basic pronunciation unit information comprises the basic unit itself together with the age and gender of the speaker who pronounces it, and each piece of unit information is given a unique number.
In Pinyin teaching, the basic pronunciation units comprise 22 initials, 39 finals, 416 toneless syllables and 1333 tonal syllables. The speakers pronouncing each basic unit are divided by age bracket, for example into the three brackets of 7, 18 and 35 years, where the 7-year bracket represents ages 3 to 12, the 18-year bracket ages 13 to 24, and the 35-year bracket ages 25 to 60.
Taking the basic unit "a" as an example, the pieces of basic unit information containing "a" are shown in Table 1:
Table 1:

  Number  Basic unit  Age  Gender
  1       a           7    female
  2       a           7    male
  3       a           18   female
  4       a           18   male
  5       a           35   female
  6       a           35   male
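As an illustrative sketch (not part of the patent text), the numbered records of Table 1 can be modelled as a small lookup table; the structure and names here are hypothetical:

```python
# Sketch of the basic-unit information records of step 101 (Table 1):
# each record pairs a basic pronunciation unit with the speaker's age
# bracket and gender, and carries a unique number.
UNITS = [
    (1, "a", 7, "female"), (2, "a", 7, "male"),
    (3, "a", 18, "female"), (4, "a", 18, "male"),
    (5, "a", 35, "female"), (6, "a", 35, "male"),
]

def unit_number(unit, age, gender):
    """Return the unique number of the record matching the query."""
    for number, u, a, g in UNITS:
        if (u, a, g) == (unit, age, gender):
            return number
    raise KeyError((unit, age, gender))
```

This number is the key that later ties each record to its audio and animation files.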
Step 102: build the audio pronunciation information library.
The audio pronunciation information library consists of the audio pronunciation information corresponding to each basic pronunciation unit: each piece of basic unit information corresponds to one audio pronunciation file, and that file carries the same number as the unit-information record. The pronunciation of each basic unit is recorded for every age bracket and gender, and the formants of the recordings are analysed statistically. For formant analysis, each recording is divided into K segments by non-overlapping rectangular windows of length N = 960 samples (0.12 s); each segment is written s(n) (0 ≤ n < N) and zero-padded:

  S_FFT(n) = s(n) for 0 ≤ n < N, and S_FFT(n) = 0 for N ≤ n < N_FFT, where N_FFT = 1024

Let |X(ω)|, ω_L ≤ ω ≤ ω_H, be the short-time spectrum of an input digital speech frame, where ω_L and ω_H are the edge frequencies of the frame; applying the FFT to S_FFT(n) gives:

  X(ω) = FFT{ S_FFT(n) }

|X(ω)| is divided by frequency ω into M subbands |X_1(ω)| (ω_L^1 ≤ ω ≤ ω_H^1), |X_2(ω)| (ω_L^2 ≤ ω ≤ ω_H^2), ..., |X_M(ω)| (ω_L^M ≤ ω ≤ ω_H^M), where ω_L^i and ω_H^i are the edge frequencies of the i-th subband and ω_L ≤ ω_L^i ≤ ω_H^i ≤ ω_H (1 ≤ i ≤ M).

Let E_j^i = |X_j^i(ω)|^2 denote the energy of the i-th subband of the j-th segment. The average energy of each subband is then:

  E^i = (1/K) Σ_{j=1..K} E_j^i, for 1 ≤ i ≤ M

The three subbands with the largest average energies are recorded as the first, second and third formants. For all recordings of the same basic unit, gender and age bracket, the averages of the first, second and third formants are computed, and the recording whose formants are closest to those averages is chosen as the audio pronunciation file for that unit, gender and age.
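A minimal sketch of the subband-energy formant picking described above, assuming an 8 kHz sampling rate (so that N = 960 samples is 0.12 s) and an illustrative subband count M = 16, which the text does not fix:

```python
import numpy as np

def formant_bands(signal, n=960, n_fft=1024, m=16):
    """Pick the three highest-energy subbands (the scheme's proxy for
    the first three formants): split the signal into K non-overlapping
    rectangular windows of n samples, zero-pad each to n_fft, take the
    FFT magnitude, split it into m equal-width subbands, and average
    each subband's energy over all K segments."""
    k = len(signal) // n
    band_energy = np.zeros(m)
    for j in range(k):
        seg = signal[j * n:(j + 1) * n]
        spec = np.abs(np.fft.rfft(seg, n_fft))   # rfft zero-pads to n_fft
        bands = np.array_split(spec, m)          # m roughly equal subbands
        band_energy += np.array([np.sum(b ** 2) for b in bands])
    band_energy /= k
    # Indices of the three most energetic subbands, strongest first.
    return list(np.argsort(band_energy)[-3:][::-1])
```

Averaging these band picks across all recordings of the same unit, gender and age bracket, and keeping the recording closest to the average, would complete the selection described in the text.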
The audio pronunciation files corresponding to the unit-information records of Table 1 are shown in Table 2:
Table 2:

  Number  Audio pronunciation file
  1       a-7-female.wav
  2       a-7-male.wav
  3       a-18-female.wav
  4       a-18-male.wav
  5       a-35-female.wav
  6       a-35-male.wav

The audio pronunciation files may be, but are not limited to being, saved in WAV format.
Step 103: build the first video pronunciation information library.
The first video pronunciation information library consists of correct-pronunciation animation files, each formed by the dynamic movement of the articulators and the variation of airflow strength while the basic unit contained in a unit-information record is pronounced correctly. Each piece of unit information corresponds to one first video pronunciation information item (i.e. one correct-pronunciation animation file), which carries the same number as the record.
Speech signals are short-time stationary: because of their physical properties, the articulators hold their positions stably over spans of a few tens of milliseconds. To show more accurately the continuous positional trajectory of each articulator during correct pronunciation, an articulator position drawing of each basic unit is made every 40 milliseconds, covering the positional changes of the nasal cavity, upper lip, lower lip, upper gums, lower gums, front hard palate, middle hard palate, back hard palate, soft palate, uvula, the front, middle and back of the tongue tip, the front and back of the tongue surface, and the vocal cords, together with the variation of airflow strength; these frames form the correct-pronunciation animation file for the record.
Taking the basic unit "o" as an example: in the initial state of the articulator animation, the mouth is half-open, the upper lip slightly raised so the front teeth show slightly, and the lower jaw essentially still. The animation then depicts the back of the tongue surface bulging gradually toward the soft palate, the back of the tongue gradually retracting, and the tongue position rising to mid-high; meanwhile it shows the lips moving from spread to slowly rounding, the glottis opening wide from closed, the vocal cords going from rest to vibration, and the air exhaled from the lungs exiting through the mouth.
The correct-pronunciation animation file of each record is saved in correspondence with the record's audio pronunciation file; that is, while the basic unit is pronounced correctly, the animation's variation over time corresponds to the audio file's variation over time.
Concretely, the animation files are saved as follows: the time points of the initial and final tongue positions of each animation are aligned with the start and end times of the corresponding audio file; then one or more "climax" time points of the animation are aligned with the corresponding climax times of the audio file (for the same basic unit, several climax points may be chosen); between the aligned time points, display frames of the articulators' movement are interpolated; and on that basis the durations of all intra-oral movements are stretched to two to three times their length, while the frontal mouth shape keeps exact synchrony with the audio file's timeline and is not stretched.
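The alignment of animation key points to the audio timeline described above amounts to a piecewise-linear time warp; a hedged sketch under that reading, with all names illustrative:

```python
import numpy as np

def warp_frame_times(anim_keys, audio_keys, frame_times):
    """Map animation frame timestamps onto the audio timeline by
    piecewise-linear interpolation between aligned key points: the
    initial tongue position, any number of climax points, and the
    final tongue position. The 2-3x stretching of intra-oral motion
    would be applied to audio_keys beforehand; the frontal mouth
    shape would keep the unstretched timeline."""
    return np.interp(frame_times, anim_keys, audio_keys)
```

For example, if the animation's midpoint key (1.0 s of 2.0 s) aligns with the audio's 0.5 s climax, a frame at 0.5 s lands at 0.25 s on the audio timeline.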
The correct-pronunciation animation files corresponding to the unit-information records of Table 1 are shown in Table 3:
Table 3:

  Number  Correct-pronunciation animation file
  1       a-7-female.swf
  2       a-7-male.swf
  3       a-18-female.swf
  4       a-18-male.swf
  5       a-35-female.swf
  6       a-35-male.swf

The correct-pronunciation animation files may be, but are not limited to being, saved in SWF format.
Step 104: build the second video pronunciation information library.
Based on statistics of the mispronunciations caused by the influence of mother tongue and accent in standard Chinese education in minority areas and dialect areas and in foreign students' Chinese education, a second video pronunciation information library can also be built. It consists of incorrect-pronunciation animation files, each formed by the dynamic movement of the articulators and the variation of airflow strength while the basic unit contained in a unit-information record is mispronounced. Each piece of unit information corresponds to one second video pronunciation information item (i.e. one incorrect-pronunciation animation file), which carries the same number as the record.
The second video library is built in the same way as the first video pronunciation information library described above, so the details are not repeated here.
The incorrect-pronunciation animation file of a record varies over time in correspondence with the record's correct-pronunciation animation file.
The incorrect-pronunciation animation files corresponding to the unit-information records of Table 1 are shown in Table 4:
Table 4:

  Number  Incorrect-pronunciation animation file
  1       a-7-female.tsh
  2       a-7-male.tsh
  3       a-18-female.tsh
  4       a-18-male.tsh
  5       a-35-female.tsh
  6       a-35-male.tsh

The incorrect-pronunciation animation files may be, but are not limited to being, saved in TSH format.
Step 105: in the basic pronunciation unit information library, look up the number of the record matching the unit information input by the user.
For example, if the user inputs the basic unit "a", age 18, gender female, the matching record found in Table 1 is number 3.
Step 106: in the audio pronunciation information library, look up the audio pronunciation file whose number matches the number found in step 105.
For the number 3 found in step 105, the audio pronunciation file found in Table 2 is a-18-female.wav.
Step 107: in the first video pronunciation information library, look up the first video pronunciation file with the same number.
For number 3, the first video pronunciation file found in Table 3 is a-18-female.swf.
Step 108: in the second video pronunciation information library, look up the second video pronunciation file with the same number.
For number 3, the second video pronunciation file found in Table 4 is a-18-female.tsh.
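Steps 105 to 108 are plain keyed lookups: the record number found once in Table 1 retrieves all three files. A minimal sketch using the example of unit "a" (table contents abbreviated, structure illustrative):

```python
# The number shared across Tables 1-4 ties a unit-info record to its
# audio file and its two animation files.
UNIT_TABLE = {("a", 18, "female"): 3, ("a", 18, "male"): 4}   # Table 1
AUDIO_FILES = {3: "a-18-female.wav"}    # Table 2
VIDEO1_FILES = {3: "a-18-female.swf"}   # Table 3: correct-pronunciation animation
VIDEO2_FILES = {3: "a-18-female.tsh"}   # Table 4: incorrect-pronunciation animation

def lookup(unit, age, gender):
    number = UNIT_TABLE[(unit, age, gender)]          # step 105
    return (AUDIO_FILES[number],                      # step 106
            VIDEO1_FILES[number],                     # step 107
            VIDEO2_FILES[number])                     # step 108
```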
Step 109: synchronously play the audio pronunciation file and the first video pronunciation file that were found.
During this synchronized playback, every time point in the displayed pronunciation process of the basic unit has a corresponding, accurate articulator position drawing, precisely showing at that moment the positions of the nasal cavity, upper lip, lower lip, upper gums, lower gums, front hard palate, middle hard palate, back hard palate, soft palate, uvula, the front, middle and back of the tongue tip, the front and back of the tongue surface, and the vocal cords, as well as the airflow strength.
Step 110: synchronously play the audio pronunciation file and the second video pronunciation file that were found.
With this playback, the learner not only studies the correct pronunciation process of each basic unit but can also, through the error-prone pronunciation process presented by the second video pronunciation information, deepen understanding and memory of the correct process by contrastive study.
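Since an articulator drawing is made every 40 milliseconds (step 103), synchronized playback in steps 109 and 110 amounts to pairing each audio instant with the animation frame at the same timestamp. A hedged sketch, with the frame rate derived from that 40 ms spacing and the function name illustrative:

```python
def frame_timestamps(audio_duration, fps=25):
    """Timestamps (in seconds) of the animation frames accompanying an
    audio clip of the given duration, at one frame per 40 ms (25 fps),
    so audio and animation advance in lockstep."""
    count = round(audio_duration * fps)
    return [i / fps for i in range(count)]
```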
The basic pronunciation unit information, audio pronunciation information, first video pronunciation information and second video pronunciation information may be, but are not limited to being, stored in database form.
As the above processing shows, when the scheme of the invention is used for pronunciation teaching, the learner not only hears the correct pronunciation of each basic unit but also sees intuitively the dynamic movement of each articulator and the changing strength of the airflow during correct pronunciation, improving the scientific rigor, intuitiveness and interest of pronunciation teaching.
Accordingly, the present invention also provides a kind of visualized pronunciation instructional device.
As shown in Figure 2, the visualized pronunciation instructional device comprises:
A receiving unit 201 for receiving the basic pronunciation unit information input by the user.
A storage unit 202 storing the mapping between each piece of basic unit information and its corresponding audio pronunciation information, the mapping between each piece of basic unit information and its corresponding first video pronunciation information, and the mapping between each piece of basic unit information and its corresponding second video pronunciation information.
A first lookup unit 203 for looking up, in the mappings stored by storage unit 202, the audio pronunciation information corresponding to the basic unit information received by receiving unit 201.
A second lookup unit 204 for looking up, in the mappings stored by storage unit 202, the first video pronunciation information corresponding to the basic unit information received by receiving unit 201.
A third lookup unit 205 for looking up, in the mappings stored by storage unit 202, the second video pronunciation information corresponding to the basic unit information received by receiving unit 201.
A first play unit 206 for synchronously playing the audio pronunciation information found by the first lookup unit 203 and the first video pronunciation information found by the second lookup unit 204.
A second play unit 207 for synchronously playing the second video pronunciation information found by the third lookup unit 205 and the audio pronunciation information found by the first lookup unit 203.
The first video pronunciation information presents the dynamic changes of each articulator while the basic unit contained in the unit information is pronounced correctly; the second video pronunciation information presents the dynamic changes of each articulator while that basic unit is mispronounced.
During correct pronunciation of the basic unit, the variation over time of the audio pronunciation information found by the first lookup unit 203 corresponds to that of the first video pronunciation information found by the second lookup unit 204, and also to that of the second video pronunciation information found by the third lookup unit 205.
The basic unit information stored by storage unit 202 comprises the basic pronunciation unit together with the age and gender of the speaker who pronounces it. The stored first video pronunciation information is the animation formed by the positional changes of articulators such as the nasal cavity, lips, gums, jaw, tongue and vocal cords, and the variation of airflow strength, during correct pronunciation of the basic unit; the stored second video pronunciation information is the corresponding animation during incorrect pronunciation of the basic unit.
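A hedged structural sketch of the device of Fig. 2; the class and method names are illustrative, not the patent's, and the player is a caller-supplied stand-in for the synchronous play units:

```python
from dataclasses import dataclass

@dataclass
class VisualizedPronunciationDevice:
    """Storage unit 202 holds the three mappings; lookup units 203-205
    query them by record number; play units 206-207 hand the matched
    audio/video pair to a synchronous player."""
    audio_map: dict    # number -> audio pronunciation file  (lookup unit 203)
    video1_map: dict   # number -> correct-pronunciation animation  (unit 204)
    video2_map: dict   # number -> incorrect-pronunciation animation (unit 205)

    def play_correct(self, number, player):
        # First play unit 206: audio with the correct-pronunciation animation.
        player(self.audio_map[number], self.video1_map[number])

    def play_contrast(self, number, player):
        # Second play unit 207: audio with the incorrect-pronunciation animation.
        player(self.audio_map[number], self.video2_map[number])
```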
In addition, the visualized pronunciation teaching method and device provided by the invention are applicable not only to Pinyin teaching but equally to pronunciation teaching of other languages.
Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. If such modifications and variations fall within the scope of the claims of the invention and their technical equivalents, the invention is intended to cover them as well.

Claims (10)

1. A visualized pronunciation teaching method, characterized by comprising the steps of:
receiving basic pronunciation unit information input by a user;
according to the mapping between each piece of basic unit information and its corresponding audio pronunciation information, looking up the audio pronunciation information corresponding to the received basic unit information; and
according to the mapping between each piece of basic unit information and its corresponding first video pronunciation information, looking up the first video pronunciation information corresponding to the received basic unit information, wherein the first video pronunciation information presents the dynamic changes of each articulator during correct pronunciation of the basic unit contained in the unit information, and during that correct pronunciation the variation over time of the retrieved audio pronunciation information corresponds to the variation over time of the first video pronunciation information;
synchronously playing the retrieved audio pronunciation information and first video pronunciation information.
2, the method for claim 1 is characterized in that, also comprises step:
According to the mapping relations of each pronunciation elementary cell information with the corresponding second video pronunciation information, the second video pronunciation information that the pronunciation elementary cell information of searching and receiving is corresponding, wherein the second video pronunciation information is used for presenting the dynamic changing process of pronunciation elementary cell each vocal organs in the incorrect pronunciations process that pronunciation elementary cell information comprises, and in the incorrect pronunciations process of described pronunciation elementary cell, the described audio frequency pronunciation information that finds is along with the variation of time and the described second video pronunciation information are corresponding along with the variation of time;
With described second video pronunciation information and the described audio frequency pronunciation information synchronous playing that finds.
3, the method for claim 1, it is characterized in that, the described first video pronunciation information for the pronunciation elementary cell that comprises in the pronunciation elementary cell information in the orthoepy process, the formed animation information of variation of the change in location of vocal organs and air-flow power.
4. The method of claim 2, characterized in that said second video pronunciation information is the animation information formed by the positional changes of the vocal organs and the variation of airflow strength during incorrect pronunciation of the pronunciation basic unit contained in the pronunciation basic unit information.
5. The method of any one of claims 1 to 4, characterized in that said pronunciation basic unit information comprises the pronunciation basic unit and the age and sex of the person pronouncing that pronunciation basic unit.
6. A visualized pronunciation teaching apparatus, characterized in that it comprises:
a receiving unit, configured to receive pronunciation basic unit information input by a user;
a storage unit, storing the mapping relation between each piece of pronunciation basic unit information and its corresponding audio pronunciation information, and the mapping relation between each piece of pronunciation basic unit information and its corresponding first video pronunciation information;
a first search unit, configured to search, according to the mapping relation between each piece of pronunciation basic unit information and its corresponding audio pronunciation information stored in said storage unit, for the audio pronunciation information corresponding to the pronunciation basic unit information received by said receiving unit;
a second search unit, configured to search, according to the mapping relation between each piece of pronunciation basic unit information and its corresponding first video pronunciation information stored in said storage unit, for the first video pronunciation information corresponding to the pronunciation basic unit information received by said receiving unit;
wherein the first video pronunciation information presents the dynamic change of each vocal organ during correct pronunciation of the pronunciation basic unit contained in the pronunciation basic unit information, and, during correct pronunciation of said pronunciation basic unit, the variation over time of the found audio pronunciation information corresponds to the variation over time of the found first video pronunciation information; and
a first playing unit, configured to play the audio pronunciation information found by said first search unit and the first video pronunciation information found by said second search unit synchronously.
7. The apparatus of claim 6, characterized in that said storage unit further stores the mapping relation between each piece of pronunciation basic unit information and its corresponding second video pronunciation information;
said apparatus further comprises a third search unit, configured to search, according to the mapping relation between each piece of pronunciation basic unit information and its corresponding second video pronunciation information stored in said storage unit, for the second video pronunciation information corresponding to the pronunciation basic unit information received by said receiving unit;
wherein the second video pronunciation information presents the dynamic change of each vocal organ during incorrect pronunciation of the pronunciation basic unit contained in the pronunciation basic unit information, and, during incorrect pronunciation of said pronunciation basic unit, the variation over time of the found audio pronunciation information corresponds to the variation over time of said second video pronunciation information; and
a second playing unit, configured to play the second video pronunciation information found by said third search unit and the audio pronunciation information found by said first search unit synchronously.
8. The apparatus of claim 6, characterized in that the first video pronunciation information stored in said storage unit is the animation information formed by the positional changes of the vocal organs and the variation of airflow strength during correct pronunciation of the pronunciation basic unit contained in the pronunciation basic unit information.
9. The apparatus of claim 6, characterized in that the second video pronunciation information stored in said storage unit is the animation information formed by the positional changes of the vocal organs and the variation of airflow strength during incorrect pronunciation of the pronunciation basic unit contained in the pronunciation basic unit information.
10. The apparatus of any one of claims 6 to 9, characterized in that the pronunciation basic unit information stored in said storage unit comprises the pronunciation basic unit and the age and sex of the person pronouncing that pronunciation basic unit.
CN2008101151145A 2008-06-17 2008-06-17 Visualized pronunciation teaching method and apparatus Expired - Fee Related CN101290720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2008101151145A CN101290720B (en) 2008-06-17 2008-06-17 Visualized pronunciation teaching method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2008101151145A CN101290720B (en) 2008-06-17 2008-06-17 Visualized pronunciation teaching method and apparatus

Publications (2)

Publication Number Publication Date
CN101290720A true CN101290720A (en) 2008-10-22
CN101290720B CN101290720B (en) 2011-08-31

Family

ID=40034961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2008101151145A Expired - Fee Related CN101290720B (en) 2008-06-17 2008-06-17 Visualized pronunciation teaching method and apparatus

Country Status (1)

Country Link
CN (1) CN101290720B (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1707550A (en) * 2005-04-14 2005-12-14 张远辉 Establishment of pronunciation and articalation mouth shape cartoon databank and access method thereof
CN1851779B (en) * 2006-05-16 2010-04-14 黄中伟 Multi-language available deaf-mute language learning computer-aid method
CN101105939B (en) * 2007-09-04 2012-07-18 安徽科大讯飞信息科技股份有限公司 Sonification guiding method

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930747A (en) * 2010-07-30 2010-12-29 四川微迪数字技术有限公司 Method and device for converting voice into mouth shape image
CN103745423B (en) * 2013-12-27 2016-08-24 浙江大学 A kind of shape of the mouth as one speaks teaching system and teaching method
CN106354767A (en) * 2016-08-19 2017-01-25 语当先有限公司 Practicing system and method
CN107591163A (en) * 2017-08-17 2018-01-16 天津快商通信息技术有限责任公司 One kind pronunciation detection method and device, voice category learning method and system
CN107591163B (en) * 2017-08-17 2022-02-01 厦门快商通科技股份有限公司 Pronunciation detection method and device and voice category learning method and system
CN108447497A (en) * 2018-03-07 2018-08-24 陈勇 A method of independently going out oneself sounding in noisy environment
CN113051985A (en) * 2019-12-26 2021-06-29 深圳云天励飞技术有限公司 Information prompting method and device, electronic equipment and storage medium
CN111554318A (en) * 2020-04-27 2020-08-18 天津大学 Method for realizing mobile phone end pronunciation visualization system
CN111554318B (en) * 2020-04-27 2023-12-05 天津大学 Method for realizing mobile phone terminal pronunciation visualization system
CN113593374A (en) * 2021-07-06 2021-11-02 浙江大学 Multi-modal speech rehabilitation training system combining oral muscle training
CN114937379A (en) * 2022-05-17 2022-08-23 北京语言大学 Construction method of interactive Chinese pronunciation teaching system based on virtual reality technology

Also Published As

Publication number Publication date
CN101290720B (en) 2011-08-31

Similar Documents

Publication Publication Date Title
CN101290720B (en) Visualized pronunciation teaching method and apparatus
CN105551328A (en) Language teaching coaching and study synchronization integration system on the basis of mobile interaction and big data analysis
Edo-Marzá Pronunciation and comprehension of oral English in the English as a foreign language class: Key aspects, students’ perceptions and proposals
Engwall et al. Designing the user interface of the computer-based speech training system ARTUR based on early user tests
Bolanos et al. Automatic assessment of expressive oral reading
Akram et al. Problems in learning and teaching English pronunciation in Pakistan
Beckman et al. Methods for eliciting, annotating, and analyzing databases for child speech development
Bellés-Fortuño et al. Teaching English pronunciation with OERs: the case of Voki
Suryatiningsih A study on the students' ability in pronouncing diphthongs at STKIP PGRI Pasuruan
Arıkan et al. Pre-service English language teachers’ problematic sounds
Demenko et al. The use of speech technology in foreign language pronunciation training
Czap et al. Features and results of a speech improvement experiment on hard of hearing children
Sanjadireja Subtitle in teaching pronunciation with video
Widyaningsih Improving Pronunciation Ability by Using Animated Films
Songkhro et al. Effectiveness of Using Animated Videos via Google Sites in Enhancing Socio-culture of Native English-Speaking Countries
Leng et al. A study on the hierarchy of difficulty setting of the consonant allophones in Korean through recording and listening test
Nishio et al. Improving fossilized English pronunciation by simultaneously viewing a video footage of oneself on an ICT self-learning system
Yu A Model for Evaluating the Quality of English Reading and Pronunciation Based on Computer Speech Recognition
Howard Raising public awareness of acoustic principles using voice and speech production
He Design of a Speaking Training System for English Speech Education using Speech Recognition Technology
Sodikin The use of English Podcast Video in practicing pronunciation skill of stressed word in e-Learning class.
Utari et al. English pronunciation by 3 years old 6 months child influenced by YouTube channel Coco Melon
Saputri et al. The Students’ Responses of Video Recording and E-Sorogan Learning Methods to Improve Pronunciation
Wahyuni et al. Identification the students’ pronunciation problems in pronouncing–ed ending at English Study Program of IAIN Bone
Brubaker et al. Fundamental frequency characteristics of modal and vocal fry registers

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20090206

Address after: Room 602, Tower B, Jinma Building, No. 38 Xueqing Road, Haidian District, Beijing 100083

Applicant after: Li Wei

Address before: Room 602, Tower B, Jinma Building, No. 38 Xueqing Road, Haidian District, Beijing 100083

Applicant before: Li Wei

Co-applicant before: Liu Fenglou

ASS Succession or assignment of patent right

Owner name: BEIJING TAILI TONGLIAN TECHNOLOGY DEVELOPMENT CO.,

Free format text: FORMER OWNER: LI WEI

Effective date: 20110104

Owner name: BEIJING ZHICHENG ZHUOSHENG TECHNOLOGY DEVELOPMENT

Free format text: FORMER OWNER: BEIJING TAILI TONGLIAN TECHNOLOGY DEVELOPMENT CO., LTD.

Effective date: 20110104

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100083 602, TOWER B, JINMA BUILDING, NO.38, XUEQING ROAD, HAIDIAN DISTRICT, BEIJING TO: 100083 601-603, TOWER B, JINMA BUILDING, NO.38, XUEQING ROAD, HAIDIAN DISTRICT, BEIJING

TA01 Transfer of patent application right

Effective date of registration: 20110104

Address after: Rooms 601-603, Tower B, Jinma Building, No. 38 Xueqing Road, Haidian District, Beijing 100083

Applicant after: Beijing ZhichengZhuosheng Technology Development Co.,Ltd.

Address before: Rooms 601-603, Tower B, Jinma Building, No. 38 Xueqing Road, Haidian District, Beijing 100083

Applicant before: Beijing Taili Communications Technology Development Co.,Ltd.

Effective date of registration: 20110104

Address after: Rooms 601-603, Tower B, Jinma Building, No. 38 Xueqing Road, Haidian District, Beijing 100083

Applicant after: Beijing Taili Communications Technology Development Co.,Ltd.

Address before: Room 602, Tower B, Jinma Building, No. 38 Xueqing Road, Haidian District, Beijing 100083

Applicant before: Li Wei

C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: TIANJIN XUNFEI INFORMATION TECHNOLOGY CO., LTD

Free format text: FORMER OWNER: BEIJING ZHICHENG ZHUOSHENG TECHNOLOGY DEVELOPMENT CO., LTD.

Effective date: 20140303

COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100083 HAIDIAN, BEIJING TO: 300308 BINHAI NEW DISTRICT, TIANJIN

TR01 Transfer of patent right

Effective date of registration: 20140303

Address after: Room 701, 7th Floor, Building 3, Crowne Plaza, No. 55 Central Avenue, Tianjin Airport Economic Zone, Tianjin 300308

Patentee after: TIANJIN XUNFEI INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 100083, B building, block 38, Jin Qing Road, 601-603, Beijing, Haidian District

Patentee before: Beijing ZhichengZhuosheng Technology Development Co.,Ltd.

TR01 Transfer of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20110831

Termination date: 20170617

CF01 Termination of patent right due to non-payment of annual fee