CN105070118A - Method of correcting pronunciation aiming at language class learning and device of correcting pronunciation aiming at language class learning - Google Patents
- Publication number
- CN105070118A CN105070118A CN201510466367.7A CN201510466367A CN105070118A CN 105070118 A CN105070118 A CN 105070118A CN 201510466367 A CN201510466367 A CN 201510466367A CN 105070118 A CN105070118 A CN 105070118A
- Authority
- CN
- China
- Prior art keywords
- pronunciation
- user
- mouth shape
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The present invention discloses a method and a device for correcting pronunciation in language learning. The method comprises the steps of: outputting a preset standard pronunciation audio and a standard mouth-shape image; capturing the user's pronunciation and mouth shape while the user imitates the standard pronunciation, and outputting the user's pronunciation and a user mouth-shape image in real time; and comparing the user's pronunciation with the standard pronunciation audio and the user mouth-shape image with the standard mouth-shape image, assessing the accuracy of the user's pronunciation, and outputting an assessment result. With the technical scheme of the present invention, users can visually compare the differences between their own mouth shape and the standard mouth shape, making it easy to adjust their pronunciation mouth shape, and the accuracy of their pronunciation can be assessed, thereby achieving the effect of correcting users' pronunciation errors.
Description
Technical field
The present invention relates to the technical field of electronic products, and in particular to a method and a device for correcting pronunciation in language learning.
Background technology
Existing electronic products on the market, such as learning machines, provide a pronunciation-correction function for language learning to help teach users a language. In existing products, the main mode of pronunciation practice is to play a standard voice that the user then repeats (follow-reading). However, this mode has limited effect in correcting inaccurate user pronunciation. The main causes are: 1. the mouth shape of the user's pronunciation is wrong, leading to mispronunciation; 2. the pitch of the audio is wrong; 3. the user has formed personal pronunciation mouth-shape habits, so the pronunciation remains wrong even during follow-reading; 4. regional accents affect the user; for example, some southern Chinese speakers mispronounce the words for "ten", "yes", and "four", and Hunan speakers mispronounce the word for "lake", and so on.
Therefore, the pronunciation-correction function of existing electronic products needs improvement.
Summary of the invention
The object of the present invention is to provide a method and a device for correcting pronunciation in language learning, which can help users adjust their own pronunciation mouth shape and audio pitch, thereby achieving the effect of correcting users' pronunciation errors.
To achieve this object, the present invention adopts the following technical solutions.
A method for correcting pronunciation in language learning comprises:
outputting a preset standard pronunciation audio and a standard mouth-shape image;
capturing the user's pronunciation and mouth shape during follow-reading, and outputting the user's pronunciation and a user mouth-shape image in real time;
comparing the user's pronunciation with the standard pronunciation audio and the user mouth-shape image with the standard mouth-shape image, assessing the accuracy of the user's pronunciation, and outputting an assessment result.
Wherein, outputting the preset standard pronunciation audio and standard mouth-shape image comprises:
outputting the preset standard pronunciation audio and standard mouth-shape image, deriving a corresponding standard pronunciation waveform from the standard pronunciation audio, and outputting the standard pronunciation waveform synchronously.
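The patent does not specify how the pronunciation waveform is derived from the audio. A minimal sketch in Python, assuming the waveform is a per-frame peak-amplitude envelope of the PCM samples (the frame size and the synthetic tone are illustrative assumptions, not from the patent):

```python
import math

def amplitude_envelope(samples, frame_size=160):
    """Collapse raw PCM samples into one peak-amplitude point per frame."""
    envelope = []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        envelope.append(max(abs(s) for s in frame))
    return envelope

# Synthetic stand-in for decoded standard-pronunciation audio: a 440 Hz tone
# sampled at 8 kHz for 0.1 s (800 samples).
audio = [math.sin(2 * math.pi * 440 * n / 8000) for n in range(800)]
waveform = amplitude_envelope(audio, frame_size=160)
print(len(waveform))  # 800 samples / 160 per frame -> 5 envelope points
```

The same routine would be applied to both the stored standard audio and the captured user audio so that the two waveforms are directly comparable.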
Wherein, capturing the user's pronunciation and mouth shape during follow-reading and outputting the user's pronunciation and user mouth-shape image in real time comprises:
capturing the user's pronunciation and mouth shape during follow-reading, deriving a corresponding user pronunciation waveform from the user's pronunciation, and outputting the user pronunciation waveform synchronously when outputting the user's pronunciation and the user mouth-shape image.
After capturing the user's pronunciation and mouth shape during follow-reading, the method further comprises:
generating two display areas within the display region of the screen, the first display area showing the standard pronunciation waveform and the standard mouth-shape image, and the second display area showing the user pronunciation waveform and the user mouth-shape image.
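The patent leaves the layout of the two display areas open. A minimal sketch, assuming a simple side-by-side split of the screen (the left/right arrangement and pixel dimensions are illustrative assumptions):

```python
def split_display(width, height):
    """Return (x, y, w, h) rectangles for the standard and user display areas,
    splitting the screen's display region into two side-by-side halves."""
    half = width // 2
    standard_area = (0, 0, half, height)            # left: standard waveform + mouth shape
    user_area = (half, 0, width - half, height)     # right: user waveform + mouth shape
    return standard_area, user_area

std_rect, usr_rect = split_display(1280, 720)
print(std_rect, usr_rect)  # (0, 0, 640, 720) (640, 0, 640, 720)
```

A top/bottom split or picture-in-picture arrangement would work equally well; the method only requires that both areas be visible at once so the user can compare them.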
Wherein, comparing the user's pronunciation with the standard pronunciation audio specifically comprises:
comparing the user pronunciation waveform with the standard pronunciation waveform phoneme by phoneme.
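The patent states only that the two waveforms are compared in units of phonemes. A minimal sketch of such a comparison, assuming known phoneme boundaries, a mean-absolute-difference similarity measure, and a flagging threshold (all three are illustrative assumptions):

```python
def phoneme_similarity(std_seg, usr_seg):
    """Similarity of two waveform segments: 1 minus the mean absolute
    difference normalized by the standard segment's peak amplitude."""
    n = min(len(std_seg), len(usr_seg))
    if n == 0:
        return 0.0
    diff = sum(abs(std_seg[i] - usr_seg[i]) for i in range(n)) / n
    peak = max(max(abs(v) for v in std_seg), 1e-9)
    return max(0.0, 1.0 - diff / peak)

def compare_by_phoneme(standard, user, boundaries, threshold=0.8):
    """Return (phoneme, score) pairs for segments below the threshold."""
    inaccurate = []
    for phoneme, (start, end) in boundaries.items():
        score = phoneme_similarity(standard[start:end], user[start:end])
        if score < threshold:
            inaccurate.append((phoneme, round(score, 2)))
    return inaccurate

# Hypothetical waveforms for "ship" (three phonemes, 4 envelope points each).
standard = [0.2, 0.4, 0.4, 0.2, 0.8, 0.9, 0.9, 0.8, 0.3, 0.1, 0.1, 0.0]
user = [0.2, 0.4, 0.4, 0.2, 0.4, 0.5, 0.5, 0.4, 0.3, 0.1, 0.1, 0.0]
bounds = {"sh": (0, 4), "i": (4, 8), "p": (8, 12)}
print(compare_by_phoneme(standard, user, bounds))  # only the vowel deviates
```

In practice a production system would align the two recordings (e.g. with dynamic time warping) before segmenting, since user and standard pronunciations rarely share identical timing.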
Wherein, outputting the assessment result comprises:
outputting a phoneme-by-phoneme comparison chart of the standard pronunciation waveform and the user pronunciation waveform, marking the inaccurate phonemes;
outputting corresponding mouth-shape correction advice according to the assessment result.
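The patent does not give the form of the correction advice. A minimal sketch, assuming a lookup table from flagged phonemes to mouth-shape tips with a generic fallback (the phoneme labels and tip wording are illustrative assumptions):

```python
# Hypothetical advice table keyed by phoneme label.
MOUTH_SHAPE_TIPS = {
    "i": "Spread the lips slightly and keep the tongue high and forward.",
    "th": "Place the tongue tip lightly between the teeth before voicing.",
    "sh": "Round and protrude the lips; keep the tongue near the palate.",
}

def correction_advice(inaccurate_phonemes):
    """Build the advisory text for each flagged phoneme."""
    default = "Compare your mouth shape with the standard image and adjust."
    return {p: MOUTH_SHAPE_TIPS.get(p, default) for p in inaccurate_phonemes}

print(correction_advice(["i", "r"]))  # "r" falls back to the generic tip
```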
Wherein, capturing the user's pronunciation and mouth shape during follow-reading specifically comprises:
capturing the user's pronunciation during follow-reading through a microphone, and capturing the user's mouth-shape image during follow-reading through a camera.
In another aspect, the present invention provides a device for correcting pronunciation in language learning, comprising:
a standard output module, for outputting a preset standard pronunciation audio and a standard mouth-shape image;
a follow-reading control module, for capturing the user's pronunciation and mouth shape during follow-reading, and outputting the user's pronunciation and a user mouth-shape image in real time;
an evaluation module, for comparing the user's pronunciation with the standard pronunciation audio and the user mouth-shape image with the standard mouth-shape image, assessing the accuracy of the user's pronunciation, and outputting an assessment result.
Wherein, the standard output module is also used to derive a corresponding standard pronunciation waveform from the standard pronunciation audio and to output the standard pronunciation waveform synchronously.
Wherein, the follow-reading control module is also used to derive a corresponding user pronunciation waveform from the user's pronunciation and to output it synchronously when outputting the user's pronunciation and the user mouth-shape image.
The follow-reading control module is also used to generate two display areas within the display region of the screen, the first display area showing the standard pronunciation waveform and the standard mouth-shape image, and the second display area showing the user pronunciation waveform and the user mouth-shape image.
Wherein, comparing the user's pronunciation with the standard pronunciation audio specifically comprises:
comparing the user pronunciation waveform with the standard pronunciation waveform phoneme by phoneme.
Wherein, outputting the assessment result comprises:
outputting a phoneme-by-phoneme comparison chart of the standard pronunciation waveform and the user pronunciation waveform, marking the inaccurate phonemes;
outputting corresponding mouth-shape correction advice according to the assessment result.
Wherein, capturing the user's pronunciation and mouth shape during follow-reading specifically comprises:
capturing the user's pronunciation during follow-reading through a microphone, and capturing the user's mouth-shape image during follow-reading through a camera.
Implementing the embodiments of the present invention has the following beneficial effects:
The embodiments output a preset standard pronunciation audio and standard mouth-shape image, capture the user's pronunciation and mouth shape during follow-reading, and output the user's pronunciation and user mouth-shape image in real time, so that the user can directly hear his or her own pronunciation and see his or her own mouth-shape image in real time, making it easy to visually compare the differences between the user's mouth shape and the standard mouth shape. The user's pronunciation is then compared with the standard pronunciation audio, and the user mouth-shape image with the standard mouth-shape image, to assess the accuracy of the user's pronunciation and output an assessment result. With the present scheme, during follow-reading the user not only perceives by ear whether his or her pronunciation is accurate, but also sees in real time whether his or her pronunciation mouth shape is standard and can self-adjust; furthermore, the comparison against the standard pronunciation and mouth-shape image produces an assessment result that helps the user fine-tune pronunciation mouth shape and audio pitch, achieving the effect of correcting the user's pronunciation errors.
Brief description of the drawings
To illustrate more clearly the technical solutions of the embodiments of the present invention or of the prior art, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow chart of the method for correcting pronunciation in language learning according to the first embodiment of the present invention.
Fig. 2 is a schematic diagram of the output of the preset standard pronunciation audio and standard mouth-shape image in the first embodiment.
Fig. 3 is a schematic diagram of the output of the user's pronunciation and user mouth-shape image in the first embodiment.
Fig. 4 is a schematic diagram of the output of the user pronunciation assessment result in the first embodiment.
Fig. 5 is a structural diagram of the device for correcting pronunciation in language learning according to the second embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
The hardware basis for implementing the following embodiments may be an electronic terminal such as a learning machine, smartphone, tablet computer, or PC. Such a terminal provides a pronunciation-practice function for language learning and can play a standard voice for the user to follow-read. In the present embodiments, the terminal also needs to be equipped with hardware such as a loudspeaker, a microphone, and a camera.
First embodiment
The method for correcting pronunciation in language learning according to the first embodiment of the present invention is described below with reference to Fig. 1, and comprises the following steps:
Step S101: output a preset standard pronunciation audio and standard mouth-shape image.
In the first embodiment, taking the pronunciation-practice function for English learning as an example, standard pronunciations of English letters, words, phrases, and so on are preset. Depending on actual needs, these may be American standard pronunciations, British standard pronunciations, etc., which is not limited here. Meanwhile, images of the standard mouth shapes corresponding to the standard pronunciations are also stored in advance. When the user selects this pronunciation-practice function, the standard pronunciation is output together with the corresponding standard mouth-shape image. Preferably, a corresponding standard pronunciation waveform can also be derived from the standard pronunciation audio; in the present embodiment, the standard pronunciation waveform can be output synchronously with the standard pronunciation and the corresponding standard mouth-shape image. The output effect is shown in Fig. 2: two display areas are generated within the display region of the screen, the first display area showing the standard pronunciation waveform and the standard mouth-shape image, and the second display area showing the user pronunciation waveform and the user mouth-shape image.
Step S102: capture the user's pronunciation and mouth shape during follow-reading, and output the user's pronunciation and user mouth-shape image in real time.
In the first embodiment, the user's pronunciation during follow-reading can be captured by the terminal's microphone, and the user's mouth-shape image by the terminal's camera. Preferably, a corresponding user pronunciation waveform can also be derived from the user's pronunciation; therefore, as shown in Fig. 3, in the present embodiment the user pronunciation waveform can be output synchronously with the user's pronunciation and mouth-shape image. Specifically, the terminal's loudspeaker plays the user's pronunciation in real time, and the terminal's display screen shows the user mouth-shape image captured by the camera in real time. The user can visually compare the differences between his or her own mouth shape and the standard mouth shape, which makes self-adjustment easy.
In the present embodiment, capturing and outputting the user's pronunciation, the user mouth-shape image, and the user pronunciation waveform lets the user not only perceive by ear whether the pronunciation is accurate, but also see intuitively whether the pronunciation mouth shape is standard and self-adjust; the subsequent comparison against the standard pronunciation and mouth-shape image then produces an assessment result that helps the user fine-tune pronunciation mouth shape and audio pitch.
Step S103: compare the user's pronunciation with the standard pronunciation audio and the user mouth-shape image with the standard mouth-shape image, assess the accuracy of the user's pronunciation, and output an assessment result.
In the first embodiment, the user pronunciation waveform can be compared with the standard pronunciation waveform and, combined with the comparison of the user mouth-shape image against the standard mouth-shape image, a comprehensive pronunciation-accuracy result is calculated. More importantly, as shown in Fig. 4, the present embodiment outputs not only an assessment score, but also indications of the correctly pronounced parts, prompts for the mispronounced parts, and points to note when pronouncing, helping the user grasp the key points of pronunciation. Preferably, the present embodiment performs the comparison in units of phonemes (the English International Phonetic Alphabet can be used as a reference); that is, the comparison chart intuitively shows the standard pronunciation waveform and the user pronunciation waveform of each phoneme, so that the user can see intuitively which phonemes are mispronounced and by how much. Of course, for a phrase or short sentence, the comparison can also be configured per word, so that the user can see intuitively which words are mispronounced and by how much.
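The patent does not specify how the audio comparison and the mouth-shape comparison are combined into the comprehensive accuracy result. A minimal sketch, assuming each comparison yields a similarity in [0, 1] and the two are combined by a weighted average (the 0.6/0.4 weighting is an illustrative assumption):

```python
def overall_accuracy(audio_similarity, mouth_shape_similarity,
                     audio_weight=0.6, shape_weight=0.4):
    """Weighted combination of waveform and mouth-shape similarity,
    scaled to a 0-100 assessment score."""
    score = audio_weight * audio_similarity + shape_weight * mouth_shape_similarity
    return round(100 * score, 1)

print(overall_accuracy(0.85, 0.70))  # 0.6*0.85 + 0.4*0.70 = 0.79 -> 79.0
```

The weights could equally be tuned per language or per phoneme class; for visually salient sounds (e.g. rounded vowels), a higher mouth-shape weight would be reasonable.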
Further, in the first embodiment, corresponding mouth-shape correction advice can also be output according to the assessment result. The specific advice is determined by analyzing which of the user's sounds are inaccurate and to what degree, so as to provide suitable mouth-shape correction suggestions.
By the method for first embodiment of the invention, after outputting standard pronunciation, user is with when reading, not only can the pronunciation of synchronism output user make user whether accurate from the pronunciation of acoustically perception oneself, also can synchronism output user shape of the mouth as one speaks image and pronunciation oscillogram, user is made to recognize oneself pronunciation mouth shape and audio tone whether standard intuitively, subtle helped user adjusts pronunciation mouth shape and audio tone, pronunciation mouth shape and the audio tone that user adjusts oneself is convenient to finally by the detailed appreciation information of output, reach the effect of correcting user error pronunciation.
The following is an embodiment of the device for correcting pronunciation in language learning provided by the embodiments of the present invention. The device embodiment and the method embodiment above belong to the same concept; for details not fully described in the device embodiment, refer to the method embodiment above.
Second embodiment
Fig. 5 shows the structure of the device for correcting pronunciation in language learning according to the second embodiment of the present invention. The device comprises a standard output module 310, a follow-reading control module 320, and an evaluation module 330, each described below.
The standard output module 310 is used to output a preset standard pronunciation audio and standard mouth-shape image.
In the present embodiment, the standard output module 310 can also be used to derive a corresponding standard pronunciation waveform from the standard pronunciation audio and to output the standard pronunciation waveform synchronously when outputting the preset standard pronunciation audio and standard mouth-shape image.
The follow-reading control module 320 is used to capture the user's pronunciation and mouth shape during follow-reading, and to output the user's pronunciation and a user mouth-shape image in real time.
In the present embodiment, the follow-reading control module 320 can also be used to derive a corresponding user pronunciation waveform from the user's pronunciation and to output it synchronously when outputting the user's pronunciation and the user mouth-shape image. In this way the user not only perceives by ear whether the pronunciation is accurate, but also sees intuitively whether the pronunciation mouth shape and audio pitch are standard, which helps the user fine-tune both.
The evaluation module 330 is used to compare the user's pronunciation with the standard pronunciation audio and the user mouth-shape image with the standard mouth-shape image, assess the accuracy of the user's pronunciation, and output an assessment result.
In the present embodiment, the user pronunciation waveform is compared with the standard pronunciation waveform and, combined with the comparison of the user mouth-shape image against the standard mouth-shape image, a comprehensive pronunciation-accuracy result is calculated. More importantly, as shown in Fig. 4, the present embodiment outputs not only an assessment score but also a comparison chart of the standard pronunciation waveform and the user pronunciation waveform, on which the inaccurate parts of the pronunciation are marked. Preferably, the comparison is performed in units of phonemes (the English International Phonetic Alphabet can be used as a reference); that is, the comparison chart intuitively shows the standard pronunciation waveform and the user pronunciation waveform of each phoneme, so that the user can see intuitively which phonemes are mispronounced and by how much. Of course, for a phrase or short sentence, the comparison can also be configured per word, so that the user can see intuitively which words are mispronounced and by how much.
Further, the evaluation module 330 can also output corresponding mouth-shape correction advice according to the assessment result. The specific advice is determined by analyzing which of the user's sounds are inaccurate and to what degree, so as to provide suitable mouth-shape correction suggestions.
With the device of the second embodiment, after the standard pronunciation is output and while the user follow-reads, the user's pronunciation is output synchronously so that the user can perceive by ear whether it is accurate, and the user's mouth-shape image and pronunciation waveform are also output synchronously so that the user can see intuitively whether his or her pronunciation mouth shape and audio pitch are standard; this helps the user fine-tune both. Finally, the detailed assessment information that is output makes it easy for the user to adjust his or her own pronunciation mouth shape and audio pitch, achieving the effect of correcting the user's pronunciation errors.
The above discloses only preferred embodiments of the present invention, which of course cannot limit the scope of the rights of the present invention. Therefore, any modifications, equivalent replacements, improvements, and the like made within the spirit and principles of the present invention still fall within the scope covered by the present invention.
Claims (14)
1. A method for correcting pronunciation in language learning, characterized by comprising:
outputting a preset standard pronunciation audio and a standard mouth-shape image;
capturing the user's pronunciation and mouth shape during follow-reading, and outputting the user's pronunciation and a user mouth-shape image in real time;
comparing the user's pronunciation with the standard pronunciation audio and the user mouth-shape image with the standard mouth-shape image, assessing the accuracy of the user's pronunciation, and outputting an assessment result.
2. The method for correcting pronunciation in language learning according to claim 1, characterized in that outputting the preset standard pronunciation audio and standard mouth-shape image comprises:
outputting the preset standard pronunciation audio and standard mouth-shape image, deriving a corresponding standard pronunciation waveform from the standard pronunciation audio, and outputting the standard pronunciation waveform synchronously.
3. The method for correcting pronunciation in language learning according to claim 2, characterized in that capturing the user's pronunciation and mouth shape during follow-reading and outputting the user's pronunciation and user mouth-shape image in real time comprises:
capturing the user's pronunciation and mouth shape during follow-reading, deriving a corresponding user pronunciation waveform from the user's pronunciation, and outputting the user pronunciation waveform synchronously when outputting the user's pronunciation and the user mouth-shape image.
4. The method for correcting pronunciation in language learning according to claim 3, characterized in that, after capturing the user's pronunciation and mouth shape during follow-reading, the method further comprises:
generating two display areas within the display region of the screen, the first display area showing the standard pronunciation waveform and the standard mouth-shape image, and the second display area showing the user pronunciation waveform and the user mouth-shape image.
5. The method for correcting pronunciation in language learning according to claim 3, characterized in that comparing the user's pronunciation with the standard pronunciation audio specifically comprises:
comparing the user pronunciation waveform with the standard pronunciation waveform phoneme by phoneme.
6. The method for correcting pronunciation in language learning according to claim 5, characterized in that outputting the assessment result comprises:
outputting a phoneme-by-phoneme comparison chart of the standard pronunciation waveform and the user pronunciation waveform, marking the inaccurate phonemes;
outputting corresponding mouth-shape correction advice according to the assessment result.
7. The method for correcting pronunciation in language learning according to claim 1, characterized in that capturing the user's pronunciation and mouth shape during follow-reading specifically comprises:
capturing the user's pronunciation during follow-reading through a microphone, and capturing the user's mouth-shape image during follow-reading through a camera.
8. A device for correcting pronunciation in language learning, characterized by comprising:
a standard output module, for outputting a preset standard pronunciation audio and a standard mouth-shape image;
a follow-reading control module, for capturing the user's pronunciation and mouth shape during follow-reading, and outputting the user's pronunciation and a user mouth-shape image in real time;
an evaluation module, for comparing the user's pronunciation with the standard pronunciation audio and the user mouth-shape image with the standard mouth-shape image, assessing the accuracy of the user's pronunciation, and outputting an assessment result.
9. The device for correcting pronunciation in language learning according to claim 8, characterized in that the standard output module is also used to derive a corresponding standard pronunciation waveform from the standard pronunciation audio and to output the standard pronunciation waveform synchronously.
10. The device for correcting pronunciation in language learning according to claim 9, characterized in that the follow-reading control module is also used to derive a corresponding user pronunciation waveform from the user's pronunciation and to output it synchronously when outputting the user's pronunciation and the user mouth-shape image.
11. The device for correcting pronunciation in language learning according to claim 10, characterized in that the follow-reading control module is also used to generate two display areas within the display region of the screen, the first display area showing the standard pronunciation waveform and the standard mouth-shape image, and the second display area showing the user pronunciation waveform and the user mouth-shape image.
12. The device for correcting pronunciation in language learning according to claim 10, characterized in that comparing the user's pronunciation with the standard pronunciation audio specifically comprises:
comparing the user pronunciation waveform with the standard pronunciation waveform phoneme by phoneme.
13. The device for correcting pronunciation in language learning according to claim 12, characterized in that outputting the assessment result comprises:
outputting a phoneme-by-phoneme comparison chart of the standard pronunciation waveform and the user pronunciation waveform, marking the inaccurate phonemes;
outputting corresponding mouth-shape correction advice according to the assessment result.
14. The device for correcting pronunciation in language learning according to claim 8, characterized in that capturing the user's pronunciation and mouth shape during follow-reading specifically comprises:
capturing the user's pronunciation during follow-reading through a microphone, and capturing the user's mouth-shape image during follow-reading through a camera.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510466367.7A (granted as CN105070118B) | 2015-07-30 | 2015-07-30 | Method and device for correcting pronunciation in language learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105070118A true CN105070118A (en) | 2015-11-18 |
CN105070118B CN105070118B (en) | 2019-01-11 |
Family
ID=54499472
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510466367.7A Active CN105070118B (en) | Method and device for correcting pronunciation for language learning | 2015-07-30 | 2015-07-30 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105070118B (en) |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106228996A (en) * | 2016-07-15 | 2016-12-14 | 黄河科技学院 | Vocality study electron assistant articulatory system |
CN106251717A (en) * | 2016-09-21 | 2016-12-21 | 北京光年无限科技有限公司 | Intelligent robot speech follow read learning method and device |
CN106940939A (en) * | 2017-03-16 | 2017-07-11 | 牡丹江师范学院 | Oral English Teaching servicing unit and its method |
CN107274736A (en) * | 2017-08-14 | 2017-10-20 | 牡丹江师范学院 | A kind of interactive Oral English Practice speech sound teaching apparatus in campus |
CN107424450A (en) * | 2017-08-07 | 2017-12-01 | 英华达(南京)科技有限公司 | Pronunciation correction system and method |
CN108257615A (en) * | 2018-01-15 | 2018-07-06 | 北京物灵智能科技有限公司 | A kind of user language appraisal procedure and system |
CN108428458A (en) * | 2018-03-15 | 2018-08-21 | 河南科技学院 | A kind of vocality study electron assistant articulatory system |
CN109147419A (en) * | 2018-07-11 | 2019-01-04 | 北京美高森教育科技有限公司 | Language learner system based on incorrect pronunciations detection |
CN109147404A (en) * | 2018-07-11 | 2019-01-04 | 北京美高森教育科技有限公司 | A kind of detection method and device of the phonetic symbol by incorrect pronunciations |
CN109255988A (en) * | 2018-07-11 | 2019-01-22 | 北京美高森教育科技有限公司 | Interactive learning methods based on incorrect pronunciations detection |
CN109327379A (en) * | 2018-09-19 | 2019-02-12 | 淄博职业学院 | A kind of Korean pronunciation correction device and method |
CN109410664A (en) * | 2018-12-12 | 2019-03-01 | 广东小天才科技有限公司 | A kind of pronunciation correction method and electronic equipment |
CN109671308A (en) * | 2018-09-18 | 2019-04-23 | 张滕滕 | A kind of generation method of pronunciation mouth shape correction system |
CN109903606A (en) * | 2019-04-23 | 2019-06-18 | 江苏海事职业技术学院 | A kind of interactive Oral English Practice speech sound teaching apparatus in campus |
CN110491241A (en) * | 2019-09-05 | 2019-11-22 | 河南理工大学 | A kind of vocal music pronounciation training devices and methods therefor |
CN111292769A (en) * | 2020-03-04 | 2020-06-16 | 苏州驰声信息科技有限公司 | Method, system, device and storage medium for correcting pronunciation of spoken language |
CN111638781A (en) * | 2020-05-15 | 2020-09-08 | 广东小天才科技有限公司 | AR-based pronunciation guide method and device, electronic equipment and storage medium |
CN112614489A (en) * | 2020-12-22 | 2021-04-06 | 作业帮教育科技(北京)有限公司 | User pronunciation accuracy evaluation method and device and electronic equipment |
CN113053186A (en) * | 2019-12-26 | 2021-06-29 | 京东数字科技控股有限公司 | Interaction method, interaction device and storage medium |
CN113782055A (en) * | 2021-07-15 | 2021-12-10 | 北京墨闻教育科技有限公司 | Student characteristic-based voice evaluation method and system |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20010088350A (en) * | 2000-03-10 | 2001-09-26 | 이동익 | Apparatus for training language and Method for analyzing language thereof |
JP2003208084A (en) * | 2002-01-15 | 2003-07-25 | Keihin Tokushu Insatsu:Kk | Device and method for learning foreign language |
CN1512300A (en) * | 2002-12-30 | 2004-07-14 | 艾尔科技股份有限公司 | User's interface, system and method for automatically marking phonetic symbol to correct pronunciation |
CN1804934A (en) * | 2006-01-13 | 2006-07-19 | 黄中伟 | Computer-aided Chinese language phonation learning method |
CN101241656A (en) * | 2008-03-11 | 2008-08-13 | 黄中伟 | Computer assisted training method for mouth shape recognition capability |
CN101393694A (en) * | 2008-10-21 | 2009-03-25 | 无敌科技(西安)有限公司 | Chinese character pronunciation studying device with pronunciation correcting function of Chinese characters, and method therefor |
CN101409022A (en) * | 2007-10-11 | 2009-04-15 | 英业达股份有限公司 | Language learning system with mouth shape comparison and method thereof |
CN101727765A (en) * | 2009-11-03 | 2010-06-09 | 无敌科技(西安)有限公司 | Face simulation pronunciation system and method thereof |
CN101958060A (en) * | 2009-08-28 | 2011-01-26 | 陈美含 | English spelling instant technical tool |
CN103218924A (en) * | 2013-03-29 | 2013-07-24 | 上海众实科技发展有限公司 | Audio and video dual mode-based spoken language learning monitoring method |
CN103413469A (en) * | 2013-08-26 | 2013-11-27 | 苏州跨界软件科技有限公司 | Social type language learning system |
CN103745423A (en) * | 2013-12-27 | 2014-04-23 | 浙江大学 | Mouth-shape teaching system and mouth-shape teaching method |
KR20140087950A (en) * | 2013-01-01 | 2014-07-09 | 주홍찬 | Apparatus and method for learning rhythm pattern by using native speaker's pronunciation data and language data. |
CN204010373U (en) * | 2014-09-02 | 2014-12-10 | 李莹 | A kind of Oral English Practice correcting device |
CN104537901A (en) * | 2014-12-02 | 2015-04-22 | 渤海大学 | Spoken English learning machine based on audios and videos |
Also Published As
Publication number | Publication date |
---|---|
CN105070118B (en) | 2019-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105070118A (en) | Method and device for correcting pronunciation for language learning | |
CN106575500B (en) | Method and apparatus for synthesizing speech based on facial structure | |
CN108447474A (en) | A kind of modeling and the control method of virtual portrait voice and Hp-synchronization | |
CN107274736B (en) | A kind of interactive Oral English Practice speech sound teaching apparatus in campus | |
EP3503074A1 (en) | Language learning system and language learning program | |
CN104778865A (en) | Method for conducting spoken language correction through speech recognition technology and language learning machine | |
CN108806719A (en) | Interacting language learning system and its method | |
KR20190061191A (en) | Speech recognition based training system and method for child language learning | |
CN111312255A (en) | Pronunciation self-correcting device for word and pinyin tones based on voice recognition | |
CN107578653A (en) | A kind of pen for correcting irregular Chinese speech pronunciation | |
US9087512B2 (en) | Speech synthesis method and apparatus for electronic system | |
US20070061139A1 (en) | Interactive speech correcting method | |
KR20140087956A (en) | Apparatus and method for learning phonics by using native speaker's pronunciation data and word and sentence and image data | |
KR20140079677A (en) | Apparatus and method for learning sound connection by using native speaker's pronunciation data and language data. | |
KR20140107067A (en) | Apparatus and method for learning word by using native speakerpronunciation data and image data | |
KR20170056253A (en) | Method of and system for scoring pronunciation of learner | |
RU153322U1 (en) | DEVICE FOR TEACHING SPEAK (ORAL) SPEECH WITH VISUAL FEEDBACK | |
JP6155102B2 (en) | Learning support device | |
KR20140082127A (en) | Apparatus and method for learning word by using native speaker's pronunciation data and origin of a word | |
KR20140079245A (en) | Apparatus and method for learning rhythm pattern by using native speaker's pronunciation data and language data. | |
TW202109474A (en) | Language pronunciation learning system and method | |
CN110858457A (en) | Interactive education method and teaching electronic device | |
KR20140087950A (en) | Apparatus and method for learning rhythm pattern by using native speaker's pronunciation data and language data. | |
KR102207812B1 (en) | Speech improvement method of universal communication of disability and foreigner | |
US20240013668A1 (en) | Information Processing Method, Program, And Information Processing Apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |