KR20160122542A - Method and apparatus for measuring pronounciation similarity - Google Patents
Method and apparatus for measuring pronounciation similarity
- Publication number
- KR20160122542A (application KR1020150052579A)
- Authority
- KR
- South Korea
- Prior art keywords
- data
- speech
- similarity
- user
- voice
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/28—Constructional details of speech recognition systems
Abstract
The similarity measuring method according to the present invention includes: receiving user speech data corresponding to reference speech data; processing the reference speech data with a speech recognition algorithm to generate first speech processing data; processing the user speech data with a speech recognition algorithm to generate second speech processing data; comparing the first speech processing data and the second speech processing data with learning target word data to calculate a first similarity and a second similarity in which at least one of pronunciation, intonation, stress, and speed is evaluated; and comparing the first similarity and the second similarity to measure a final similarity between the reference speech data and the user speech data. This has the advantageous effect of providing a pronunciation similarity measuring method and apparatus with which pronunciation can be learned and corrected efficiently in listening to and speaking English.
Description
The present invention relates to a pronunciation similarity measuring method and apparatus, and more particularly, to a pronunciation similarity measuring method and apparatus capable of comparing the pronunciation of a user's voice with the pronunciation of a preceding reference voice to evaluate their similarity.
The importance of foreign language learning is emphasized by the trend of globalization, and interest in English education is increasing in particular. Also, in the modern society, interest in English ability centering on communication in real life is increasing, and researches on more effective and accurate English learning methods and language programs are continuously being carried out.
Meanwhile, as speech recognition technology has become widespread, English learning methods using speech recognition have become popular. In the English learning methods using currently available speech recognition technology, however, a recognition target word is determined in advance; when the user utters the determined recognition target word, the method finds which of the previously registered words the user's input is closest to, outputs the result, displays the progress in the form of a correct/incorrect indication or a score, and judges whether or not the pronunciation of the utterance is accurate.
This type of English learning method provides a pronunciation score only when the user watches and repeats text provided by a language program. It has the limitation that no pronunciation score is evaluated for a user who first listens with his or her own ears to a native speaker's speech, or to content such as a movie, video, or song, and then repeats it.
In addition, in English learning methods using speech recognition technology, a person must directly register recognition target words to create a candidate group, and there is no standard for judging which pronunciation is more appropriate. Moreover, when a sentence or word includes rhyme, intonation, stress, or rhythm, the user's pronunciation may not be recognized, or a wrong evaluation result may occur.
Accordingly, there is a growing need for a pronunciation similarity measuring method for learning in which the user listens to and repeats a preceding reference voice: one that measures similarity by comparing the user's voice directly with the preceding reference voice rather than with a predetermined candidate group, and that can also compare the rhyme, intonation, stress, and rhythm the voices contain.
A problem to be solved by the present invention is to provide a pronunciation similarity measuring method and apparatus capable of measuring the pronunciation similarity between a preceding reference voice and a user voice by using speech recognition technology, for English learning based on listening and speaking.
Another problem to be solved by the present invention is to provide a pronunciation similarity measuring method and apparatus capable of measuring pronunciation similarity by evaluating the pronunciation, intonation, stress, and speed included in a preceding reference voice and a user voice.
Another object of the present invention is to provide a pronunciation similarity measuring method and apparatus capable of efficient pronunciation learning and correction.
The problems of the present invention are not limited to the above-mentioned problems, and other problems not mentioned can be clearly understood by those skilled in the art from the following description.
According to an aspect of the present invention, there is provided a pronunciation similarity measuring method comprising: receiving user speech data corresponding to reference speech data; processing the reference speech data with a speech recognition algorithm to generate first speech processing data, and processing the user speech data with a speech recognition algorithm to generate second speech processing data; comparing the first speech processing data and the second speech processing data with learning target word data to calculate a first similarity and a second similarity in which at least one of pronunciation, intonation, stress, and speed is evaluated; and comparing the first similarity and the second similarity to measure a final similarity between the reference speech data and the user speech data.
According to another aspect of the present invention, the method further includes receiving learning target word data and reference speech data corresponding to the learning target word data.
According to still another aspect of the present invention, the step of calculating the first similarity and the second similarity includes dividing the first speech processing data and the second speech processing data into phoneme units and comparing them with the learning target word data.
According to another aspect of the present invention, the method further includes providing the final similarity to the user as a score.
According to an aspect of the present invention, there is provided a pronunciation similarity measuring apparatus comprising: a receiving unit for receiving user voice data corresponding to reference voice data; a speech recognition unit for processing the reference voice data with a speech recognition algorithm to generate first speech processing data, processing the user voice data with a speech recognition algorithm to generate second speech processing data, and comparing the first speech processing data and the second speech processing data with learning target word data to calculate a first similarity and a second similarity in which at least one of pronunciation, intonation, stress, and speed is evaluated; and a processing unit for comparing the first similarity and the second similarity to measure a final similarity between the reference voice data and the user voice data.
According to another aspect of the present invention, the receiving unit receives the learning target word data and the reference speech data corresponding to the learning target word data.
According to still another aspect of the present invention, the speech recognition unit divides the first speech processing data and the second speech processing data into phoneme units and compares them with the learning target word data.
According to another aspect of the present invention, the apparatus further includes an output unit that provides the final similarity to the user as a score.
According to still another aspect of the present invention, there is provided a computer-readable recording medium storing instructions for performing a pronunciation similarity measuring method, the instructions comprising: receiving user voice data corresponding to reference voice data; processing the reference voice data with a speech recognition algorithm to generate first speech processing data, and processing the user voice data with a speech recognition algorithm to generate second speech processing data; comparing the first speech processing data and the second speech processing data with learning target word data to calculate a first similarity and a second similarity in which at least one of pronunciation, intonation, stress, and speed is evaluated; and comparing the first similarity and the second similarity to measure a final similarity between the reference voice data and the user voice data.
The details of other embodiments are included in the detailed description and drawings.
The present invention has the effect of measuring the pronunciation similarity between a preceding reference voice and a user voice by using speech recognition technology, for English learning based on listening and speaking.
The present invention has the effect of measuring pronunciation similarity by evaluating the pronunciation, intonation, stress, and speed included in the preceding reference voice and the user voice.
The present invention has the effect of providing a pronunciation similarity measuring method and apparatus that enable efficient pronunciation learning and correction in English learning based on listening and speaking.
The effects according to the present invention are not limited by the contents exemplified above, and more various effects are included in the specification.
FIG. 1 is a schematic configuration diagram of a pronunciation similarity measuring apparatus according to an embodiment of the present invention.
FIG. 2 is a flowchart of a pronunciation similarity measurement method according to an embodiment of the present invention.
FIG. 3 exemplarily shows a method of measuring the first similarity degree, the second similarity degree, and the final similarity degree in the pronunciation similarity measurement method according to an embodiment of the present invention.
FIG. 4 illustrates an exemplary screen implemented by the pronunciation similarity measurement method according to an embodiment of the present invention.
The advantages and features of the present invention, and the manner of achieving them, will become apparent with reference to the embodiments described in detail below together with the accompanying drawings. The present invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the scope of the invention to those skilled in the art; the invention is defined only by the scope of the claims.
Like reference numerals refer to like elements throughout the specification.
It is to be understood that the features of the various embodiments of the present invention may be partially or entirely combined with one another, and, as those skilled in the art will appreciate, various technical interlocking and operating arrangements are possible; the embodiments may be practiced independently of one another or in association with one another.
In the present specification, the reference voice data means data including a preceding reference voice, that is, the learning target that the user wants to listen to and repeat. The reference voice data can be input in various ways corresponding to the format in which the preceding reference voice is provided. For example, the reference voice data can be classified into direct reference voice data, provided by a speaker's voice such as that of a native speaker (a standard pronunciation provider), and indirect reference voice data, provided through content such as a movie or a song. The direct reference voice data is recognized from a microphone or a voice recorder, and the indirect reference voice data can be input through a video application or a voice playback application.
In the present specification, the user voice data is data input by the user in response to the reference voice data; the user learns the pronunciation and intonation of the preceding reference voice by listening to it and repeating it. The user voice data can be input to various applications through voice recognition and can be converted into text-type data through speech recognition.
In the present specification, the learning target word data is a word or sentence that the user wants to listen to and repeat; it is text-format data corresponding to the reference speech data and is used as the reference in calculating the similarity. The learning target word data may be provided directly before the reference speech data or the user speech data is received; for example, it may be displayed through the display unit of the terminal used by the user. Alternatively, when the user listens to and repeats the preceding reference voice, it may not be directly displayed or transmitted to the user.
Various embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
FIG. 1 is a schematic configuration diagram of a pronunciation similarity measuring apparatus according to an embodiment of the present invention.
Referring to FIG. 1, the pronunciation
The pronunciation
The pronunciation
The
On the other hand, the
The
The speech recognition algorithm basically refers to the process by which an electronic device interprets the reference voice and the voice uttered by the user and recognizes their contents as text. Although not limited thereto, when the waveforms of the reference voice and the user voice are input to the electronic device, voice pattern information can be obtained by analyzing the voice waveforms with reference to an acoustic model or the like. The obtained voice pattern information is then compared with identification information, so that the text with the highest probability of matching within the identification information can be recognized.
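The matching step described above can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the `recognize` function, the pattern vectors, and the candidate words are all hypothetical, and a cosine-similarity match stands in for whatever probability model a real recognizer would use.

```python
import math

def recognize(pattern, identification_info):
    """Return the candidate text whose stored pattern vector best
    matches the extracted voice pattern (highest cosine similarity)."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
    return max(identification_info,
               key=lambda text: cosine(pattern, identification_info[text]))

# Hypothetical voice-pattern vectors for two candidate words.
identification_info = {"back": [0.9, 0.1, 0.4], "bag": [0.2, 0.8, 0.5]}
print(recognize([0.85, 0.15, 0.4], identification_info))  # back
```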
Further, the
The
The
1, the receiving
FIG. 2 is a flowchart illustrating a pronunciation similarity measurement method according to an embodiment of the present invention. For convenience of explanation, the description refers to the configuration of FIG. 1.
The pronunciation similarity measuring method according to the present invention is started by the receiving
The receiving
On the other hand, the receiving
For example, when the receiving
The
The
The speech recognition algorithm basically refers to a task in which the pronunciation
The first speech processing data and the second speech processing data may be composed of sentences corresponding to the received reference speech data and user speech data by combining the matched words by the speech recognition algorithm. The first speech processing data and the second speech processing data may be temporarily stored in the
The
The first similarity and the second similarity respectively indicate how similar the first speech processing data and the second speech processing data are to the learning target word data. The
The first speech processing data, the second speech processing data, and the learning target word data are all data in text format, so the comparison between the speech processing data and the learning target word data can be performed directly at the text level. In calculating the similarity, one syllable can basically be treated as the minimum unit of measurement, or the data may be divided by a predetermined time interval according to the type of reference voice, or measured word by word or phoneme by phoneme.
In a conventional speech recognition method, when the reference voice or the user voice includes prosody such as stress, rhythm, or intonation, there is no matching acoustic model or identification information, so the voice itself may not be recognized or an incorrect evaluation result may be produced. However, when the speech processing data and the learning target word data are divided and compared phoneme by phoneme, speech is recognized per phoneme unit regardless of whether a matching acoustic model or identification information exists and regardless of whether the speech contains prosody. Therefore, by comparing the texts of the recognized phonemes, the reference speech and the user speech can be compared. In addition, the errors a user makes in pronouncing an English word may follow certain rules at the phoneme level; it is therefore preferable to construct a database of words that are similar in phoneme units and to compare phonemes against that database.
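A phoneme-by-phoneme text comparison of this kind can be sketched as below. The phoneme lists and the use of `difflib.SequenceMatcher` are illustrative assumptions; the patent does not prescribe a particular alignment algorithm or phoneme notation.

```python
from difflib import SequenceMatcher

def phoneme_similarity(recognized, target):
    """Score how closely a recognized phoneme sequence matches the
    learning-target phoneme sequence (1.0 = identical)."""
    return SequenceMatcher(None, recognized, target).ratio()

target = ["b", "ae", "k"]   # hypothetical phonemes for the target word 'back'
first  = ["b", "ae", "k"]   # phonemes recognized from the reference voice
second = ["b", "eh", "k"]   # phonemes recognized from the user voice

print(phoneme_similarity(first, target))   # 1.0
print(phoneme_similarity(second, target))  # ~0.67 (2 of 3 phonemes match)
```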
The first similarity and the second similarity are calculated by evaluating at least one of pronunciation, intonation, stress, and speed between the speech processing data and the learning target word data. At this time, pronunciation, intonation, stress, and speed can be compared by dividing the speech processing data into phoneme units; for each phoneme unit, the first similarity and the second similarity can be calculated by analyzing and comparing features such as the place of articulation, the manner of articulation, vocal fold vibration, the vowel triangle, and pitch.
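One way to picture combining such per-feature comparisons into a single similarity value is a weighted average of feature agreements. This is only a sketch under stated assumptions: the feature names, the normalized values, the equal default weights, and the 0-to-10 scale are all hypothetical choices for illustration, not the patent's specified computation.

```python
def feature_similarity(a, b, weights=None):
    """Combine per-feature agreement (pronunciation, intonation,
    stress, speed) into one similarity score on a 0-10 scale.
    Feature values are assumed to be normalized to [0, 1]."""
    weights = weights or {k: 1.0 for k in a}      # equal weights by default
    total = sum(weights.values())
    agreement = sum(weights[k] * (1.0 - abs(a[k] - b[k])) for k in a)
    return 10.0 * agreement / total

# Hypothetical normalized measurements for one phoneme unit.
reference = {"pronunciation": 0.95, "intonation": 0.80, "stress": 0.90, "speed": 0.70}
user      = {"pronunciation": 0.85, "intonation": 0.60, "stress": 0.90, "speed": 0.75}
score = feature_similarity(reference, user)
print(round(score, 3))  # 9.125
```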
The calculated first similarity degree and the second similarity degree may be numerically expressed and optionally stored in the
The
The final similarity indicates the degree of similarity between the reference voice data and the user voice data, that is, how similar the pronunciation of the reference voice is to the pronunciation of the user voice. The final similarity is measured by comparing the first similarity and the second similarity calculated by the
The measured final similarity can be represented by numbers, letters, symbols, and the like. For example, the final similarity may be expressed as a number between '0' and '10', where a larger number means that the reference voice and the user voice are more similar. The final similarity can be obtained by subtracting the difference between the first similarity and the second similarity from 10 points, as shown in Equation 1 below.
[Equation 1]
Final similarity = 10 - |second similarity - first similarity|
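Equation 1 is straightforward to restate in code. In the sketch below, the clamp to zero is an added assumption (the patent does not say what happens if the difference were somehow to exceed 10 points); with similarities in the 0-10 range it never triggers.

```python
def final_similarity(first, second, max_score=10.0):
    """Equation 1: final similarity = 10 - |second - first|.
    The clamp to zero is an added assumption, not from the patent."""
    return max(0.0, max_score - abs(second - first))

print(final_similarity(10.0, 7.0))  # 7.0  (reference scored 10, user scored 7)
print(final_similarity(9.0, 9.0))   # 10.0 (identical similarities)
```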
The method of measuring the final similarity and the final similarity expressed in numbers will be further described with reference to FIG.
The pronunciation similarity measuring method of the present invention may further include the step of the
FIG. 3 exemplarily shows a method of measuring the first similarity degree, the second similarity degree, and the final similarity degree in the pronunciation similarity measurement method according to an embodiment of the present invention.
Referring to FIG. 3, the learning
Referring to FIG. 3, in the case of 'back' of the learning
FIG. 4 illustrates an exemplary screen implemented by the pronunciation similarity measurement method according to an embodiment of the present invention.
Referring to FIG. 4, the pronunciation
The learning
The reference
The final similarity
The learning
In this specification, each block or each step may represent a part of a module, segment or code that includes one or more executable instructions for executing the specified logical function (s). It should also be noted that in some alternative embodiments, the functions mentioned in the blocks or steps may occur out of order. For example, two blocks or steps shown in succession may in fact be performed substantially concurrently, or the blocks or steps may sometimes be performed in reverse order according to the corresponding function.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in a RAM memory, a flash memory, a ROM memory, an EPROM memory, an EEPROM memory, a register, a hard disk, a removable disk, a CD-ROM or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor, which is capable of reading information from, and writing information to, the storage medium. Alternatively, the storage medium may be integral with the processor. The processor and the storage medium may reside within an application specific integrated circuit (ASIC). The ASIC may reside within the user terminal. Alternatively, the processor and the storage medium may reside as discrete components in a user terminal.
Although the embodiments of the present invention have been described in detail with reference to the accompanying drawings, the present invention is not limited to those embodiments, and various changes and modifications may be made without departing from the scope of the present invention. Therefore, the embodiments disclosed herein are intended to illustrate rather than limit the technical idea of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments. The above-described embodiments should be understood as illustrative in all respects and not restrictive. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of their equivalents should be construed as falling within the scope of the present invention.
100 pronunciation similarity measuring device
110 receiver
120 speech recognition unit
130 processor
140 display unit
210 learning target word data
220 first speech processing data
230 first similarity
240 second speech processing data
250 second similarity
260 final similarity
300 similarity measurement window
310 learning target display unit
320 reference voice display unit
330 user voice display unit
340 final similarity display unit
350 learning evaluation display unit
Claims (9)
Receiving user speech data corresponding to reference speech data;
Processing the reference speech data with a speech recognition algorithm to generate first speech processing data, and processing the user speech data with a speech recognition algorithm to generate second speech processing data;
Comparing the first speech processing data and the second speech processing data with learning target word data to calculate a first similarity and a second similarity in which at least one of pronunciation, intonation, stress, and speed is evaluated; and
Comparing the first similarity and the second similarity to measure a final similarity between the reference speech data and the user speech data.
Further comprising the step of: receiving learning target word data and reference speech data corresponding to the learning target word data.
Wherein the step of calculating the first similarity and the second similarity comprises dividing the first speech processing data and the second speech processing data on a phoneme basis and comparing them with the learning target word data.
Further comprising the step of providing the final similarity to the user as a score.
A speech recognition unit for processing the reference speech data with a speech recognition algorithm to generate first speech processing data, processing the user speech data with a speech recognition algorithm to generate second speech processing data, and comparing the first speech processing data and the second speech processing data with learning target word data to calculate a first similarity and a second similarity in which at least one of pronunciation, intonation, stress, and speed is evaluated; and
And a processor for comparing the first similarity with the second similarity to measure final similarity between the reference speech data and the user speech data.
Wherein the receiving unit receives the learning target speech data and the reference speech data corresponding to the learning target speech data.
Wherein the speech recognition unit divides the first speech processing data and the second speech processing data on a phoneme basis and compares them with the learning target word data.
And an output unit for providing the final similarity to the user as a score.
Processing the reference speech data with a speech recognition algorithm to generate first speech processing data, and processing the user speech data with a speech recognition algorithm to generate second speech processing data;
Comparing the first speech processing data and the second speech processing data with learning target word data to calculate a first similarity and a second similarity in which at least one of pronunciation, intonation, stress, and speed is evaluated; and
Comparing the first similarity and the second similarity to measure a final similarity between the reference voice data and the user voice data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150052579A KR20160122542A (en) | 2015-04-14 | 2015-04-14 | Method and apparatus for measuring pronounciation similarity |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150052579A KR20160122542A (en) | 2015-04-14 | 2015-04-14 | Method and apparatus for measuring pronounciation similarity |
Publications (1)
Publication Number | Publication Date |
---|---|
KR20160122542A true KR20160122542A (en) | 2016-10-24 |
Family
ID=57256740
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150052579A KR20160122542A (en) | 2015-04-14 | 2015-04-14 | Method and apparatus for measuring pronounciation similarity |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR20160122542A (en) |
-
2015
- 2015-04-14 KR KR1020150052579A patent/KR20160122542A/en not_active Application Discontinuation
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107886968A (en) * | 2017-12-28 | 2018-04-06 | 广州讯飞易听说网络科技有限公司 | Speech evaluating method and system |
KR20190099988A (en) * | 2018-02-19 | 2019-08-28 | 주식회사 셀바스에이아이 | Device for voice recognition using end point detection and method thereof |
KR20190139056A (en) * | 2018-06-07 | 2019-12-17 | 오승종 | System for providing language learning services based on speed listening |
CN109036404A (en) * | 2018-07-18 | 2018-12-18 | 北京小米移动软件有限公司 | Voice interactive method and device |
WO2020027394A1 (en) * | 2018-08-02 | 2020-02-06 | 미디어젠 주식회사 | Apparatus and method for evaluating accuracy of phoneme unit pronunciation |
KR20210079512A (en) * | 2019-12-20 | 2021-06-30 | 주식회사 에듀템 | Foreign language learning evaluation device |
KR20210111503A (en) * | 2020-03-03 | 2021-09-13 | 주식회사 셀바스에이아이 | Method for pronunciation assessment and device for pronunciation assessment using the same |
KR102261539B1 (en) * | 2020-06-02 | 2021-06-07 | 주식회사 날다 | System for providing artificial intelligence based korean culture platform service |
KR20220036239A (en) * | 2020-09-15 | 2022-03-22 | 주식회사 퀄슨 | Pronunciation evaluation system based on deep learning |
KR20220054964A (en) * | 2020-10-26 | 2022-05-03 | 주식회사 에듀템 | Foreign language pronunciation training and evaluation system |
CN112466335A (en) * | 2020-11-04 | 2021-03-09 | 吉林体育学院 | English pronunciation quality evaluation method based on accent prominence |
CN112466335B (en) * | 2020-11-04 | 2023-09-29 | 吉林体育学院 | English pronunciation quality evaluation method based on accent prominence |
KR20230040507A (en) * | 2021-09-16 | 2023-03-23 | 이동하 | Method and device for processing voice information |
KR102410644B1 (en) * | 2022-02-16 | 2022-06-22 | 주식회사 알투스 | Method, device and system for providing foreign language education contents service based on voice recognition using artificial intelligence |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR20160122542A (en) | Method and apparatus for measuring pronounciation similarity | |
US7401018B2 (en) | Foreign language learning apparatus, foreign language learning method, and medium | |
CN112397091B (en) | Chinese speech comprehensive scoring and diagnosing system and method | |
US8886534B2 (en) | Speech recognition apparatus, speech recognition method, and speech recognition robot | |
CN101661675B (en) | Self-sensing error tone pronunciation learning method and system | |
KR20150024180A (en) | Pronunciation correction apparatus and method | |
US20090138266A1 (en) | Apparatus, method, and computer program product for recognizing speech | |
US11810471B2 (en) | Computer implemented method and apparatus for recognition of speech patterns and feedback | |
CN108431883B (en) | Language learning system and language learning program | |
US11676572B2 (en) | Instantaneous learning in text-to-speech during dialog | |
WO2021074721A2 (en) | System for automatic assessment of fluency in spoken language and a method thereof | |
JP5105943B2 (en) | Utterance evaluation device and utterance evaluation program | |
Proença et al. | Automatic evaluation of reading aloud performance in children | |
KR20150024295A (en) | Pronunciation correction apparatus | |
CN107610691B (en) | English vowel sounding error correction method and device | |
JPH06110494A (en) | Pronounciation learning device | |
Kabashima et al. | Dnn-based scoring of language learners’ proficiency using learners’ shadowings and native listeners’ responsive shadowings | |
KR20080018658A (en) | Pronunciation comparation system for user select section | |
JP6599914B2 (en) | Speech recognition apparatus, speech recognition method and program | |
JP4296290B2 (en) | Speech recognition apparatus, speech recognition method and program | |
CN110164414B (en) | Voice processing method and device and intelligent equipment | |
US11043212B2 (en) | Speech signal processing and evaluation | |
JP6468584B2 (en) | Foreign language difficulty determination device | |
CN111508523A (en) | Voice training prompting method and system | |
CN112951208B (en) | Method and device for speech recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E601 | Decision to refuse application |