CN105513612A - Language vocabulary audio processing method and device - Google Patents

Language vocabulary audio processing method and device

Info

Publication number
CN105513612A
Authority
CN
China
Prior art keywords
score
word
pronunciation
sound signal
fluency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510873025.7A
Other languages
Chinese (zh)
Inventor
赖辉浪 (Lai Huilang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN201510873025.7A priority Critical patent/CN105513612A/en
Publication of CN105513612A publication Critical patent/CN105513612A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00 - Teaching not covered by other main groups of this subclass
    • G09B 19/06 - Foreign languages

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)

Abstract

An embodiment of the invention discloses a language vocabulary audio processing method and device. The method comprises the steps of: collecting an audio signal produced when a user reads a text aloud in a set language; performing audio recognition processing on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score; determining a final score for each word according to at least one of the fluency score, pronunciation score and volume score of that word; and prompting the user with any word whose final score is less than a preset score. Through the final scores, the user can locate the low-scoring words, i.e. the inaccurately pronounced words, while reading in the set language.

Description

Audio processing method and device for language vocabulary
Technical field
Embodiments of the present invention relate to the technical field of mobile terminals, and in particular to an audio processing method and device for language vocabulary.
Background art
At present, English learning plays an increasingly important role in everyday life: textbooks, tutoring books and study guides emerge in an endless stream, and with the development of science and technology, electronic devices for English learning are becoming ever more numerous and diverse in form.
However, most English-learning devices on the market only provide listening, viewing and reading functions. For example, when a user reads English words aloud, the lack of interaction with the device means the user cannot tell whether his or her pronunciation is correct, which hampers the learning of pronunciation.
Summary of the invention
Embodiments of the present invention provide an audio processing method and device for language vocabulary that enable a user to locate mispronounced words while reading English aloud.
In a first aspect, an embodiment of the present invention provides an audio processing method for language vocabulary, comprising:
collecting an audio signal produced when a user reads a text aloud in a set language;
performing speech recognition processing on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score;
determining a final score for each word according to at least one of the fluency score, pronunciation score and volume score corresponding to that word;
and prompting the user with any word whose final score is less than a preset score.
In a second aspect, an embodiment of the present invention further provides an audio processing device for language vocabulary, comprising:
a sound acquisition module, configured to collect the audio signal produced when the user reads a text aloud in the set language and to send the audio signal to a score obtaining module;
the score obtaining module, configured to perform speech recognition processing on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score, to determine a final score for each word according to at least one of the fluency score, pronunciation score and volume score corresponding to that word, and to send the final scores to an error prompt module;
and the error prompt module, configured to prompt the user with any word whose final score is less than the preset score.
In embodiments of the present invention, an audio signal produced when a user reads a text aloud in a set language is collected; speech recognition processing is performed on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score; a final score is determined for each word according to at least one of these scores; and the user is prompted with any word whose final score is less than a preset score. The embodiments apply equally to English learning: through the final scores, the user can locate the low-scoring words, i.e. the inaccurately pronounced words, while reading English aloud.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the audio processing method for language vocabulary provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic flowchart of the audio processing method for language vocabulary provided by Embodiment 2 of the present invention;
Fig. 3 is a schematic structural diagram of the audio processing device for language vocabulary provided by Embodiment 3 of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Embodiment 1
Fig. 1 is a schematic flowchart of the audio processing method for language vocabulary provided by Embodiment 1 of the present invention. The method of this embodiment may be executed by the audio processing device for language vocabulary provided by the embodiment of the present invention, or by a terminal device (for example, a smartphone or tablet computer) into which the device is integrated; the device may be implemented in hardware or software. As shown in Fig. 1, the method specifically comprises:
S11: collecting an audio signal produced when a user reads a text aloud in a set language.
The set language may be, for example, English, Chinese, Japanese, French or German.
Specifically, at least one microphone may be installed in the audio processing device for language vocabulary, or in the mobile terminal into which the device is integrated, and the microphone captures the audio signal produced when the user reads the text aloud in the set language.
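As an illustration only, since the patent does not tie the capture step to any particular API, the following sketch records a short mono clip from the default microphone with the Python sounddevice library; the sample rate and recording duration are assumed values.

    import numpy as np
    import sounddevice as sd

    SAMPLE_RATE = 16_000  # Hz, assumed for this sketch
    DURATION_S = 5.0      # seconds of reading to capture, assumed

    def capture_audio(duration_s: float = DURATION_S,
                      sample_rate: int = SAMPLE_RATE) -> np.ndarray:
        """Record a mono audio signal from the default microphone."""
        recording = sd.rec(int(duration_s * sample_rate),
                           samplerate=sample_rate,
                           channels=1,
                           dtype="float32")
        sd.wait()  # block until the recording is finished
        return recording.squeeze()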
S12: performing speech recognition processing on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score.
The fluency score characterizes how fluently the user reads: the higher the fluency score, the more proficient the user is with the word. The pronunciation score characterizes how well the user has mastered the word: the higher the pronunciation score, the better the mastery. The volume score characterizes the loudness with which the user utters the word: a higher score indicates a more suitable volume, since a volume that is too high or too low degrades the pronunciation of the word.
S13: determining a final score for each word according to at least one of the fluency score, pronunciation score and volume score corresponding to that word.
Specifically, if only a fluency score is obtained for a word, the fluency score is used as the final score of that word; if only a pronunciation score is obtained, the pronunciation score is used as the final score; if only a volume score is obtained, the volume score is used as the final score; and if at least two of the fluency, pronunciation and volume scores are obtained for the word, their weighted sum is used as the final score.
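A minimal Python sketch of this selection logic, using the 0.4/0.4/0.2 weighting that appears as an example further below; renormalising the weights when a sub-score is missing is an added assumption, not something stated in the description.

    from typing import Optional

    # Illustrative weights, matching the 0.4/0.4/0.2 example given below.
    WEIGHTS = {"fluency": 0.4, "pronunciation": 0.4, "volume": 0.2}

    def final_score(fluency: Optional[float] = None,
                    pronunciation: Optional[float] = None,
                    volume: Optional[float] = None) -> float:
        """Determine a word's final score from whichever sub-scores are available."""
        available = {name: value
                     for name, value in (("fluency", fluency),
                                         ("pronunciation", pronunciation),
                                         ("volume", volume))
                     if value is not None}
        if not available:
            raise ValueError("at least one sub-score is required")
        if len(available) == 1:
            # Only one score was obtained: use it directly as the final score.
            return next(iter(available.values()))
        # Two or more scores: use their weighted sum, renormalised so the
        # result stays on the same scale as the sub-scores (an assumption).
        total_weight = sum(WEIGHTS[name] for name in available)
        return sum(WEIGHTS[name] * value
                   for name, value in available.items()) / total_weight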
S14: prompting the user with any word whose final score is less than a preset score.
The preset score may be user-defined and may, for example, be set to 60.
Specifically, the lower a word's final score, the less accurately the user pronounced it; words scoring below 60 points are therefore fed back to the user as a reminder to practise them further.
In this embodiment, an audio signal produced when a user reads a text aloud in a set language is collected; speech recognition processing is performed on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score; a final score is determined for each word according to at least one of these scores; and the user is prompted with any word whose final score is less than the preset score. Through the final score of each word, this embodiment enables the user to locate the low-scoring, i.e. mispronounced, words while reading in the set language.
As an example, on the basis of the above embodiment, in order to evaluate a word's final score more accurately, all three scores (fluency, pronunciation and volume) may be consulted when determining the final score of each word; specifically:
the weighted sum of the fluency score, pronunciation score and volume score corresponding to each word is used as the final score of that word.
The weighting coefficients may be set according to how strongly the fluency, pronunciation and volume scores each affect the accuracy of a word's pronunciation. For example, since the fluency and pronunciation scores have the greater influence on pronunciation accuracy, their weighting coefficients may each be set to 0.4, with the volume score weighted 0.2, giving the following formula:
S = s1*0.4 + s2*0.4 + s3*0.2
where S is the final score, s1 is the fluency score, s2 is the pronunciation score and s3 is the volume score.
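As an illustration with assumed values, a word scoring s1 = 80 for fluency, s2 = 70 for pronunciation and s3 = 90 for volume would receive S = 80*0.4 + 70*0.4 + 90*0.2 = 32 + 28 + 18 = 78, which is above a preset score of 60 and would therefore not be flagged to the user.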
As an example, on the basis of the above embodiment, in order to reduce the impact of excessive volume on word pronunciation, before the speech recognition processing is performed on the audio signal the method further comprises:
transforming the time-domain audio signal into the frequency domain, and reducing, by a preset threshold, the frequencies corresponding to words whose frequency exceeds a preset frequency.
The preset frequency may be set to 1.4 kHz, and the preset threshold may be defined relative to the original frequency; for example, reducing by the preset threshold may mean right-shifting the original frequency value by two bits (i.e. dividing it by four).
Specifically, the time-domain audio signal may be transformed into the frequency domain by a Fourier transform or a wavelet transform; for each word, the corresponding frequency in the frequency domain is right-shifted by two bits and taken as the word's final frequency, which then participates in the subsequent volume-score calculation.
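A minimal sketch of this preprocessing step, assuming an FFT-based implementation with numpy (the description equally allows a wavelet transform). The 1.4 kHz cut-off comes from the description above; attenuating the high-frequency bins by a factor of four (a two-bit right shift) is one possible reading of "reducing by a preset threshold".

    import numpy as np

    def attenuate_high_frequencies(signal: np.ndarray,
                                   sample_rate: int,
                                   cutoff_hz: float = 1_400.0) -> np.ndarray:
        """Transform to the frequency domain, attenuate components above the
        cut-off by a factor of four (a two-bit right shift), and transform back."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        spectrum[freqs > cutoff_hz] /= 4.0  # "right-shift by two bits"
        return np.fft.irfft(spectrum, n=len(signal))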
As an example, on the basis of the above embodiments, performing speech recognition processing on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score specifically comprises:
calculating the waveform corresponding to each word in the audio signal, and obtaining from a database at least one of the fluency score, pronunciation score and volume score corresponding to that waveform.
The correspondence between the waveform of a correctly pronounced word and the associated fluency, pronunciation and volume scores is compiled statistically in advance and stored in the database.
In this embodiment, the waveform information corresponding to each word in the collected audio signal is calculated, and the database is searched with this waveform information to obtain the fluency score, pronunciation score and volume score corresponding to each word.
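The description does not say how a word's waveform is matched against the stored waveforms, so the sketch below is only one possible reading: each per-word segment is compared with the reference waveforms by normalised cross-correlation and the scores of the best match are returned. The database layout, function names and similarity measure are all assumptions.

    import numpy as np

    # Hypothetical reference database: word -> (reference waveform, score triple).
    REFERENCE_DB: dict[str, tuple[np.ndarray, dict[str, float]]] = {}

    def similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Normalised cross-correlation of two waveforms, truncated to equal length."""
        n = min(len(a), len(b))
        a, b = a[:n] - a[:n].mean(), b[:n] - b[:n].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom else 0.0

    def lookup_scores(word_waveform: np.ndarray) -> dict[str, float]:
        """Return the stored fluency/pronunciation/volume scores of the closest
        reference waveform in the database."""
        if not REFERENCE_DB:
            raise LookupError("reference database is empty")
        best_word = max(REFERENCE_DB,
                        key=lambda w: similarity(word_waveform, REFERENCE_DB[w][0]))
        return REFERENCE_DB[best_word][1]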
As an example, on the basis of the above embodiments, prompting the user with a word whose final score is less than the preset score comprises:
prompting the user to pronounce again the word whose final score is less than the preset score, or providing the accurate pronunciation of that word.
In each of the above embodiments, an audio signal produced when the user reads a text aloud in a set language is likewise collected; speech recognition processing is performed on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score; a final score is determined for each word according to at least one of these scores; and the user is prompted with any word whose final score is less than the preset score, so that the user can locate the low-scoring, i.e. mispronounced, words while reading in the set language.
Embodiment 2
Fig. 2 is a schematic flowchart of the audio processing method for language vocabulary provided by Embodiment 2 of the present invention. This embodiment is a preferred embodiment. As shown in Fig. 2, the method specifically comprises:
S21: collecting an audio signal produced when the user reads along with a text in a set language.
S22: transforming the time-domain audio signal into the frequency domain, and reducing, by the preset threshold, the frequencies corresponding to words whose frequency exceeds the preset frequency.
S23: performing speech recognition processing on the frequency-reduced audio signal to obtain the fluency score, pronunciation score and volume score corresponding to each word.
S24: using the weighted sum of the fluency score, pronunciation score and volume score corresponding to each word as the final score of that word.
S25: prompting the user with any word whose final score is less than the preset score.
S26: in response to the user tapping a word whose score is less than the preset score, reading the accurate pronunciation of that word aloud.
For a detailed description of each step in this embodiment and the corresponding technical effects, see the related description of the above embodiment; it is not repeated here.
Embodiment 3
Fig. 3 is a schematic structural diagram of the audio processing device for language vocabulary provided by Embodiment 3 of the present invention. As shown in Fig. 3, the device specifically comprises a sound acquisition module 31, a score obtaining module 32 and an error prompt module 33.
The sound acquisition module 31 is configured to collect the audio signal produced when the user reads a text aloud in the set language, and to send the audio signal to the score obtaining module 32.
The score obtaining module 32 is configured to perform speech recognition processing on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score, to determine a final score for each word according to at least one of the fluency score, pronunciation score and volume score corresponding to that word, and to send the final scores to the error prompt module 33.
The error prompt module 33 is configured to prompt the user with any word whose final score is less than the preset score.
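Purely as an illustration of how the three modules could be wired together in software, the sketch below models them as Python classes; all class names, field names and the injected callables are assumptions and do not come from the patent. The capture and scoring callables could be, for example, the helper sketches given in Embodiment 1.

    from dataclasses import dataclass
    from typing import Callable, Dict, List
    import numpy as np

    @dataclass
    class SoundAcquisitionModule:
        """Collects the audio signal the user produces while reading aloud."""
        capture: Callable[[], np.ndarray]  # e.g. a microphone-recording function

    @dataclass
    class ScoreObtainingModule:
        """Scores each word and combines the sub-scores into a final score."""
        score_words: Callable[[np.ndarray], Dict[str, Dict[str, float]]]
        weights: Dict[str, float]  # e.g. {"fluency": 0.4, "pronunciation": 0.4, "volume": 0.2}

        def final_scores(self, signal: np.ndarray) -> Dict[str, float]:
            # Weighted sum of the sub-scores for every recognised word.
            return {word: sum(self.weights[name] * value for name, value in subs.items())
                    for word, subs in self.score_words(signal).items()}

    @dataclass
    class ErrorPromptModule:
        """Prompts the user with every word whose final score is below the preset score."""
        preset_score: float = 60.0

        def prompt(self, final_scores: Dict[str, float]) -> List[str]:
            low = [word for word, score in final_scores.items() if score < self.preset_score]
            for word in low:
                print(f"Please practise the pronunciation of '{word}'.")
            return low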
The audio processing device for language vocabulary described in this embodiment of the present invention is used to perform the audio processing method for language vocabulary described in the above embodiments; its technical principle and technical effects are similar and are not repeated here.
As an example, on the basis of the above embodiment, the score obtaining module 32 is specifically configured to use the weighted sum of the fluency score, pronunciation score and volume score corresponding to each word as the final score of that word.
As an example, on the basis of the above embodiment, the score obtaining module 32 is further configured, before the speech recognition processing is performed on the audio signal, to transform the time-domain audio signal into the frequency domain and to reduce, by the preset threshold, the frequencies corresponding to words whose frequency exceeds the preset frequency.
As an example, on the basis of the above embodiment, the score obtaining module 32 is specifically configured to calculate the waveform corresponding to each word in the audio signal and to obtain from the database at least one of the fluency score, pronunciation score and volume score corresponding to that waveform.
As an example, on the basis of the above embodiment, the error prompt module 33 is specifically configured to prompt the user to pronounce again the word whose final score is less than the preset score, or to provide the accurate pronunciation of that word.
The audio processing device for language vocabulary described in each of the above embodiments is likewise used to perform the audio processing method for language vocabulary described in the above embodiments; its technical principle and technical effects are similar and are not repeated here.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described here; various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in further detail through the above embodiments, it is not limited to them; other equivalent embodiments may also be included without departing from the concept of the present invention, and the scope of the present invention is determined by the appended claims.

Claims (10)

1. An audio processing method for language vocabulary, characterized by comprising:
collecting an audio signal produced when a user reads a text aloud in a set language;
performing speech recognition processing on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score;
determining a final score for each word according to at least one of the fluency score, pronunciation score and volume score corresponding to that word;
and prompting the user with any word whose final score is less than a preset score.
2. The method according to claim 1, characterized in that determining the final score of each word according to at least one of the fluency score, pronunciation score and volume score corresponding to each word comprises:
using the weighted sum of the fluency score, pronunciation score and volume score corresponding to each word as the final score of that word.
3. The method according to claim 1, characterized in that, before the speech recognition processing is performed on the audio signal, the method further comprises:
transforming the time-domain audio signal into the frequency domain, and reducing, by a preset threshold, the frequencies corresponding to words whose frequency exceeds a preset frequency.
4. The method according to any one of claims 1 to 3, characterized in that performing speech recognition processing on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score comprises:
calculating the waveform corresponding to each word in the audio signal, and obtaining from a database at least one of the fluency score, pronunciation score and volume score corresponding to that waveform.
5. The method according to any one of claims 1 to 3, characterized in that prompting the user with a word whose final score is less than the preset score comprises:
prompting the user to pronounce again the word whose final score is less than the preset score, or providing the accurate pronunciation of the word whose final score is less than the preset score.
6. An audio processing device for language vocabulary, characterized by comprising:
a sound acquisition module, configured to collect an audio signal produced when a user reads a text aloud in a set language, and to send the audio signal to a score obtaining module;
the score obtaining module, configured to perform speech recognition processing on the audio signal to obtain, for each word, at least one of a fluency score, a pronunciation score and a volume score, to determine a final score for each word according to at least one of the fluency score, pronunciation score and volume score corresponding to that word, and to send the final scores to an error prompt module;
and the error prompt module, configured to prompt the user with any word whose final score is less than a preset score.
7. The device according to claim 6, characterized in that the score obtaining module is specifically configured to:
use the weighted sum of the fluency score, pronunciation score and volume score corresponding to each word as the final score of that word.
8. The device according to claim 6, characterized in that the score obtaining module is further configured to:
transform, before the speech recognition processing is performed on the audio signal, the time-domain audio signal into the frequency domain, and reduce, by a preset threshold, the frequencies corresponding to words whose frequency exceeds a preset frequency.
9. The device according to any one of claims 6 to 8, characterized in that the score obtaining module is specifically configured to:
calculate the waveform corresponding to each word in the audio signal, and obtain from a database at least one of the fluency score, pronunciation score and volume score corresponding to that waveform.
10. The device according to any one of claims 6 to 8, characterized in that the error prompt module is specifically configured to:
prompt the user to pronounce again the word whose final score is less than the preset score, or provide the accurate pronunciation of the word whose final score is less than the preset score.
CN201510873025.7A 2015-12-02 2015-12-02 Language vocabulary audio processing method and device Pending CN105513612A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510873025.7A CN105513612A (en) 2015-12-02 2015-12-02 Language vocabulary audio processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510873025.7A CN105513612A (en) 2015-12-02 2015-12-02 Language vocabulary audio processing method and device

Publications (1)

Publication Number Publication Date
CN105513612A true CN105513612A (en) 2016-04-20

Family

ID=55721538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510873025.7A Pending CN105513612A (en) 2015-12-02 2015-12-02 Language vocabulary audio processing method and device

Country Status (1)

Country Link
CN (1) CN105513612A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101740024A (en) * 2008-11-19 2010-06-16 中国科学院自动化研究所 Method for automatic evaluation based on generalized fluent spoken language fluency
US20110036231A1 (en) * 2009-08-14 2011-02-17 Honda Motor Co., Ltd. Musical score position estimating device, musical score position estimating method, and musical score position estimating robot
JP2012042454A (en) * 2010-08-17 2012-03-01 Honda Motor Co Ltd Position detector and position detecting method
CN102354495A (en) * 2011-08-31 2012-02-15 中国科学院自动化研究所 Testing method and system of semi-opened spoken language examination questions
CN104599680A (en) * 2013-10-30 2015-05-06 语冠信息技术(上海)有限公司 Real-time spoken language evaluation system and real-time spoken language evaluation method on mobile equipment
CN104810017A (en) * 2015-04-08 2015-07-29 广东外语外贸大学 Semantic analysis-based oral language evaluating method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
陈必红 (Chen Bihong): 《观测过程理论 修订版》 [Theory of Observation Process, Revised Edition], 30 November 2013 *
马育倩 (Ma Yuqian): 《模拟导游》 [Simulated Tour Guide], 31 August 2014 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106683665A (en) * 2016-11-23 2017-05-17 新绎健康科技有限公司 Audio scale analysis method and system
CN106683665B (en) * 2016-11-23 2020-04-17 新绎健康科技有限公司 Method and system for analyzing musical scale of audio
CN109697975A (en) * 2017-10-20 2019-04-30 深圳市鹰硕音频科技有限公司 A kind of Speech Assessment Methods and device
CN108122561A (en) * 2017-12-19 2018-06-05 广东小天才科技有限公司 A kind of spoken voice assessment method and electronic equipment based on electronic equipment

Similar Documents

Publication Publication Date Title
CN105304080B (en) Speech synthetic device and method
CN102930866B (en) Evaluation method for student reading assignment for oral practice
CN110648690B (en) Audio evaluation method and server
KR101616909B1 (en) Automatic scoring apparatus and method
CN106531185B (en) voice evaluation method and system based on voice similarity
CN103677729B (en) Voice input method and system
CN103559894B (en) Oral evaluation method and system
CN109817201B (en) Language learning method and device, electronic equipment and readable storage medium
CN103838866B (en) A kind of text conversion method and device
CN100397438C (en) Method for computer assisting learning of deaf-dumb Chinese language pronunciation
CN103559892B (en) Oral evaluation method and system
CN103594087B (en) Improve the method and system of oral evaluation performance
CN106847260A (en) A kind of Oral English Practice automatic scoring method of feature based fusion
CN112908355B (en) System and method for quantitatively evaluating teaching skills of teacher and teacher
CN104575519B (en) The method, apparatus of feature extracting method, device and stress detection
CN106611604A (en) An automatic voice summation tone detection method based on a deep neural network
CN110246507A (en) A kind of recognition methods of voice and device
CN103632668A (en) Method and apparatus for training English voice model based on Chinese voice information
CN103730032A (en) Method and system for controlling multimedia data
CN109147762A (en) A kind of audio recognition method and system
CN109741734A (en) A kind of speech evaluating method, device and readable medium
CN102723077B (en) Method and device for voice synthesis for Chinese teaching
CN105513612A (en) Language vocabulary audio processing method and device
CN111312255A (en) Pronunciation self-correcting device for word and pinyin tones based on voice recognition
CN103559289A (en) Language-irrelevant keyword search method and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160420

RJ01 Rejection of invention patent application after publication