CN109255988A - Interactive learning methods based on incorrect pronunciations detection - Google Patents

Interactive learning methods based on incorrect pronunciations detection

Info

Publication number
CN109255988A
CN109255988A (application CN201810755225.6A)
Authority
CN
China
Prior art keywords
reading
chunking
split
user
sentence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810755225.6A
Other languages
Chinese (zh)
Inventor
王崝骁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Mei Gao Sen Education Technology Co Ltd
Original Assignee
Beijing Mei Gao Sen Education Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mei Gao Sen Education Technology Co Ltd filed Critical Beijing Mei Gao Sen Education Technology Co Ltd
Priority to CN201810755225.6A priority Critical patent/CN109255988A/en
Publication of CN109255988A publication Critical patent/CN109255988A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/06Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B5/065Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems

Abstract

The invention discloses an interactive learning method based on mispronunciation detection, comprising the steps of: S1, based on the display module of a language learner, displaying a text-and-picture sentence and playing the sentence audio; S2, the user reads the sentence aloud, providing a voice input within a first set time of T seconds after hearing the audio, while a first detection module provided in the language learner detects the user's read-aloud voice input; if no read-aloud input is detected within T seconds, the user is prompted to read again, until the user's read-aloud voice input is detected. The invention improves the user's learning efficiency, improves the accuracy of mispronunciation detection and recognition, and enhances the functionality of the language learning device.

Description

Interactive learning methods based on incorrect pronunciations detection
Technical field
The present invention relates to the technical field of speech detection, and more specifically to an interactive learning method based on mispronunciation detection.
Background technique
Language learners combine the learning characteristics of different languages with computer and communication technology: they can assist users in learning grammar, listening, reading, and writing, and can also provide examination training, reading training, listening-and-reading training, audiovisual training, and the like. However, existing language learner technology suffers from low user learning efficiency, low accuracy in detecting and recognizing mispronunciations, and low data-processing efficiency.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art and provide an interactive learning method based on mispronunciation detection that improves the user's learning efficiency, improves the accuracy of mispronunciation detection and recognition, and enhances the functionality of the language learning device.
The object of the present invention is achieved through the following technical solution: an interactive learning method based on mispronunciation detection, comprising:
S1. Based on the display module of the language learner, a text-and-picture sentence is displayed and the sentence audio is played;
S2. The user reads the sentence aloud, providing a voice input within a first set time of T seconds after hearing the audio, while a first detection module provided in the language learner detects the user's read-aloud voice input; if no read-aloud input is detected within T seconds, the user is prompted to read again, until the user's read-aloud voice input is detected;
S3. A judgment module provided in the language learner judges whether the content of the user's read-aloud voice input is correct; if correct, the next sentence is entered; if wrong, the read-aloud is repeated; if the same sentence is read incorrectly three times in a row, the sentence is split, completing the first pass;
S4. After the user completes the first pass, a second detection module provided in the language learner detects whether any sentence was read repeatedly; if so, a second pass is carried out;
S5. After the second pass, for each subsequent pass, a computing module provided in the language learner counts the sentences that were read repeatedly; if the number of repeated sentences decreases from the previous pass by less than a set ratio, the read-aloud learning is completed, otherwise it continues.
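As an illustration only, the pass structure of steps S1 to S5 can be sketched as follows. This is a minimal sketch under assumptions, not the patented implementation: the injected callables (play_sentence, capture_followup, is_correct, split_and_drill), the value of T, and the stopping ratio are all hypothetical placeholders for the modules the description names.

```python
# Hypothetical sketch of the S1-S5 read-aloud pass loop.
T_SECONDS = 5      # the first set time T (assumed value)
STOP_RATIO = 0.5   # the set ratio of the S5 stopping criterion (assumed value)

def run_pass(sentences, play_sentence, capture_followup, is_correct, split_and_drill):
    """One read-aloud pass; returns the sentences that needed repetition."""
    repeated = []
    for s in sentences:
        attempts = 0
        while True:
            play_sentence(s)                              # S1: show picture, play audio
            audio = capture_followup(timeout=T_SECONDS)   # S2: wait up to T seconds
            if audio is None:
                continue                                  # S2: no input -> prompt again
            if is_correct(s, audio):                      # S3: judge the read-aloud
                break
            attempts += 1
            if attempts >= 3:                             # S3: three failures -> split
                split_and_drill(s)
                break
        if attempts > 0:
            repeated.append(s)
    return repeated

def learn(sentences, **modules):
    prev = run_pass(sentences, **modules)                 # first pass (S1-S3)
    if not prev:
        return
    curr = run_pass(prev, **modules)                      # S4: second pass over repeats
    while prev and curr:                                  # S5: subsequent passes
        reduction = (len(prev) - len(curr)) / len(prev)
        if reduction < STOP_RATIO:                        # improvement below set ratio
            return                                        # -> read-aloud learning done
        prev, curr = curr, run_pass(curr, **modules)
```

In this reading of S5, learning stops once the pass-to-pass decrease in repeated sentences falls below the set ratio, i.e. once further passes no longer help much.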
Further, in step S3, whether the content of the user's read-aloud voice input is correct is judged with the following method:
First, the standard pronunciation of the sentence audio in step S1 and of its chunk groups is pre-stored;
Second, the user's read-aloud voice input is split into a chunk group, the chunk group being split into chunks, sub-chunks, secondary chunks, and further-level chunks, wherein a chunk can be split into two or more sub-chunks, a sub-chunk can be split into two or more secondary chunks, and so on, until the read-aloud input is split down to words; a disyllabic or polysyllabic word is further split into syllables;
Third, the user's read-aloud voice input, split into a chunk group, is checked against the pre-stored standard pronunciation, thereby detecting whether any pronunciation is wrong.
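The chunk-group check above can be illustrated with a small sketch. The nested Chunk tree, the string form of the standard pronunciation, and the recognized mapping are all assumptions made for illustration; the patent does not specify how pronunciations are represented or compared, so a real system would substitute an acoustic comparison here.

```python
# Hypothetical sketch of the chunk-group check: a sentence is modeled as a
# tree of chunks, sub-chunks, and so on down to words and syllables, each
# carrying a pre-stored standard pronunciation string.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str                  # text spanned by this chunk
    standard: str              # pre-stored standard pronunciation (assumed string form)
    children: list = field(default_factory=list)  # finer chunks, down to syllables

def find_errors(chunk, recognized):
    """Return the deepest chunks whose pronunciation mismatches the standard.

    `recognized` maps chunk text to the pronunciation recognized from the
    user's read-aloud input (a stand-in for a speech recognizer).
    """
    if recognized.get(chunk.text) == chunk.standard:
        return []              # this chunk was pronounced correctly
    if not chunk.children:
        return [chunk]         # a leaf (word or syllable) is the error site
    errors = []
    for child in chunk.children:  # descend to locate the error precisely
        errors.extend(find_errors(child, recognized))
    return errors or [chunk]
```

Splitting down to syllables, as the method describes for disyllabic and polysyllabic words, lets the check report the smallest mispronounced unit rather than the whole sentence.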
Further, in step S3, the sentence is split with the following method:
S31. A first-level split is carried out first: the sentence is split into multiple first-level chunks to be read aloud one by one; if the user reads a chunk correctly, the next chunk is entered; if wrong, that chunk undergoes a second-level split;
S32. The erroneous first-level chunk is split into multiple second-level chunks to be read in turn; if correct, the next chunk is entered; if wrong, splitting continues down to the defined bottom level;
S33. If a second-level chunk is still read incorrectly, it is judged whether a key-sound or phonetic-symbol tutorial video exists for the chunk; if so, the user watches the video to learn; if not, the chunk is read repeatedly until correct;
S34. Once all chunks at the current split level are read correctly, the parent chunk is returned to and the read-aloud continues; if wrong, splitting continues; if correct, the next chunk is entered, returning to the parent level until all are correct and the sentence is read correctly, after which the next sentence is entered.
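The recursive descent of S31 to S34 can be sketched as follows. The helper callables are assumptions: drill(chunk) returns True once the user reads the chunk correctly within a bounded number of tries, split(chunk) returns the next-level chunks (or an empty list at the defined bottom level), and tutorial(chunk) plays a key-sound or phonetic-symbol video if one exists. This is a sketch of the control flow only, not the claimed implementation.

```python
# Hypothetical sketch of the recursive split-and-drill of S31-S34.
def drill_chunk(chunk, drill, split, tutorial):
    """Drill one chunk; recurse into finer splits on persistent errors."""
    if drill(chunk):
        return                      # read correctly: move to the next chunk
    parts = split(chunk)            # S31/S32: split one level down
    if not parts:                   # defined bottom level reached (S33)
        tutorial(chunk)             # watch the tutorial video if one exists
        while not drill(chunk):     # otherwise repeat until correct
            pass
        return
    for part in parts:              # drill each finer chunk in turn
        drill_chunk(part, drill, split, tutorial)
    while not drill(chunk):         # S34: return to the parent chunk
        for part in split(chunk):   # still wrong: continue splitting
            drill_chunk(part, drill, split, tutorial)

def drill_sentence(sentence, drill, split, tutorial):
    for chunk in split(sentence):   # S31: first-level split of the sentence
        drill_chunk(chunk, drill, split, tutorial)
    while not drill(sentence):      # the whole sentence must end correct
        pass
```

The recursion mirrors the description: errors push the user down to ever finer chunks, and success climbs back up until the full sentence is read correctly.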
The beneficial effects of the present invention are:
(1) The invention discloses the implementation process of an interactive learning method based on a mispronunciation-detection mechanism, improving on existing conventional learning methods. It covers the mechanism, the corresponding modules, and the functional logic (including the detailed implementation process) by which the language learner detects the user's mispronunciations, providing a complete learning solution based on mispronunciation detection. On one hand, accurate detection and recognition of mispronunciations satisfies the user's need to assess pronunciation during study and improves learning efficiency; on the other hand, improving the mispronunciation recognition accuracy of the speech detection technology enhances the speech detection function of the language learning device, so that the device gains accurate mispronunciation detection while improving data-processing efficiency and running fluency, making it more competitive in the market.
(2) The device in the present invention can detect language chunks, sub-chunks, secondary chunks, and further-level chunks, a capability not yet seen in current language learners, and discloses its concrete technical implementation, providing those skilled in the art with a specific solution. This solution improves the learner's results and can be used in schools, education and training institutions, personal settings, and the like, with high market application value.
(3) The training method provided by the invention greatly simplifies the complexity and reduces the difficulty of existing foreign-language teaching in China, particularly in spoken language, and is expected to end the pain point of Chinese foreign-language teaching known as "Dumb English" or "deaf-mute English".
Detailed description of the invention
In order to explain the embodiments of the invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative labor.
Fig. 1 is a flowchart of the method steps of the invention.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the drawings, but the protection scope of the invention is not limited to the following description. All features disclosed in this specification, and all steps of any disclosed method or process, may be combined in any way, except for mutually exclusive features and/or steps.
Unless specifically stated, any feature disclosed in this specification (including any accompanying claims, abstract, and drawings) may be replaced by an alternative feature that is equivalent or serves a similar purpose. That is, unless specifically stated, each feature is only one example of a series of equivalent or similar features.
Specific embodiments of the present invention are described in detail below. It should be noted that the embodiments described here are for illustration only and are not intended to limit the invention. In the following description, many specific details are set forth to provide a thorough understanding of the invention. It will be apparent to those skilled in the art, however, that the invention can be carried out without these specific details. In other instances, well-known circuits, software, or methods are not described in detail to avoid obscuring the invention.
As shown in Fig. 1, an interactive learning method based on mispronunciation detection comprises:
S1. Based on the display module of the language learner, a text-and-picture sentence is displayed and the sentence audio is played;
S2. The user reads the sentence aloud, providing a voice input within a first set time of T seconds after hearing the audio, while a first detection module provided in the language learner detects the user's read-aloud voice input; if no read-aloud input is detected within T seconds, the user is prompted to read again, until the user's read-aloud voice input is detected;
S3. A judgment module provided in the language learner judges whether the content of the user's read-aloud voice input is correct; if correct, the next sentence is entered; if wrong, the read-aloud is repeated; if the same sentence is read incorrectly three times in a row, the sentence is split, completing the first pass;
S4. After the user completes the first pass, a second detection module provided in the language learner detects whether any sentence was read repeatedly; if so, a second pass is carried out;
S5. After the second pass, for each subsequent pass, a computing module provided in the language learner counts the sentences that were read repeatedly; if the number of repeated sentences decreases from the previous pass by less than a set ratio, the read-aloud learning is completed, otherwise it continues.
Further, in step S3, whether the content of the user's read-aloud voice input is correct is judged with the following method:
First, the standard pronunciation of the sentence audio in step S1 and of its chunk groups is pre-stored;
Second, the user's read-aloud voice input is split into a chunk group, the chunk group being split into chunks, sub-chunks, secondary chunks, and further-level chunks, wherein a chunk can be split into two or more sub-chunks, a sub-chunk can be split into two or more secondary chunks, and so on, until the read-aloud input is split down to words; a disyllabic or polysyllabic word is further split into syllables;
Third, the user's read-aloud voice input, split into a chunk group, is checked against the pre-stored standard pronunciation, thereby detecting whether any pronunciation is wrong.
Further, in step S3, the sentence is split with the following method:
S31. A first-level split is carried out first: the sentence is split into multiple first-level chunks to be read aloud one by one; if the user reads a chunk correctly, the next chunk is entered; if wrong, that chunk undergoes a second-level split;
S32. The erroneous first-level chunk is split into multiple second-level chunks to be read in turn; if correct, the next chunk is entered; if wrong, splitting continues down to the defined bottom level;
S33. If a second-level chunk is still read incorrectly, it is judged whether a key-sound or phonetic-symbol tutorial video exists for the chunk; if so, the user watches the video to learn; if not, the chunk is read repeatedly until correct;
S34. Once all chunks at the current split level are read correctly, the parent chunk is returned to and the read-aloud continues; if wrong, splitting continues; if correct, the next chunk is entered, returning to the parent level until all are correct and the sentence is read correctly, after which the next sentence is entered.
Embodiment 1
As shown in Fig. 1, an interactive learning method based on mispronunciation detection comprises:
S1. Based on the display module of the language learner, a text-and-picture sentence is displayed and the sentence audio is played;
S2. The user reads the sentence aloud, providing a voice input within a first set time of T seconds after hearing the audio, while a first detection module provided in the language learner detects the user's read-aloud voice input; if no read-aloud input is detected within T seconds, the user is prompted to read again, until the user's read-aloud voice input is detected;
S3. A judgment module provided in the language learner judges whether the content of the user's read-aloud voice input is correct; if correct, the next sentence is entered; if wrong, the read-aloud is repeated; if the same sentence is read incorrectly three times in a row, the sentence is split, completing the first pass;
S4. After the user completes the first pass, a second detection module provided in the language learner detects whether any sentence was read repeatedly; if so, a second pass is carried out;
S5. After the second pass, for each subsequent pass, a computing module provided in the language learner counts the sentences that were read repeatedly; if the number of repeated sentences decreases from the previous pass by less than a set ratio, the read-aloud learning is completed, otherwise it continues.
Those skilled in the art can flexibly select the remaining technical features of this embodiment according to the actual situation to meet specific practical needs. It will be apparent to those skilled in the art that the invention can be carried out without these specific details. In other instances, well-known algorithms, methods, or systems are not described in detail to avoid obscuring the invention; they fall within the technical protection scope claimed in the claims of the present invention.
Embodiment 2
A language learner system based on mispronunciation detection, comprising:
The language learner system is provided with a display module, a first detection module, a judgment module, a second detection module, and a computing module.
The display module displays a text-and-picture sentence and plays the sentence audio.
The first detection module detects the user's read-aloud voice input within the first set time of T seconds after the user hears the audio; if no read-aloud input is detected within T seconds, it prompts the user to read again, until the user's read-aloud voice input is detected.
The judgment module judges whether the content of the user's read-aloud voice input is correct; if correct, the next sentence is entered; if wrong, the read-aloud is repeated; if the same sentence is read incorrectly three times in a row, the sentence is split, completing the first pass.
The second detection module detects, after the user completes the first pass, whether any sentence was read repeatedly; if so, a second pass is carried out.
The computing module, after the second pass, counts for each subsequent pass the sentences that were read repeatedly; if the number of repeated sentences decreases from the previous pass by less than a set ratio, the read-aloud learning is completed, otherwise it continues.
Further, the judgment module comprises:
a pre-input module for pre-storing the standard pronunciation of the sentence audio shown by the display module and of its chunk groups;
a splitting module for splitting the user's read-aloud voice input into a chunk group, the chunk group being split into chunks, sub-chunks, secondary chunks, and further-level chunks, wherein a chunk can be split into two or more sub-chunks, a sub-chunk can be split into two or more secondary chunks, and so on, until the read-aloud input is split down to words, with a disyllabic or polysyllabic word further split into syllables;
a mispronunciation checking module for checking the user's read-aloud voice input, split into a chunk group, against the pre-stored standard pronunciation, thereby detecting whether any pronunciation is wrong.
Further, the judgment module includes a first module, a second module, a third module, and a fourth module.
The first module first performs a first-level split: the sentence is split into multiple first-level chunks to be read aloud one by one; if the user reads a chunk correctly, the next chunk is entered; if wrong, that chunk undergoes a second-level split.
The second module splits the erroneous first-level chunk into multiple second-level chunks to be read in turn; if correct, the next chunk is entered; if wrong, splitting continues down to the defined bottom level.
The third module judges whether a second-level chunk is correct or wrong; if the chunk is still read incorrectly, it judges whether a key-sound or phonetic-symbol tutorial video exists for the chunk; if so, the user watches the video to learn; if not, the chunk is read repeatedly until correct.
The fourth module, once all chunks at the current split level are read correctly, returns to the parent chunk and continues the read-aloud; if wrong, splitting continues; if correct, the next chunk is entered, returning to the parent level until all are correct and the sentence is read correctly, after which the next sentence is entered.
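For illustration only, the module layout of Embodiment 2 can be composed roughly as follows. Every injected callable here is a hypothetical stand-in: the patent describes the modules functionally (display, detection, judgment, counting) without specifying their internals, so this sketch only shows how they might be wired together.

```python
# Hypothetical composition sketch of the Embodiment 2 module layout.
class LanguageLearner:
    def __init__(self, display, first_detector, judge, second_detector, calculator):
        self.display = display                  # shows the picture, plays the audio
        self.first_detector = first_detector    # detects a read-aloud within T seconds
        self.judge = judge                      # checks the chunk-group pronunciation
        self.second_detector = second_detector  # finds sentences repeated in a pass
        self.calculator = calculator            # counts per-pass repeated sentences

    def study_sentence(self, sentence):
        """Run one S1-S3 cycle for a single sentence."""
        self.display(sentence)
        audio = self.first_detector()
        return self.judge(sentence, audio)
```

Keeping the five modules as separate injected components matches the description's division of labor and makes each one replaceable, for example swapping the judge for a different pronunciation scorer.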
The foregoing method embodiments are, for simplicity of description, expressed as series of action combinations; however, those skilled in the art should understand that the application is not limited by the described action sequence, because according to the application certain steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are preferred embodiments, and the actions and units involved are not necessarily required by the application.
Those skilled in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be realized with electronic hardware, or with a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to realize the described functions for each specific application, but such realization should not be regarded as exceeding the scope of the invention.
The disclosed system, modules, and methods may be implemented in other ways. For example, the device embodiments described above are only schematic: the division into units is only a logical functional division, and other division manners are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. Further, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate parts may or may not be physically separated, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
If the functions are realized in the form of software functional units and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions for causing a computer device (a personal computer, server, network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), or a magnetic or optical disk.
Those of ordinary skill in the art will appreciate that all or part of the processes in the methods of the above embodiments can be completed by a computer program instructing the relevant hardware. The program can be stored in a computer-readable storage medium and, when executed, may include the processes of the embodiments of each of the above methods. The storage medium may be a magnetic disk, an optical disc, a ROM, a RAM, or the like.
The above is only a preferred embodiment of the present invention. It should be understood that the invention is not limited to the forms disclosed herein, which should not be regarded as excluding other embodiments; the invention can be used in other combinations, modifications, and environments, and can be modified within the scope contemplated herein through the above teachings or through the technology or knowledge of the related fields. Modifications and changes made by those skilled in the art that do not depart from the spirit and scope of the invention shall all fall within the protection scope of the appended claims of the invention.

Claims (3)

1. An interactive learning method based on mispronunciation detection, characterized by comprising:
S1. Based on the display module of the language learner, a text-and-picture sentence is displayed and the sentence audio is played;
S2. The user reads the sentence aloud, providing a voice input within a first set time of T seconds after hearing the audio, while a first detection module provided in the language learner detects the user's read-aloud voice input; if no read-aloud input is detected within T seconds, the user is prompted to read again, until the user's read-aloud voice input is detected;
S3. A judgment module provided in the language learner judges whether the content of the user's read-aloud voice input is correct; if correct, the next sentence is entered; if wrong, the read-aloud is repeated; if the same sentence is read incorrectly three times in a row, the sentence is split, completing the first pass;
S4. After the user completes the first pass, a second detection module provided in the language learner detects whether any sentence was read repeatedly; if so, a second pass is carried out;
S5. After the second pass, for each subsequent pass, a computing module provided in the language learner counts the sentences that were read repeatedly; if the number of repeated sentences decreases from the previous pass by less than a set ratio, the read-aloud learning is completed, otherwise it continues.
2. The interactive learning method based on mispronunciation detection according to claim 1, characterized in that, in step S3, whether the content of the user's read-aloud voice input is correct is judged with the following method:
First, the standard pronunciation of the sentence audio in step S1 and of its chunk groups is pre-stored;
Second, the user's read-aloud voice input is split into a chunk group, the chunk group being split into chunks, sub-chunks, secondary chunks, and further-level chunks, wherein a chunk can be split into two or more sub-chunks, a sub-chunk can be split into two or more secondary chunks, and so on, until the read-aloud input is split down to words; a disyllabic or polysyllabic word is further split into syllables;
Third, the user's read-aloud voice input, split into a chunk group, is checked against the pre-stored standard pronunciation, thereby detecting whether any pronunciation is wrong.
3. The interactive learning method based on mispronunciation detection according to claim 1, characterized in that, in step S3, the sentence is split with the following method:
S31. A first-level split is carried out first: the sentence is split into multiple first-level chunks to be read aloud one by one; if the user reads a chunk correctly, the next chunk is entered; if wrong, that chunk undergoes a second-level split;
S32. The erroneous first-level chunk is split into multiple second-level chunks to be read in turn; if correct, the next chunk is entered; if wrong, splitting continues down to the defined bottom level;
S33. If a second-level chunk is still read incorrectly, it is judged whether a key-sound or phonetic-symbol tutorial video exists for the chunk; if so, the user watches the video to learn; if not, the chunk is read repeatedly until correct;
S34. Once all chunks at the current split level are read correctly, the parent chunk is returned to and the read-aloud continues; if wrong, splitting continues; if correct, the next chunk is entered, returning to the parent level until all are correct and the sentence is read correctly, after which the next sentence is entered.
CN201810755225.6A 2018-07-11 2018-07-11 Interactive learning methods based on incorrect pronunciations detection Pending CN109255988A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810755225.6A CN109255988A (en) 2018-07-11 2018-07-11 Interactive learning methods based on incorrect pronunciations detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810755225.6A CN109255988A (en) 2018-07-11 2018-07-11 Interactive learning methods based on incorrect pronunciations detection

Publications (1)

Publication Number Publication Date
CN109255988A true CN109255988A (en) 2019-01-22

Family

ID=65051650

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810755225.6A Pending CN109255988A (en) 2018-07-11 2018-07-11 Interactive learning methods based on incorrect pronunciations detection

Country Status (1)

Country Link
CN (1) CN109255988A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111028566A (en) * 2019-12-12 2020-04-17 广州三人行壹佰教育科技有限公司 Live broadcast teaching method, device, terminal and storage medium
CN111081083A (en) * 2019-07-11 2020-04-28 广东小天才科技有限公司 Method for dictating, reporting and reading and electronic equipment

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1510590A (en) * 2002-12-24 2004-07-07 英业达股份有限公司 Language learning system and method with visual prompting to pronunciation
CN1581130A (en) * 2003-08-15 2005-02-16 彭文富 Interactive language-learning method with speech-sound identification function
CN101105939A (en) * 2007-09-04 2008-01-16 安徽科大讯飞信息科技股份有限公司 Sonification guiding method
CN101739870A (en) * 2009-12-03 2010-06-16 深圳先进技术研究院 Interactive language learning system and method
CN102054376A (en) * 2009-11-11 2011-05-11 林汉忠 Word memory learning system
CN202446825U (en) * 2011-12-20 2012-09-26 安徽科大讯飞信息科技股份有限公司 Child intelligent voice interaction cellphone toy with listening and repeating function
CN103413468A (en) * 2013-08-20 2013-11-27 苏州跨界软件科技有限公司 Parent-child educational method based on a virtual character
CN105006228A (en) * 2015-06-05 2015-10-28 普天智能照明研究院有限公司 Speech recognition method
CN105070118A (en) * 2015-07-30 2015-11-18 广东小天才科技有限公司 Method of correcting pronunciation aiming at language class learning and device of correcting pronunciation aiming at language class learning
CN205230403U (en) * 2015-12-11 2016-05-11 北华大学 Novel english teaching machine
CN106454491A (en) * 2016-09-30 2017-02-22 天脉聚源(北京)传媒科技有限公司 Method and device for playing voice information in video smartly
CN106504755A (en) * 2016-11-08 2017-03-15 广东小天才科技有限公司 A kind of recognition method of incorrect pronunciations and device, user terminal
CN106856066A (en) * 2016-12-20 2017-06-16 杨燕 Word splits memory learning system
CN107067834A (en) * 2017-03-17 2017-08-18 麦片科技(深圳)有限公司 Point-of-reading system with oral evaluation function
CN107067833A (en) * 2017-02-15 2017-08-18 山东顺势教育科技有限公司 A kind of speech recognition and evaluation system
CN107862915A (en) * 2017-10-26 2018-03-30 费非 Multimedia file with read method and apparatus
CN107908674A (en) * 2017-10-26 2018-04-13 费非 Phonetic decision method and device, storage medium and processor
JP2018092042A (en) * 2016-12-05 2018-06-14 株式会社 スプリックス Writing learning support system, writing learning support server, writing learning support device, writing learning support method and writing learning support program

Similar Documents

Publication Publication Date Title
CN106652622B (en) A kind of text training method and device
CN109712446A (en) Interactive learning methods based on new word detection
CN105159924A (en) Learning resource pushing method and system
CN109102824B (en) Voice error correction method and device based on man-machine interaction
CN109461459A (en) Speech assessment method, apparatus, computer equipment and storage medium
CN105070130A (en) Level assessment method and level assessment system
US11068659B2 (en) System, method and computer program product for determining a decodability index for one or more words
CN109741641A Language learning system based on new word detection
CN105488142A (en) Student score information input method and system
US10586297B2 (en) Learning progress monitoring system
CN109255988A (en) Interactive learning methods based on incorrect pronunciations detection
CN109147419A (en) Language learner system based on incorrect pronunciations detection
CN108847076A (en) The assessment method of language learner
AU2018229559A1 (en) A Method and System to Improve Reading
KR101984063B1 (en) System for learning the english
CN111079489B (en) Content identification method and electronic equipment
CN110349567A (en) The recognition methods and device of voice signal, storage medium and electronic device
US11908488B2 (en) Assessing reading ability through grapheme-phoneme correspondence analysis
US20190189026A1 (en) Systems and Methods for Automatically Integrating a Machine Learning Component to Improve a Spoken Language Skill of a Speaker
Jo et al. Effective computer‐assisted pronunciation training based on phone‐sensitive word recommendation
Shivakumar et al. AI-ENABLED LANGUAGE SPEAKING COACHING FOR DUAL LANGUAGE LEARNERS.
KR101918839B1 (en) Apparatus and method for providing learning contents using binary principle
CN110459079A Spelling training method for text new words based on voice vocabulary
Shukla et al. iLeap: A Human-Ai Teaming Based Mobile Language Learning Solution for Dual Language Learners in Early and Special Educations.
Shukla Development of a Human-AI Teaming Based Mobile Language Learning Solution for Dual Language Learners in Early and Special Educations

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190122