CN104639742A - Method and device for assisting in learning speaking through mobile terminal - Google Patents

Method and device for assisting in learning speaking through mobile terminal

Info

Publication number
CN104639742A
Authority
CN
China
Prior art keywords
text
voice
user
pronunciation standard
reach
Prior art date
Legal status
Granted
Application number
CN201510009052.XA
Other languages
Chinese (zh)
Other versions
CN104639742B (en)
Inventor
施锐彬
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN201510009052.XA
Publication of CN104639742A
Application granted
Publication of CN104639742B
Legal status: Active


Abstract

The invention is suitable for the technical field of mobile terminals, and provides a method and a device for assisting in learning speaking through a mobile terminal. The method comprises the following steps: when unlocking request information is received, acquiring a text and displaying the acquired text; receiving voice recorded by a user according to the text, and judging whether the voice reaches the pronunciation standard corresponding to the text; and if the voice reaches the pronunciation standard corresponding to the text, executing an unlocking operation. When the unlocking request information is received, a text is displayed on the unlocking interface, the user is prompted to record voice according to the text, the voice recorded by the user according to the text is received, and the unlocking operation is executed when the recorded voice reaches the pronunciation standard corresponding to the text. By exploiting the fact that a mobile terminal must be unlocked frequently during use, the user is actively prompted to practise speaking during fragmented time, so that the learning effect and learning efficiency of mobile-terminal-assisted spoken language learning are greatly improved.

Description

Method and device for assisting spoken language learning through a mobile terminal
Technical field
The invention belongs to the technical field of mobile terminals, and in particular relates to a method and a device for assisting spoken language learning through a mobile terminal.
Background art
More and more users practise the pronunciation of words or sentences with spoken language learning software on a mobile terminal in order to improve their oral ability. However, each time a user wants to practise speaking, the spoken language learning software must be opened deliberately; that is, spoken language practice only takes place when the user actively opens the software. The existing way of assisting spoken language learning through a mobile terminal therefore cannot make use of fragmented time, and requires the user to set aside considerable time to keep the learning going. Many users also find it difficult to persist over the long term, so their spoken language learning is interrupted.
Summary of the invention
In view of this, embodiments of the present invention provide a method and a device for assisting spoken language learning through a mobile terminal, so as to solve the problem that the existing way of assisting spoken language learning through a mobile terminal cannot make use of fragmented time, which results in a poor learning effect and low learning efficiency.
In a first aspect, an embodiment of the present invention provides a method for assisting spoken language learning through a mobile terminal, comprising:
when unlocking request information is received, acquiring a text, and displaying the acquired text;
receiving voice recorded by a user according to the text, and judging whether the voice reaches the pronunciation standard corresponding to the text;
if the voice reaches the pronunciation standard corresponding to the text, executing an unlocking operation.
In a second aspect, an embodiment of the present invention provides a device for assisting spoken language learning through a mobile terminal, comprising:
a text display unit, configured to acquire a text when unlocking request information is received, and display the acquired text;
a pronunciation judging unit, configured to receive the voice recorded by a user according to the text, and judge whether the voice reaches the pronunciation standard corresponding to the text;
an unlocking unit, configured to execute an unlocking operation if the voice reaches the pronunciation standard corresponding to the text.
Compared with the prior art, the embodiments of the present invention have the following beneficial effect: when unlocking request information is received, a text is displayed on the unlocking interface and the user is prompted to record voice according to the text; the voice recorded by the user according to the text is received, and the unlocking operation is executed when the recorded voice reaches the pronunciation standard corresponding to the text. The embodiments thereby exploit the fact that a mobile terminal must be unlocked frequently during use and actively prompt the user to practise speaking during fragmented time, which greatly improves the learning effect and learning efficiency of mobile-terminal-assisted spoken language learning.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of the implementation of the method for assisting spoken language learning through a mobile terminal provided by an embodiment of the present invention;
Fig. 2 is a flowchart of a specific implementation of step S103 of the method, namely judging whether the voice reaches the pronunciation standard corresponding to the text;
Fig. 3 is a flowchart of the implementation of the method for assisting spoken language learning through a mobile terminal provided by another embodiment of the present invention;
Fig. 4 is a flowchart of the implementation of the method for assisting spoken language learning through a mobile terminal provided by yet another embodiment of the present invention;
Fig. 5 is a structural block diagram of the device for assisting spoken language learning through a mobile terminal provided by an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention and are not intended to limit it.
Fig. 1 shows the implementation flow of the method for assisting spoken language learning through a mobile terminal provided by an embodiment of the present invention, the details of which are as follows.
In step S101, when unlocking request information is received, a text is acquired, and the acquired text is displayed.
As an embodiment of the present invention, the user selects whether the spoken-practice unlock mode is set as the current unlock mode; if so, a text is acquired from a spoken language library when unlocking request information is received. The spoken language library may be stored in a memory of the mobile terminal or in a cloud server, which is not limited here. The text may be a word, a sentence or a paragraph, which is likewise not limited here. After the text is acquired, it is displayed on the unlocking interface of the mobile terminal, and the user is prompted to record voice according to the displayed text.
In step S102, the voice recorded by the user according to the text is received.
In step S103, it is judged whether the voice reaches the pronunciation standard corresponding to the text; if so, step S104 is executed; if not, step S105 is executed.
In the embodiment of the present invention, after the voice recorded by the user according to the text is received through the microphone of the mobile terminal, the demonstration pronunciation file corresponding to the text is acquired from the spoken language library, and the voice recorded by the user is compared with the demonstration pronunciation file to judge whether the recorded voice reaches the pronunciation standard.
In step S104, the unlocking operation is executed.
If the voice reaches the pronunciation standard corresponding to the text, the unlocking operation is executed.
In step S105, the user is prompted to record the voice again.
If the voice does not reach the pronunciation standard corresponding to the text, the user is prompted to record the voice again, and the unlocking operation is executed only after the voice recorded by the user reaches the pronunciation standard corresponding to the text.
Preferably, prompting the user to record the voice again in step S105 specifically comprises: playing the demonstration pronunciation file corresponding to the text and then prompting the user to record the voice again.
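For illustration only, the following minimal Python sketch outlines the unlock flow of steps S101 to S105. All helper names (get_text_from_library, display_on_lock_screen, record_voice, reaches_standard, play_demo, unlock) are assumptions standing in for terminal facilities and are not part of the patent; the pronunciation check itself is detailed with reference to Fig. 2.

    # Hypothetical stand-ins for terminal facilities; a real implementation would use
    # the spoken language library, lock-screen UI, microphone and speech assessment engine.
    def get_text_from_library():       return "book"                  # S101: acquire a text
    def display_on_lock_screen(text):  print("Read aloud:", text)     # S101: display the text
    def record_voice():                return input("speak> ")        # S102: placeholder for the microphone
    def reaches_standard(voice, text): return voice == text           # S103: placeholder for the Fig. 2 check
    def play_demo(text):               print("Demonstration pronunciation of:", text)
    def unlock():                      print("Unlocked")

    def handle_unlock_request():
        text = get_text_from_library()          # S101
        display_on_lock_screen(text)
        while True:
            voice = record_voice()              # S102
            if reaches_standard(voice, text):   # S103
                unlock()                        # S104: execute the unlocking operation
                return
            play_demo(text)                     # S105: play the demonstration pronunciation
            print("Please record the voice again.")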
The embodiment of the present invention exploits the fact that a mobile terminal must be unlocked frequently during use, and makes full use of fragmented time to actively prompt the user to practise speaking, which greatly improves the learning effect and learning efficiency of mobile-terminal-assisted spoken language learning. Fragmented time, also known as time confetti, refers to the idle, scattered periods left over between routine work and study; such periods are not very long, for example the time spent waiting for a bus, queuing or waiting for someone.
Fig. 2 shows a specific implementation flow of step S103 of the method provided by an embodiment of the present invention, namely judging whether the voice reaches the pronunciation standard corresponding to the text. With reference to Fig. 2:
In step S201, the phonemes in the voice are extracted, and a first phoneme list is generated according to the chronological order of the phonemes in the voice;
In step S202, the phonemes in the text are extracted, and a second phoneme list is generated according to the order in which the phonemes occur in the text;
In step S203, the matching degree between the voice and the text is calculated according to the first phoneme list and the second phoneme list;
In step S204, if the matching degree is greater than a preset value, it is judged that the voice reaches the pronunciation standard corresponding to the text;
In step S205, if the matching degree is less than or equal to the preset value, it is judged that the voice does not reach the pronunciation standard corresponding to the text.
For example, if the phonemes extracted from the voice recorded by the user are b, u and g, the first phoneme list generated is b-u-g. If the text is the word book, the second phoneme list generated from the text is b-u-k. Comparing the two lists, two phonemes match and one phoneme does not, so the matching degree between the voice and the text calculated from the first and second phoneme lists is 66.67%. If the preset value is 75%, it is judged that the voice recorded by the user does not reach the pronunciation standard corresponding to the text.
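Purely as an illustration, the sketch below computes such a matching degree by comparing two phoneme lists position by position; the patent does not prescribe a particular alignment algorithm, so this positional comparison, the 75% preset value and the function name are assumptions.

    def matching_degree(voice_phonemes, text_phonemes):
        # Compare the two phoneme lists position by position (an assumed, simple alignment).
        matched = sum(1 for v, t in zip(voice_phonemes, text_phonemes) if v == t)
        total = max(len(voice_phonemes), len(text_phonemes))
        return matched / total if total else 0.0

    # Example from the description: "bug" recorded against the text "book".
    degree = matching_degree(["b", "u", "g"], ["b", "u", "k"])   # 2 of 3 phonemes match: 66.67%
    print(degree > 0.75)   # preset value assumed to be 75%; prints False, so the standard is not reached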
Fig. 3 shows the implementation flow of the method for assisting spoken language learning through a mobile terminal provided by another embodiment of the present invention. With reference to Fig. 3:
In step S301, when unlocking request information is received, a text is acquired, and the acquired text is displayed;
In step S302, the voice recorded by the user according to the text is received;
In step S303, it is judged whether the voice reaches the pronunciation standard corresponding to the text; if so, step S304 is executed; if not, step S305 is executed;
In step S304, the unlocking operation is executed;
In step S305, it is judged whether the number of recordings made by the user reaches a preset number; if so, step S304 is executed; if not, step S306 is executed;
In step S306, the user is prompted to record the voice again.
As an embodiment of the present invention, when the voice recorded by the user does not reach the pronunciation standard corresponding to the text, it is judged whether the number of recordings made by the user reaches a preset number; if so, the unlocking operation is executed; if not, the demonstration pronunciation file is played and the user is prompted to record the voice again. Here, the number of recordings made by the user refers to the number of times the user has recorded voice since the current unlocking request information was received. The preset number may be 3, which is not limited here.
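As a rough sketch under the same assumptions as the earlier one, the retry limit of steps S305 and S306 could be added as follows; the preset number of 3 is only the example value from the description, and the callables passed in are the hypothetical helpers named above.

    PRESET_ATTEMPTS = 3   # example value from the description; the preset number is not limited to 3

    def handle_unlock_request_with_limit(text, record_voice, reaches_standard, play_demo, unlock):
        attempts = 0
        while True:
            voice = record_voice()                 # S302: receive the recorded voice
            attempts += 1
            if reaches_standard(voice, text):      # S303: pronunciation standard reached
                unlock()                           # S304
                return
            if attempts >= PRESET_ATTEMPTS:        # S305: recording count reached the preset number
                unlock()                           # unlock anyway so the user is not locked out
                return
            play_demo(text)                        # S306: play the demonstration and prompt again
            print("Please record the voice again.")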
Fig. 4 shows the implementation flow of the method for assisting spoken language learning through a mobile terminal provided by yet another embodiment of the present invention. With reference to Fig. 4:
In step S401, spoken language level information input by the user is received, the pronunciation standard of the text group to which the text belongs is determined according to the spoken language level information, and the pronunciation standard corresponding to the text is determined according to the pronunciation standard of the text group;
In step S402, when unlocking request information is received, a text is acquired, and the acquired text is displayed;
In step S403, the voice recorded by the user according to the text is received;
In step S404, it is judged whether the voice reaches the pronunciation standard corresponding to the text; if so, step S405 is executed; if not, step S406 is executed;
In step S405, the unlocking operation is executed;
In step S406, it is judged whether the number of recordings made by the user reaches a preset number; if so, step S405 is executed; if not, step S407 is executed;
In step S407, the user is prompted to record the voice again.
In the embodiment of the present invention, the pronunciation standard of a text group is determined according to the spoken language level information input by the user. Different spoken language levels can be set for different text groups. For example, the spoken language levels include A and B, where level A is more difficult and level B is less difficult. A text group may contain multiple texts, for example multiple word texts, multiple sentence texts or multiple paragraph texts.
Preferably, after the unlocking operation is executed in step S405, the method further comprises:
judging whether all texts in the text group to which the text belongs have reached the pronunciation standard of the text group, and if so, prompting the user to choose another text group or to raise the pronunciation standard of the text group.
As an embodiment of the present invention, when all texts in the text group to which the text belongs have reached the pronunciation standard of the text group, the user is prompted, the next time unlocking request information is received, to choose another text group for spoken language learning or to raise the pronunciation standard of the text group to which the text belongs. For example, if the current spoken language level of the text group to which the text belongs is B and all texts in the group have reached the pronunciation standard of the group, the user is prompted to select level A so as to raise the pronunciation standard of the group. It should be noted that all texts in a text group reaching the pronunciation standard of the group means that the user has practised every text in the group and that, for each text, the voice recorded by the user has reached the pronunciation standard corresponding to that text.
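The following sketch, offered only as an illustration with assumed names and thresholds, shows one way the spoken language level could map to a pronunciation standard and how completion of a text group could trigger the prompt described above.

    # Assumed mapping from spoken language level to the matching-degree threshold;
    # the patent only states that level A is more difficult than level B.
    LEVEL_THRESHOLDS = {"A": 0.85, "B": 0.75}

    def pronunciation_standard(level):
        return LEVEL_THRESHOLDS[level]

    def check_group_progress(text_group, passed_texts, level):
        # The group is complete when every text in it has been practised and has
        # reached the pronunciation standard of the group.
        if all(text in passed_texts for text in text_group):
            print("All texts passed. Choose another text group, or raise the level"
                  " (currently level %s) to raise the pronunciation standard." % level)

    # Example: a group of word texts at level B, all of which have reached the standard.
    check_group_progress(["book", "look", "cook"], {"book", "look", "cook"}, "B")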
It should be understood that, in the embodiments of the present invention, the magnitude of the sequence numbers of the above processes does not imply an order of execution; the order of execution of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In the embodiments of the present invention, when unlocking request information is received, a text is displayed on the unlocking interface and the user is prompted to record voice according to the text; the voice recorded by the user according to the text is received, and the unlocking operation is executed when the recorded voice reaches the pronunciation standard corresponding to the text. The embodiments thereby exploit the fact that a mobile terminal must be unlocked frequently during use and actively prompt the user to practise speaking during fragmented time, which greatly improves the learning effect and learning efficiency of mobile-terminal-assisted spoken language learning.
Fig. 5 shows a structural block diagram of the device for assisting spoken language learning through a mobile terminal provided by an embodiment of the present invention. The device may be used to carry out the methods described with reference to Fig. 1 to Fig. 4. For convenience of explanation, only the parts relevant to the embodiment of the present invention are shown. With reference to Fig. 5:
The device for assisting spoken language learning through a mobile terminal comprises:
a text display unit 51, configured to acquire a text when unlocking request information is received, and display the acquired text;
a pronunciation judging unit 52, configured to receive the voice recorded by the user according to the text, and judge whether the voice reaches the pronunciation standard corresponding to the text;
an unlocking unit 53, configured to execute an unlocking operation if the voice reaches the pronunciation standard corresponding to the text.
Preferably, the pronunciation judging unit 52 comprises:
a first extraction subunit 521, configured to extract the phonemes in the voice and generate a first phoneme list according to the chronological order of the phonemes in the voice;
a second extraction subunit 522, configured to extract the phonemes in the text and generate a second phoneme list according to the order in which the phonemes occur in the text;
a calculation subunit 523, configured to calculate the matching degree between the voice and the text according to the first phoneme list and the second phoneme list;
a judging subunit 524, configured to judge that the voice reaches the pronunciation standard corresponding to the text if the matching degree is greater than a preset value, and to judge that the voice does not reach the pronunciation standard corresponding to the text if the matching degree is less than or equal to the preset value.
Optionally, the unlocking unit 53 is further configured to:
prompt the user to record the voice again if the voice does not reach the pronunciation standard corresponding to the text, and execute the unlocking operation only after the voice recorded by the user reaches the pronunciation standard corresponding to the text.
Preferably, the unlocking unit 53 is further configured to:
prompt the user to record the voice again if the voice does not reach the pronunciation standard corresponding to the text, and execute the unlocking operation after the voice recorded by the user reaches the pronunciation standard corresponding to the text or the number of recordings made by the user reaches a preset number.
Preferably, the device further comprises:
a pronunciation standard determining unit 54, configured to receive spoken language level information input by the user, determine the pronunciation standard of the text group to which the text belongs according to the spoken language level information, and determine the pronunciation standard corresponding to the text according to the pronunciation standard of the text group.
Preferably, the device further comprises:
a text group or pronunciation standard adjustment unit 55, configured to judge whether all texts in the text group to which the text belongs have reached the pronunciation standard of the text group, and if so, prompt the user to choose another text group or to raise the pronunciation standard of the text group.
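For illustration only, the unit structure of Fig. 5 could be organised as in the following sketch; the class and method names are assumptions, the phoneme extraction is deliberately simplistic, and each unit simply wraps the corresponding step of the method embodiments.

    def extract_phonemes(s):
        # Hypothetical, highly simplified extraction; a real terminal would use a speech engine.
        return list(s)

    def matching_degree(first, second):
        # Same positional comparison as in the earlier sketch.
        matched = sum(1 for a, b in zip(first, second) if a == b)
        total = max(len(first), len(second))
        return matched / total if total else 0.0

    class PronunciationJudgingUnit:                      # unit 52, with subunits 521-524
        def __init__(self, preset_value=0.75):           # preset value assumed
            self.preset_value = preset_value
        def judge(self, voice, text):
            first = extract_phonemes(voice)              # first extraction subunit 521
            second = extract_phonemes(text)              # second extraction subunit 522
            degree = matching_degree(first, second)      # calculation subunit 523
            return degree > self.preset_value            # judging subunit 524

    class SpokenPracticeUnlockDevice:
        def __init__(self, text_source, screen, recorder, lock):
            self.judging_unit = PronunciationJudgingUnit()
            self.text_source, self.screen = text_source, screen
            self.recorder, self.lock = recorder, lock
        def on_unlock_request(self):
            text = self.text_source()                    # text display unit 51: acquire and
            self.screen(text)                            # display the text
            voice = self.recorder()
            if self.judging_unit.judge(voice, text):     # pronunciation judging unit 52
                self.lock()                              # unlocking unit 53: execute unlocking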
In the embodiments of the present invention, when unlocking request information is received, a text is displayed on the unlocking interface and the user is prompted to record voice according to the text; the voice recorded by the user according to the text is received, and the unlocking operation is executed when the recorded voice reaches the pronunciation standard corresponding to the text. The embodiments thereby exploit the fact that a mobile terminal must be unlocked frequently during use and actively prompt the user to practise speaking during fragmented time, which greatly improves the learning effect and learning efficiency of mobile-terminal-assisted spoken language learning.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each particular application, but such implementations should not be regarded as going beyond the scope of the present invention.
Those skilled in the art will clearly understand that, for convenience and brevity of description, reference may be made to the corresponding processes in the foregoing method embodiments for the specific working processes of the device and units described above, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are only schematic. The division of the units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection between units may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the object of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the essence of the technical solution of the present invention, or the part that contributes to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk.
The above are only specific embodiments of the present invention, but the scope of protection of the present invention is not limited thereto. Any change or replacement that those skilled in the art can easily conceive within the technical scope disclosed by the present invention shall be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.

Claims (10)

1. A method for assisting spoken language learning through a mobile terminal, characterized in that it comprises:
when unlocking request information is received, acquiring a text, and displaying the acquired text;
receiving voice recorded by a user according to the text, and judging whether the voice reaches the pronunciation standard corresponding to the text; and
if the voice reaches the pronunciation standard corresponding to the text, executing an unlocking operation.
2. The method as claimed in claim 1, characterized in that judging whether the voice reaches the pronunciation standard corresponding to the text comprises:
extracting the phonemes in the voice, and generating a first phoneme list according to the chronological order of the phonemes in the voice;
extracting the phonemes in the text, and generating a second phoneme list according to the order in which the phonemes occur in the text;
calculating the matching degree between the voice and the text according to the first phoneme list and the second phoneme list; and
if the matching degree is greater than a preset value, judging that the voice reaches the pronunciation standard corresponding to the text; if the matching degree is less than or equal to the preset value, judging that the voice does not reach the pronunciation standard corresponding to the text.
3. The method as claimed in claim 1, characterized in that, after judging whether the voice reaches the pronunciation standard corresponding to the text, the method further comprises:
if the voice does not reach the pronunciation standard corresponding to the text, prompting the user to record the voice again, and executing the unlocking operation after the voice recorded by the user reaches the pronunciation standard corresponding to the text.
4. The method as claimed in claim 1, characterized in that, after judging whether the voice reaches the pronunciation standard corresponding to the text, the method further comprises:
if the voice does not reach the pronunciation standard corresponding to the text, prompting the user to record the voice again, and executing the unlocking operation after the voice recorded by the user reaches the pronunciation standard corresponding to the text or the number of recordings made by the user reaches a preset number.
5. The method as claimed in any one of claims 1 to 4, characterized in that, before acquiring a text when unlocking request information is received, the method further comprises:
receiving spoken language level information input by the user, determining the pronunciation standard of the text group to which the text belongs according to the spoken language level information, and determining the pronunciation standard corresponding to the text according to the pronunciation standard of the text group.
6. A device for assisting spoken language learning through a mobile terminal, characterized in that it comprises:
a text display unit, configured to acquire a text when unlocking request information is received, and display the acquired text;
a pronunciation judging unit, configured to receive the voice recorded by a user according to the text, and judge whether the voice reaches the pronunciation standard corresponding to the text; and
an unlocking unit, configured to execute an unlocking operation if the voice reaches the pronunciation standard corresponding to the text.
7. The device as claimed in claim 6, characterized in that the pronunciation judging unit comprises:
a first extraction subunit, configured to extract the phonemes in the voice and generate a first phoneme list according to the chronological order of the phonemes in the voice;
a second extraction subunit, configured to extract the phonemes in the text and generate a second phoneme list according to the order in which the phonemes occur in the text;
a calculation subunit, configured to calculate the matching degree between the voice and the text according to the first phoneme list and the second phoneme list; and
a judging subunit, configured to judge that the voice reaches the pronunciation standard corresponding to the text if the matching degree is greater than a preset value, and to judge that the voice does not reach the pronunciation standard corresponding to the text if the matching degree is less than or equal to the preset value.
8. The device as claimed in claim 6, characterized in that the unlocking unit is further configured to:
prompt the user to record the voice again if the voice does not reach the pronunciation standard corresponding to the text, and execute the unlocking operation after the voice recorded by the user reaches the pronunciation standard corresponding to the text.
9. The device as claimed in claim 6, characterized in that the unlocking unit is further configured to:
prompt the user to record the voice again if the voice does not reach the pronunciation standard corresponding to the text, and execute the unlocking operation after the voice recorded by the user reaches the pronunciation standard corresponding to the text or the number of recordings made by the user reaches a preset number.
10. The device as claimed in any one of claims 6 to 9, characterized in that it further comprises:
a pronunciation standard determining unit, configured to receive spoken language level information input by the user, determine the pronunciation standard of the text group to which the text belongs according to the spoken language level information, and determine the pronunciation standard corresponding to the text according to the pronunciation standard of the text group.
CN201510009052.XA 2015-01-06 2015-01-06 Method and device for assisting spoken language learning through a mobile terminal Active CN104639742B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510009052.XA CN104639742B (en) 2015-01-06 2015-01-06 Method and device for assisting spoken language learning through a mobile terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510009052.XA CN104639742B (en) 2015-01-06 2015-01-06 Method and device for assisting spoken language learning through a mobile terminal

Publications (2)

Publication Number Publication Date
CN104639742A (en) 2015-05-20
CN104639742B (en) 2018-04-10

Family

ID=53218029

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510009052.XA Active CN104639742B (en) 2015-01-06 2015-01-06 Method and device for assisting spoken language learning through a mobile terminal

Country Status (1)

Country Link
CN (1) CN104639742B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130130211A1 (en) * 2011-11-21 2013-05-23 Age Of Learning, Inc. Computer-based language immersion teaching for young learners
CN103730032A (en) * 2012-10-12 2014-04-16 李志刚 Method and system for controlling multimedia data
CN103885688A (en) * 2014-03-27 2014-06-25 深圳市国华光电科技有限公司 Screen unlocking method and device
CN104598122A (en) * 2014-11-29 2015-05-06 深圳市金立通信设备有限公司 Terminal
CN104598791A (en) * 2014-11-29 2015-05-06 深圳市金立通信设备有限公司 Voice unlocking method

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654785A (en) * 2016-03-18 2016-06-08 上海语知义信息技术有限公司 Personalized spoken foreign language learning system and method
CN106648539A (en) * 2017-01-03 2017-05-10 广东小天才科技有限公司 Unlocking method and electronic terminal
CN106648539B (en) * 2017-01-03 2020-06-05 广东小天才科技有限公司 Unlocking method and electronic terminal
CN107230173A (en) * 2017-06-07 2017-10-03 南京大学 A kind of spoken language exercise system and method based on mobile terminal
CN109032707A (en) * 2018-07-19 2018-12-18 深圳乐几科技有限公司 Terminal and its verbal learning method and apparatus
CN109637286A (en) * 2019-01-16 2019-04-16 广东小天才科技有限公司 A kind of Oral Training method and private tutor's equipment based on image recognition
CN112116832A (en) * 2019-06-19 2020-12-22 广东小天才科技有限公司 Spoken language practice method and device

Also Published As

Publication number Publication date
CN104639742B (en) 2018-04-10

Similar Documents

Publication Publication Date Title
US10269346B2 (en) Multiple speech locale-specific hotword classifiers for selection of a speech locale
CN104639742A (en) Method and device for assisting in learning speaking through mobile terminal
EP3121809B1 (en) Individualized hotword detection models
JP6771805B2 (en) Speech recognition methods, electronic devices, and computer storage media
CN105931644B (en) A kind of audio recognition method and mobile terminal
CN105976812B (en) A kind of audio recognition method and its equipment
KR101735212B1 (en) Method and device for voiceprint identification
EP2700071B1 (en) Speech recognition using multiple language models
US9390711B2 (en) Information recognition method and apparatus
CN104992704B (en) Phoneme synthesizing method and device
CN105096940A (en) Method and device for voice recognition
JP6556575B2 (en) Audio processing apparatus, audio processing method, and audio processing program
WO2013003772A3 (en) Speech recognition using variable-length context
CN104036774A (en) Method and system for recognizing Tibetan dialects
US10140976B2 (en) Discriminative training of automatic speech recognition models with natural language processing dictionary for spoken language processing
TW200900967A (en) Multi-mode input method editor
JP2019061662A (en) Method and apparatus for extracting information
CN105336324A (en) Language identification method and device
US20170084267A1 (en) Electronic device and voice recognition method thereof
CN103853703A (en) Information processing method and electronic equipment
JP2015206906A (en) Speech retrieval method, speech retrieval device, and program for speech retrieval device
CN104462912B (en) Improved biometric password security
CN109256125B (en) Off-line voice recognition method and device and storage medium
WO2020206975A1 (en) Method for calculating number of syllables in unit time and related apparatus
JP2015045689A (en) Method for evaluating voice recognition result about voice recognition system, computer and computer program for the same

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant