CN109032707A - Terminal and its verbal learning method and apparatus - Google Patents
- Publication number
- CN109032707A (application CN201810796365.8A)
- Authority
- CN
- China
- Prior art keywords
- user
- pronunciation
- learned
- key
- spoken
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/448—Execution paradigms, e.g. implementations of programming paradigms
- G06F9/4482—Procedural
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
- G06Q50/10—Services
- G06Q50/20—Education
- G06Q50/205—Education administration or guidance
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/04—Electrically-operated educational appliances with audible presentation of the material to be studied
Abstract
An embodiment of the present application provides a terminal and a spoken-language learning method and apparatus for the terminal, comprising: when a user operates a first key in preparation for entering spoken-language learning, obtaining first key information generated by the first key; playing, according to the first key information, spoken content to be learned that is pushed by the terminal; and generating instruction information according to the first key information so as to guide the user to follow the broadcast and learn the pushed spoken content. After the user operates the first key, the terminal pushes spoken content to the user and issues instruction information to guide the user's study. The operation is simple, which is especially helpful for users such as children, whose knowledge and operational ability are limited: guided by the terminal, they can easily enter spoken-language learning.
Description
Technical field
The present invention relates to the technical field of spoken-language learning, and more particularly to a terminal and a spoken-language learning method and apparatus therefor.
Background technique
At present, people can learn a spoken language, such as English, with an APP. To learn, a user must first turn on a terminal, then open the corresponding APP, and then select the words to be learned, which is relatively complicated to operate. For users such as children, whose knowledge and operational ability are limited, it is difficult to operate an APP on a terminal and therefore difficult to enter spoken-language learning through an APP.
Summary of the invention
In view of this, it is necessary to provide a spoken-language learning method for a terminal.
A spoken-language learning method for a terminal, the terminal being provided with a first key, the method comprising:
obtaining first key information generated when the first key is operated; and
playing, according to the first key information, spoken content to be learned that is pushed by the terminal, and generating instruction information according to the first key information so as to guide a user to follow the broadcast and learn the pushed spoken content.
In the above spoken-language learning method, after the user operates the first key, the terminal pushes spoken content to the user and issues instruction information to guide the user's study. The operation is simple, which is especially helpful for users such as children, whose knowledge and operational ability are limited: guided by the terminal, they can easily enter spoken-language learning.
In one embodiment, the first key is a fingerprint-recognition touch key, and the method further comprises:
obtaining fingerprint feature information of the user when the user touches the first key;
associating the fingerprint feature information of the user with the spoken content the user has already learned, and saving the fingerprint feature information of the user; and
if it is detected that the user exits spoken-language learning, fingerprint feature information is later obtained when the user again touches the first key in preparation for entering spoken-language learning, and the newly obtained fingerprint feature information is detected to be consistent with the saved fingerprint feature information, playing spoken content pushed by the terminal other than the spoken content the user has already learned.
In one embodiment, the method further comprises:
obtaining a user pronunciation uttered by the user following the broadcast of the spoken content to be learned, and extracting voiceprint feature information of the user from the user pronunciation to characterize the identity of the user;
associating the voiceprint feature information of the user with the spoken content the user has already learned, and saving the voiceprint feature information of the user; and
if it is detected that the user exits spoken-language learning, a user pronunciation uttered following a later broadcast is obtained, and voiceprint feature information newly extracted from it matches the saved voiceprint feature information of the user, playing spoken content other than the associated spoken content.
In one embodiment, the method further comprises:
obtaining a user pronunciation uttered by the user following the broadcast of the spoken content to be learned;
providing an original standard pronunciation of the spoken content to be learned, matching the original standard pronunciation against the user pronunciation to obtain a goodness of fit between the user pronunciation and the original standard pronunciation, and scoring the user pronunciation according to the goodness of fit; and
feeding the scoring result back to the user, and determining, according to the scoring result, whether to play the next spoken content to be learned.
In one embodiment, the terminal is provided with a second key, and the step of obtaining the user pronunciation uttered by the user following the broadcast of the spoken content to be learned is: obtaining the user pronunciation while obtaining second key information generated when the user operates the second key.
In one embodiment, the step of determining, according to the scoring result, whether to play the next spoken content to be learned comprises:
if the score corresponding to the user pronunciation is lower than a preset score, playing the spoken content to be learned again to guide the user to read it aloud again, until the number of times the spoken content has been played reaches a preset number or the score corresponding to the user pronunciation of the spoken content reaches the preset score, and only then playing the next spoken content to be learned.
In one embodiment, the step of providing the original standard pronunciation of the spoken content to be learned, matching the original standard pronunciation against the user pronunciation to obtain the goodness of fit between them, and scoring the user pronunciation according to the goodness of fit is executed by a cloud server.
In one embodiment, the step of playing the spoken content to be learned again if the score corresponding to the user pronunciation is lower than the preset score comprises:
if the score corresponding to the user pronunciation is lower than the preset score, extracting the incorrect pronunciation from the user pronunciation, providing a correct mouth-shape explanation for pronouncing the spoken content to be learned according to the incorrect pronunciation, and playing it to the user, so as to instruct the user to pronounce the spoken content again according to the correct mouth shape.
A spoken-language learning apparatus for a terminal is also proposed, the apparatus comprising:
a first key information obtaining module, configured to obtain first key information generated when a first key is operated; and
a guiding module, configured to play spoken content pushed by the terminal according to the first key information, and to generate instruction information according to the first key information so as to guide a user to follow the broadcast and learn the spoken content to be learned.
A terminal is also proposed. The terminal is an intelligent robot comprising a memory and a processor, a computer program being stored in the memory; when the computer program is executed by the processor, the processor performs the steps of the spoken-language learning method of the terminal described in any of the above embodiments.
Brief description of the drawings
Fig. 1 is the flow diagram of the verbal learning method of the terminal in one embodiment;
Fig. 2 is the terminal structure schematic diagram in one embodiment;
Fig. 3 is the flow diagram of the verbal learning method of the terminal in another embodiment;
Fig. 4 is the structural schematic diagram of the verbal learning device of the terminal in one embodiment.
Detailed description of the embodiments
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
Fig. 1 is a schematic flowchart of the spoken-language learning method of the terminal in one embodiment. Referring to Fig. 1, the spoken-language learning method of this embodiment can be used for learning spoken English or other languages and can be applied to a terminal. The terminal may be an intelligent robot with a voice-interaction function, or a handheld terminal such as a smart phone or a tablet computer, and is provided with a first key. The spoken-language learning method comprises:
Step 102: obtain first key information generated when the first key is operated.
The user can operate the first key in preparation for entering spoken-language learning.
The first key may be a touch key, specifically a capacitive-sensing touch key, or it may be a physical key. The first key information may be level information generated when the first key is operated by the user. When the first key is a physical key, operating it means pressing; when it is a touch key, operating it means touching.
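The step above can be sketched as follows — a minimal, assumed model (not from the patent) in which the first key is sampled as a level signal and a rising edge is treated as the key being operated:

```python
# Minimal sketch: whether the first key is physical or capacitive, its
# operation reduces here to a level change; a rising edge (0 -> 1) produces
# one key-information event that can start the learning flow.
class KeyMonitor:
    def __init__(self, key_id):
        self.key_id = key_id
        self._last_level = 0
        self.events = []

    def sample(self, level):
        """Feed one sampled level; record an event on a rising edge."""
        if self._last_level == 0 and level == 1:
            self.events.append({"key": self.key_id, "level": level})
        self._last_level = level

monitor = KeyMonitor("first_key")
for level in [0, 0, 1, 1, 0, 1]:   # two presses
    monitor.sample(level)
print(len(monitor.events))  # 2
```

The event dictionary stands in for the "first key information" of step 102; a real terminal would read the level from a GPIO pin or touch controller.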
Step 104: play, according to the first key information, the spoken content to be learned pushed by the terminal, and generate instruction information according to the first key information so as to guide the user to follow the broadcast and learn the spoken content.
The spoken content pushed by the terminal may be English words or words of another language, and may be stored in the memory of the terminal. It may be saved by category; taking English words as an example, the words to be learned may be classified into color words, vegetable words, animal words, furniture words, and so on. The spoken content to be learned may also be English sentences or sentences of another language.
The instruction information may be voice information, and may specifically be generated according to the first key information and the category of the spoken content to be played. For example, if the English word to be learned is "Red" and the user presses the first key, the terminal generates, according to the first key information and the category of "Red", the voice instruction "Hello, you are now entering English study; let us start learning the pronunciation of color words".
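As a rough illustration of how the instruction information might be derived from the first key information and the word category, the following sketch uses hypothetical lookup tables (`CATEGORY_PROMPTS` and `WORD_CATEGORY` are illustrative names, not from the patent):

```python
# Assumed mapping from word category to the voice instruction of step 104.
CATEGORY_PROMPTS = {
    "color": ("Hello, you are now entering English study; "
              "let us start learning the pronunciation of color words."),
    "vegetable": ("Hello, you are now entering English study; "
                  "let us start learning the pronunciation of vegetable words."),
}

WORD_CATEGORY = {"Red": "color", "Tomato": "vegetable"}

def make_instruction(key_event, word):
    # Only a first-key event triggers the instruction, per step 104.
    if key_event.get("key") != "first_key":
        return None
    return CATEGORY_PROMPTS[WORD_CATEGORY[word]]

print(make_instruction({"key": "first_key"}, "Red"))
```

A real terminal would pass the returned text to a text-to-speech engine rather than print it.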
The spoken content pushed by the terminal may be content the user has not yet learned, so that the user avoids repeated study. When it is detected via the first key that the user enters spoken-language learning, user identity information can be obtained and associated with the spoken content that user has learned; the content then pushed by the terminal is content other than the associated content. One terminal may be used by multiple users to learn; associating each user's identity with the content that user has learned lets the users of the same terminal study independently of one another.
The obtained user identity information may be fingerprint feature information. In one embodiment, the first key is a fingerprint-recognition touch key, and the spoken-language learning method further comprises: obtaining the fingerprint feature information of the user when the user touches the first key, associating the user's fingerprint information with the spoken content the user has learned before it is detected that the user exits spoken-language learning, and saving the fingerprint feature information. If, after the user exits, fingerprint feature information obtained when the user touches the first key again in preparation for spoken-language learning is detected to be consistent with the saved fingerprint feature information, spoken content pushed by the terminal other than what the user has already learned is played.
In this embodiment, the first key also has a fingerprint-recognition function. When the user touches the first key with a finger, the terminal obtains the user's fingerprint information through the first key and saves the fingerprint information characterizing the user's identity, then associates it with the content the user learned before exiting spoken-language learning. If the user touches the first key again after exiting, to re-enter spoken-language learning, the content that user has not learned is played. For example, if user A exits and re-enters one hour later, the identity of user A is recognized as soon as the first key is touched, and the words played will not be the words user A learned an hour earlier, so user A will not learn words repeatedly.
Specifically, if no pronunciation of the played content by the user is detected within a preset time after the content is played, it is determined that the user has exited spoken-language learning, and the terminal may subsequently be shut down.
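The association-and-filter behaviour of this embodiment can be sketched as follows; the dictionary storage layout is an assumption for illustration:

```python
# Learned words keyed by a fingerprint identifier; re-entry with a matching
# fingerprint filters the push list down to words not yet learned.
learned_by_fingerprint = {}

def record_learned(fingerprint_id, word):
    """Associate one learned word with the user's fingerprint identity."""
    learned_by_fingerprint.setdefault(fingerprint_id, set()).add(word)

def next_push_list(fingerprint_id, all_words):
    """Words to push on re-entry: everything the user has not learned yet."""
    learned = learned_by_fingerprint.get(fingerprint_id, set())
    return [w for w in all_words if w not in learned]

record_learned("fp_user_a", "Red")
print(next_push_list("fp_user_a", ["Red", "Blue", "Green"]))  # ['Blue', 'Green']
```

A second user with a different fingerprint identifier keeps an independent learned set, which is how users of the same terminal stay independent of one another.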
The obtained user identity information may also be voiceprint feature information. In one embodiment, the spoken-language learning method further comprises:
obtaining a user pronunciation uttered by the user following the broadcast of the spoken content to be learned, and extracting the user's voiceprint feature information from the user pronunciation to characterize the user's identity;
associating the user's voiceprint feature information with the spoken content that has been played and that the user has learned, and saving the voiceprint feature information; and
if, after the user exits spoken-language learning, a user pronunciation uttered following a later broadcast is obtained and the voiceprint feature information newly extracted from it matches the saved voiceprint feature information, playing content other than the associated content.
This embodiment recognizes the user's voiceprint feature information when the user follows the broadcast and pronounces. For example, if user A exits and re-enters one hour later, then after the terminal plays one word and user A pronounces it, the identity of user A can be determined from the voiceprint feature information, and the words played subsequently will not be the words the user learned an hour earlier, so user A will not learn words repeatedly.
In other embodiments, referring to Fig. 2, the spoken-language learning method further comprises:
Step 202: obtain a user pronunciation uttered by the user following the broadcast of the spoken content to be learned.
After the spoken content to be learned has been played, the user follows it to practise pronunciation, and the user's pronunciation while following the broadcast is obtained.
Specifically, the user's pronunciation of the spoken content may be obtained by recording; for example, the user's pronunciation may be recorded while a key of the terminal is long-pressed.
In one embodiment, the terminal is provided with a second key, and the step of obtaining the user pronunciation uttered following the broadcast is: obtaining the user pronunciation while obtaining second key information generated when the user operates the second key.
The second key may be a touch key, specifically a capacitive-sensing touch key, in which case operating it means touching; or it may be a physical key, in which case operating it means pressing. The second key information may be level information generated by the second key when it is operated. The first key and the second key may be the same key or two mutually independent keys. As shown in Fig. 3, an intelligent robot 300 has a first key 310 and a second key 320.
In this embodiment, the user pronunciation of the spoken content is obtained in real time while the user long-presses the second key. During acquisition, sounds unrelated to the spoken content might otherwise be recorded; recording only while the user long-presses the key allows the user pronunciation to be captured accurately and reduces interference from unrelated sounds.
Step 204: provide an original standard pronunciation of the spoken content to be learned, match the original standard pronunciation against the user pronunciation to obtain a goodness of fit between them, and score the user pronunciation according to the goodness of fit.
Specifically, the higher the goodness of fit, the higher the score; different goodness-of-fit values produce different scores. The user's pronunciation may be graded in bands according to the goodness of fit: for example, a perfect fit is rated excellent, a goodness of fit of 80%–90% is rated good, 70%–80% is rated medium, 60%–70% is rated pass, and 0%–60% is rated fail.
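The banded grading can be sketched directly from the example bands above; the treatment of the band edges (half-open intervals, with non-perfect fits above 90% grouped with "good") is an assumption where the text is silent:

```python
def grade(goodness_of_fit):
    """Map a goodness-of-fit fraction in [0, 1] to a grade band."""
    if goodness_of_fit >= 1.0:
        return "excellent"   # perfect fit
    if goodness_of_fit >= 0.8:
        return "good"
    if goodness_of_fit >= 0.7:
        return "medium"
    if goodness_of_fit >= 0.6:
        return "pass"
    return "fail"

print(grade(0.85), grade(0.65), grade(0.5))  # good pass fail
```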
Taking English words as an example, a user may practise the same word repeatedly; if every pronunciation were scored, the processing load could be large.
In one embodiment, if user pronunciations of the same English word uttered consecutively two or more times are obtained, the method includes a step of comparing the user pronunciations with one another. If the pronunciations differ from one another by less than a preset value, the step of matching the original standard pronunciation against the user pronunciation is to match the first user pronunciation of the word against the original standard pronunciation of the word; if the user pronunciations of the word differ, the step is to extract, among the user pronunciations of the word, the user pronunciation that occurs most often, and to match it against the original standard pronunciation of the word.
Since the user's pronunciations of the same English word may fit the original standard pronunciation to different degrees, the scores corresponding to the pronunciations may differ, and scoring every pronunciation of the same word would increase the terminal's processing load. In this embodiment, when the user pronounces the same English word several times consecutively, one user pronunciation is selected for scoring, which avoids increasing the processing load.
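The selection rule above can be sketched as follows; representing each pronunciation attempt as a simplified phoneme string is an assumption made for the example:

```python
from collections import Counter

def select_pronunciation(pronunciations):
    """Pick one attempt to score: the first if all attempts are effectively
    identical, otherwise the most frequent variant."""
    counts = Counter(pronunciations)
    if len(counts) == 1:
        return pronunciations[0]
    return counts.most_common(1)[0][0]

# Three attempts at "thanks": two with the correct initial sound, one without.
print(select_pronunciation(["thanks", "sanks", "thanks"]))  # thanks
```

Only the selected attempt is then matched against the original standard pronunciation, so the terminal scores once per word rather than once per attempt.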
When the spoken content to be learned is a sentence comprising two or more words, for example an English sentence comprising multiple English words, step 204 is to extract the user pronunciation of each word, match the user pronunciation of each word in the sentence against the corresponding original standard pronunciation, score each word in turn according to the goodness of fit between its user pronunciation and the corresponding standard pronunciation, and then feed the scores back to the user in the order in which the words were pronounced.
For example, if the user practises "State Intellectual Property Office", which contains four English words, the user pronunciations of State, Intellectual, Property, and Office are each matched against the corresponding original standard pronunciation and scored according to the corresponding goodness of fit, and the scoring results for State, Intellectual, Property, and Office are fed back to the user in that order.
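The per-word scoring of a sentence can be sketched as follows; `similarity()` here is a toy character-level stand-in for the real goodness-of-fit matcher, which the patent does not specify:

```python
def similarity(user, standard):
    """Toy proxy for goodness of fit: fraction of characters matching
    position-by-position (a real system would compare acoustic features)."""
    matches = sum(1 for u, s in zip(user, standard) if u == s)
    return matches / max(len(user), len(standard))

def score_sentence(user_words, standard_words):
    """Score each word in order and return (word, score) pairs."""
    return [
        (std, round(similarity(usr, std) * 100))
        for usr, std in zip(user_words, standard_words)
    ]

standard = ["State", "Intellectual", "Property", "Office"]
spoken = ["State", "Intelectual", "Property", "Ofice"]
print(score_sentence(spoken, standard))
```

The list preserves word order, matching the requirement that scores be fed back in the sequence the words were pronounced.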
In one embodiment, the step of providing the original standard pronunciation of the spoken content to be learned, matching it against the user pronunciation to obtain the goodness of fit, and scoring the user pronunciation according to the goodness of fit is executed in the cloud: the cloud provides the original standard pronunciation, matches it against the user pronunciation to obtain the goodness of fit, and scores the user pronunciation accordingly.
Step 206: feed the scoring result back to the user, and determine, according to the scoring result, whether to play the next spoken content to be learned.
Specifically, the scoring result may be fed back to the user by voice broadcast. For example, if the user's score for "thanks" is calculated to be 89 points, the voice "your pronunciation of thanks scores 89 points" is played to the user.
In one embodiment, the step of determining, according to the scoring result, whether to play the next spoken content to be learned comprises:
if the score corresponding to the user pronunciation is lower than a preset score, playing the spoken content again to guide the user to read it aloud again, and only playing the next spoken content when the number of user pronunciations of the content reaches a preset number.
Specifically, when the spoken content to be learned is a word, if the score corresponding to the user pronunciation is lower than the preset score, the word is played again to guide the user to follow the broadcast and read it aloud again, until the score corresponding to the user pronunciation of the word reaches the preset score, and only then is the next word played. If the score corresponding to the user pronunciation never reaches the preset score but the number of times the content has been played reaches the preset number, the next spoken content can also be played.
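The retry rule above can be sketched as a simple loop; the successive attempt scores are fed in as a list to stand in for live scoring:

```python
def practice_word(scores, preset_score=80, preset_times=4):
    """Replay a word until the score reaches preset_score or the play count
    reaches preset_times. Returns (plays, passed)."""
    plays = 0
    for score in scores:
        plays += 1
        if score >= preset_score:
            return plays, True
        if plays >= preset_times:
            return plays, False   # move on anyway, per the embodiment
    return plays, False

print(practice_word([60, 70, 85]))          # (3, True)  - passed on third try
print(practice_word([60, 60, 60, 60, 90]))  # (4, False) - capped at 4 plays
```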
For a user such as a child, repeatedly failing a practice may dampen the motivation to learn. Setting the terminal to play the next word once the same word has been played a preset number of times avoids discouraging the user.
Specifically, if the number of times the spoken content has been played reaches the preset number and the score corresponding to the last user pronunciation is still lower than the preset score, a score higher than the actual score is fed back to the user, and the next spoken content is played. The feedback may specifically be by voice.
For example, suppose the preset number is 4 and the preset score is 80 points, the word "banana" has been played 4 times, and on the fourth play the child's pronunciation of "banana" still scores only 60 points, below 80. In order not to dampen the child's enthusiasm and confidence in learning English, the score fed back to the child may be 82 points, higher than the true score, pretending that the child has passed the study of the word, and the next word is played. This encourages the child to learn spoken language actively and confidently.
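The encouraging-feedback rule can be sketched as follows; reporting `preset_score + 2` (82 for a preset of 80, as in the example) is an illustrative choice, since the text only requires the reported score to exceed the true one:

```python
def feedback_score(real_score, plays, preset_score=80, preset_times=4):
    """Score to report to the child: the real score if it passes, an inflated
    passing score once the play count is exhausted, else the real score."""
    if real_score >= preset_score:
        return real_score
    if plays >= preset_times:
        return preset_score + 2   # pretend the word was passed, to encourage
    return real_score

print(feedback_score(60, plays=4))  # 82 - inflated after the 4th play
print(feedback_score(60, plays=2))  # 60 - honest while retries remain
```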
Specifically, when the spoken content to be learned is a sentence comprising two or more words, the scores of the user pronunciations of the individual words may differ: some may be higher than the preset score and some lower. The words whose user pronunciations score below the preset score are then extracted and replayed, and the user is instructed to practise them again, until the user pronunciation of each such word reaches the preset score or the number of times the word has been played reaches the preset number, after which the next sentence to be learned is played. For example, if the preset score is 90 points and the user's pronunciation scores are 98 for State, 66 for Intellectual, 76 for Property, and 100 for Office, the two words Intellectual and Property are extracted and replayed until the user's scores for them exceed the preset score or the number of plays of the two words reaches the preset number, and only then is the next spoken content played.
Specifically, the step of playing the spoken content to be learned again if the score corresponding to the user pronunciation is lower than the preset score comprises:
if the score corresponding to the user pronunciation is lower than the preset score, extracting the incorrect pronunciation from the user pronunciation, identifying from it the defect in the user's mouth shape, providing a correct mouth-shape explanation for pronouncing the spoken content, and playing it to the user, so as to instruct the user to pronounce the content again with the correct mouth shape.
For example, for "thanks", the standard pronunciation is /θæŋks/, but the user may pronounce it as /sæŋks/. It can be identified from the incorrect pronunciation that the user pronounces θ as s because the tongue is not placed between the teeth; the correct mouth-shape information for θ can then be voice-broadcast: "stretch the tip of the tongue out slightly, bite it gently with the upper and lower teeth, release the airflow, and do not vibrate the vocal cords".
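The substitution detection can be sketched as follows; representing pronunciations as recognized phoneme sequences and the particular tip table are assumptions for illustration:

```python
# Hypothetical mouth-shape tips keyed by (uttered phoneme, expected phoneme).
MOUTH_SHAPE_TIPS = {
    ("s", "θ"): ("Stretch the tip of the tongue out slightly, bite it gently "
                 "with the upper and lower teeth, release the airflow, and do "
                 "not vibrate the vocal cords."),
}

def correction_tip(user_phonemes, standard_phonemes):
    """Find the first substituted phoneme and return its mouth-shape tip."""
    for u, s in zip(user_phonemes, standard_phonemes):
        if u != s and (u, s) in MOUTH_SHAPE_TIPS:
            return MOUTH_SHAPE_TIPS[(u, s)]
    return None

tip = correction_tip(["s", "ae", "ng", "k", "s"], ["θ", "ae", "ng", "k", "s"])
print(tip is not None)  # True
```

The returned tip text would then be voice-broadcast to the user before the word is replayed.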
In the above spoken-language learning method, after the user operates the first key, the terminal pushes spoken content to the user and issues instruction information to guide the user's study. The operation is simple, which is especially helpful for users such as children, whose knowledge and operational ability are limited: guided by the terminal, they can easily enter spoken-language learning.
The above method also scores the user pronunciation against the standard pronunciation and feeds the scoring result back to the user, so the user knows whether the pronunciation is standard and can be guided toward correct pronunciation. If the score corresponding to the user pronunciation is below the preset score but the content has been played the preset number of times, the next content can still be played, so the user's confidence is not dented; moreover, a score higher than the real one can be fed back, pretending the user has passed the word, which encourages a child to learn spoken language actively and confidently.
It should be understood that although the steps in the flowchart of Fig. 1 are shown in the order indicated by the arrows, they are not necessarily executed in that order. Unless expressly stated otherwise herein, there is no strict restriction on the execution order of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Fig. 1 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different times; their execution order is also not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
An embodiment of the present application also provides a spoken-language learning device of a terminal. Fig. 4 is a structural schematic diagram of the spoken-language learning device of the terminal in one embodiment. The device in Fig. 4 includes:
a first-key-information obtaining module 410, configured to obtain first key information generated when the first key is operated;
a guiding module 420, configured to play the spoken content pushed by the terminal according to the first key information, and to generate instruction information according to the first key information so as to guide the user to follow the broadcast and learn the spoken content to be learned.
The division into modules in the above spoken-language learning device of the terminal is only illustrative; in other embodiments, the spoken-language learning device of the terminal may be divided into different modules as required, so as to accomplish all or part of the functions of the device.
For the specific limitations of the spoken-language learning device of the terminal in this embodiment, reference may be made to the limitations of the spoken-language learning method above, which are not repeated here. Each module in the above spoken-language learning device of the terminal may be implemented wholly or partly by software, by hardware, or by a combination thereof. Each of the above modules may be embedded in, or independent of, a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
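As a concrete illustration of the module division in Fig. 4, the two modules can be sketched as plain classes. This is a hypothetical software rendering only; the `play` and `speak` callables stand in for the terminal's actual playback and TTS interfaces, which the patent does not specify.

```python
# Illustrative sketch of the two modules of Fig. 4 as Python classes.
# The playback/TTS callables are hypothetical stand-ins for the
# terminal's real hardware interfaces.

class FirstKeyInfoModule:
    """Module 410: obtains the key information produced by the first key."""
    def get_key_info(self, raw_event):
        return {"key": "first", "event": raw_event}

class GuidingModule:
    """Module 420: plays pushed spoken content and issues guidance prompts."""
    def __init__(self, play, speak):
        self.play = play    # callable that plays pushed spoken content
        self.speak = speak  # callable that speaks instruction information

    def guide(self, key_info, word):
        """Play the pushed word and prompt the user, if the first key fired."""
        if key_info.get("key") == "first":
            self.play(word)
            self.speak(f"Please repeat after me: {word}")
            return True
        return False

# Usage: wire the guiding module to simple sinks and drive it from a key event.
played, spoken = [], []
device = GuidingModule(played.append, spoken.append)
info = FirstKeyInfoModule().get_key_info("press")
device.guide(info, "banana")
```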
An embodiment of the present application also provides a computer-readable storage medium: one or more non-volatile computer-readable storage media containing computer-executable instructions which, when executed by one or more processors, cause the processors to execute the steps of the spoken-language learning method of the terminal in any of the above embodiments.
Also provided is a computer program product containing instructions which, when run on a computer, causes the computer to execute the spoken-language learning method of the terminal in any of the above embodiments.
An embodiment of the present application also provides a terminal, which is an intelligent robot. The intelligent robot includes a memory and a processor; a computer program is stored in the memory, and when the computer program is executed by the processor, the processor executes the steps of the spoken-language learning method of the terminal in any of the above embodiments.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments are described; however, as long as such combinations contain no contradiction, they should all be considered within the scope of this specification.
The above embodiments express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for a person of ordinary skill in the art, various modifications and improvements can be made without departing from the inventive concept, and these all fall within the protection scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. A spoken-language learning method of a terminal, wherein the terminal is provided with a first key, and the method comprises:
obtaining first key information generated when the first key is operated;
playing spoken content to be learned pushed by the terminal according to the first key information, and generating instruction information according to the first key information, so as to guide a user to follow the broadcast and learn the spoken content to be learned.
2. The method according to claim 1, wherein the first key is a fingerprint-recognition touch key, and the method further comprises:
obtaining fingerprint feature information of the user when the user touches the first key;
associating the fingerprint feature information of the user with the spoken content the user has learned, and saving the fingerprint feature information of the user;
if it is detected that the user exits after spoken-language learning, then when the user re-enters spoken-language learning, obtaining the fingerprint feature information generated when the first key is touched, and if the newly obtained fingerprint feature information is detected to be consistent with the saved fingerprint feature information, playing spoken content pushed by the terminal other than the spoken content the user has learned.
3. The method according to claim 1, wherein the method further comprises:
obtaining the user pronunciation uttered by the user following the broadcast of the spoken content to be learned, and extracting voiceprint feature information of the user from the user pronunciation;
associating the voiceprint feature information of the user with the spoken content to be learned that the user has learned, and saving the voiceprint feature information of the user;
if it is detected that the user exits after spoken-language learning, then when the user pronunciation uttered following the broadcast is obtained again and the newly extracted voiceprint feature information matches the saved voiceprint feature information of the user, playing spoken content other than the associated spoken content.
4. The method according to claim 1, wherein the method further comprises:
obtaining the user pronunciation uttered by the user following the broadcast of the spoken content to be learned;
providing an original standard pronunciation of the spoken content to be learned, matching the original standard pronunciation with the user pronunciation to obtain a goodness of fit between the user pronunciation and the original standard pronunciation, and scoring the user pronunciation according to the goodness of fit;
feeding the scoring result back to the user, and determining according to the scoring result whether to play the next spoken content to be learned.
5. The method according to claim 4, wherein the terminal is provided with a second key, and the step of obtaining the user pronunciation uttered by the user following the broadcast of the spoken content to be learned is: obtaining the user pronunciation while obtaining second key information generated when the user operates the second key.
6. The method according to claim 4, wherein the step of determining according to the scoring result whether to play the next spoken content to be learned comprises:
if the score corresponding to the user pronunciation is below a preset score, playing the spoken content to be learned again to guide the user to read it aloud again, until the number of times the spoken content to be learned has been played reaches a preset number or the score corresponding to the user pronunciation of the spoken content to be learned reaches the preset score, and only then playing the next spoken content to be learned.
7. The method according to claim 4, wherein the step of providing an original standard pronunciation of the spoken content to be learned, matching the original standard pronunciation with the user pronunciation to obtain the goodness of fit between the user pronunciation and the original standard pronunciation, and scoring the user pronunciation according to the goodness of fit is executed by a cloud.
8. The method according to claim 6, wherein, if the score corresponding to the user pronunciation is below the preset score, the step of playing the spoken content to be learned again comprises:
if the score corresponding to the user pronunciation is below the preset score, extracting the incorrect pronunciation from the user pronunciation, providing the correct mouth shape for pronouncing the spoken content to be learned according to the incorrect pronunciation, and playing it to the user, so as to guide the user to pronounce the spoken content to be learned again according to the correct mouth shape.
9. A spoken-language learning device of a terminal, wherein the device comprises:
a first-key-information obtaining module, configured to obtain first key information generated when the first key is operated;
a guiding module, configured to play the spoken content pushed by the terminal according to the first key information, and to generate instruction information according to the first key information so as to guide the user to follow the broadcast and learn the spoken content to be learned.
10. A terminal, wherein the terminal is an intelligent robot, the intelligent robot comprises a memory and a processor, a computer program is stored in the memory, and when the computer program is executed by the processor, the processor executes the steps of the spoken-language learning method of the terminal according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810796365.8A CN109032707A (en) | 2018-07-19 | 2018-07-19 | Terminal and its verbal learning method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109032707A true CN109032707A (en) | 2018-12-18 |
Family
ID=64643500
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810796365.8A Pending CN109032707A (en) | 2018-07-19 | 2018-07-19 | Terminal and its verbal learning method and apparatus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109032707A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111681502A (en) * | 2020-04-03 | 2020-09-18 | 郭胜 | Intelligent word memorizing system and method |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202816170U (en) * | 2012-09-11 | 2013-03-20 | 安徽科大讯飞信息科技股份有限公司 | English learning device based on voice interaction |
CN103413550A (en) * | 2013-08-30 | 2013-11-27 | 苏州跨界软件科技有限公司 | Man-machine interactive language learning system and method |
CN104639742A (en) * | 2015-01-06 | 2015-05-20 | 广东小天才科技有限公司 | Method and device for assisting in learning speaking through mobile terminal |
CN104680859A (en) * | 2015-02-13 | 2015-06-03 | 绵阳点悟教育科技有限公司 | Independent study system and detection method |
US20150269942A1 (en) * | 2014-03-21 | 2015-09-24 | Wells Fargo Bank, N.A. | Enhanced fraud detection |
CN106057023A (en) * | 2016-06-03 | 2016-10-26 | 北京光年无限科技有限公司 | Intelligent robot oriented teaching method and device for children |
CN107194836A (en) * | 2017-04-26 | 2017-09-22 | 辽宁科技大学 | A kind of Multifunctional English teaching management device and its teaching method |
CN107301792A (en) * | 2017-05-25 | 2017-10-27 | 合肥矽智科技有限公司 | A kind of intelligence accompanies child's early education platform |
CN108039180A (en) * | 2017-12-11 | 2018-05-15 | 广东小天才科技有限公司 | A kind of achievement of childrenese expression practice learns method and microphone apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20181218 |