CN106354767A - Practicing system and method - Google Patents
- Publication number
- CN106354767A CN106354767A CN201610694985.1A CN201610694985A CN106354767A CN 106354767 A CN106354767 A CN 106354767A CN 201610694985 A CN201610694985 A CN 201610694985A CN 106354767 A CN106354767 A CN 106354767A
- Authority
- CN
- China
- Legal status: Pending (the status listed is an assumption and is not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/70—Information retrieval; Database structures therefor; File system structures therefor of video data
- G06F16/74—Browsing; Visualisation therefor
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
- G06F16/632—Query formulation
- G06F16/634—Query by example, e.g. query by humming
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/065—Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
Abstract
The invention provides a practicing system comprising: a storage unit that stores model animations for assisting a user in learning; an animation playing unit that plays the model animations stored in the storage unit; and a control unit that controls the playback operation of the animation playing unit. A model animation played by the animation playing unit comprises a picture showing a pronunciation pattern and sound information corresponding to that picture. The invention further provides a corresponding method for learning assistance.
Description
Technical field
The present invention relates to technology for assisting a user in practicing, and in particular to a practicing system and a practicing method.
Background art
Traditional language-teaching software generally expresses the pronunciation of a word or phonetic symbol by sound alone: the user learns the pronunciation by imitating the sound that is heard.
With this traditional learning method, the user can only practice and imitate by ear. The pronunciation the user actually produces therefore depends largely on the user's own perception of sound and ability to imitate, so different users studying the same word or phonetic symbol may end up with very different pronunciation errors. Moreover, a pronunciation learned this way may itself be wrong without the user ever noticing, so the experience such software offers is poor. At the same time, a user who wants to master the reading of a particular word or phonetic symbol may need a long time to do so.
Summary of the invention
The scope of the present invention is defined solely by the appended claims and is not limited in any way by the statements in this summary.
To overcome the above technical problems, the present invention provides (1) a practicing system comprising: a storage unit capable of storing model animations for assisting a user in learning; an animation playing unit capable of playing the model animations stored in the storage unit; and a control unit capable of controlling the playback operation of the animation playing unit; wherein a model animation played by the animation playing unit comprises a picture showing a pronunciation pattern and sound information corresponding to that picture.
(2) The practicing system according to (1), characterized in that the control unit can divide a display screen into a plurality of display interfaces including a first display interface and a second display interface, wherein the first display interface is used to play the model animation and the second display interface is used to show an image or video of the user imitating the pronunciation while learning.
(3) The practicing system according to (1) or (2), characterized in that the model animation is a 3D model animation of a human head, a see-through 3D model animation of a human head, or a 3D model animation of a human head in cross-section, and the control unit can switch among these 3D model animations based on a user instruction.
(4) The practicing system according to (2), characterized in that the control unit can adjust the positions of the first display interface and the second display interface on the display screen according to the position of the camera on the terminal device, so that the second display interface is always located at the end of the display screen nearest the camera.
(5) The practicing system according to (1) or (2), further comprising a recognition unit for identifying the basic phonetic symbol information contained in the content to be learned and forming a query instruction corresponding to that content; the control unit controls the animation playing unit to play the model animation based on the query instruction.
(6) The practicing system according to (5), characterized in that the recognition unit can also identify the basic phonetic symbols of the content to be learned, and the control unit can control the animation playing unit to play a model animation reflecting the overall pronunciation of the content to be learned or model animations reflecting the pronunciation of each basic phonetic symbol in that content.
(7) The practicing system according to (6), characterized in that, for each basic phonetic symbol identified by the recognition unit in the content to be learned, the control unit retrieves the parameters that determine the pronunciation of that basic phonetic symbol and, based on those parameters, controls the animation playing unit to play the model animations in the order of the basic phonetic symbols in the content to be learned.
(8) The practicing system according to (7), characterized in that the parameters include timeline information and position information corresponding to the mouth, tongue, teeth, and/or oral cavity.
(9) The practicing system according to (1) or (2), further comprising an operating unit through which the user inputs an instruction to retrieve a corresponding pronunciation model; according to that instruction, the control unit controls the animation playing unit to retrieve the corresponding pronunciation model from the storage unit and play it.
(10) The practicing system according to (1) or (2), characterized in that the animation playing unit can play the model animation at different playback speeds.
The present invention also provides (11) a practicing method comprising: establishing a mapping relation between the content to be learned and the model animations stored in a storage unit; extracting a model animation from the storage unit based on the mapping relation; and playing the extracted model animation; wherein the model animation played comprises a picture showing a pronunciation pattern and sound information corresponding to that picture.
(12) The practicing method according to (11), characterized in that a display screen is divided into a plurality of display interfaces including a first display interface and a second display interface, wherein the first display interface is used to play the model animation and the second display interface is used to show an image or video of the user imitating the pronunciation while learning.
(13) The practicing method according to (11) or (12), characterized in that the model animation is a 3D model animation of a human head, a see-through 3D model animation of a human head, or a 3D model animation of a human head in cross-section, and switching among these 3D model animations is possible based on a user instruction.
(14) The practicing method according to (12), characterized in that the positions of the first display interface and the second display interface on the display screen are adjusted according to the position of the camera on the terminal device, so that the second display interface is always located at the end of the display screen nearest the camera.
(15) The practicing method according to (11) or (12), further comprising: identifying the basic phonetic symbol information contained in the content to be learned and forming a query instruction corresponding to that content; and controlling the animation playing unit to play the model animation based on the query instruction.
(16) The practicing method according to (15), characterized in that the identifying step can also identify the basic phonetic symbols of the content to be learned, and the animation playing step plays a model animation reflecting the overall pronunciation of the content to be learned or model animations reflecting the pronunciation of each basic phonetic symbol in that content.
(17) The practicing method according to (16), characterized in that, for each identified basic phonetic symbol of the content to be learned, the parameters that determine the pronunciation of that basic phonetic symbol are retrieved and, based on those parameters, the model animations are played in the order of the basic phonetic symbols in the content to be learned.
(18) The practicing method according to (17), characterized in that the parameters include timeline information and position information corresponding to the mouth, tongue, teeth, and/or oral cavity.
(19) The practicing method according to (11) or (12), further comprising: retrieving a corresponding pronunciation model according to a user instruction, and playing that pronunciation model.
(20) The practicing method according to (11) or (12), characterized in that the animation playing step can play the model animation at different playback speeds.
With the practicing system provided by the present invention and the method of learning language pronunciation through this system, the system can play a model animation of a human head, and the user can learn pronunciation by observing, in the model animation, the changes in mouth shape and the changes in the positions of the tongue and/or teeth while a word is pronounced. While the animation plays, the system also plays the sound corresponding to the word or phonetic symbol, so the user can combine the video animation with the pronunciation, imitating the mouth shape and the tongue and/or tooth positions shown in the animation to learn the pronunciation of the word or phonetic symbol. This enables users to learn the pronunciation of new words or phonetic symbols better and faster.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the practicing system according to the present invention;
Fig. 2 is an example schematic diagram of computer-assisted learning with the practicing system according to the present invention;
Fig. 3 is another example schematic diagram of computer-assisted learning with the practicing system according to the present invention;
Fig. 4 is an example of split-screen display by the practicing system according to the present invention;
Fig. 5 is another example of split-screen display by the practicing system according to the present invention;
Fig. 6 is a workflow diagram of a user using the practicing system according to the present invention;
Fig. 7 is another workflow diagram of a user using the practicing system according to the present invention;
Fig. 8 is a playback flow diagram of the model animation according to the present invention;
Fig. 9 is a flow diagram of the phonetic symbol animation production unit according to the present invention synchronously generating the model animation of a word/phonetic symbol;
Fig. 10 is another workflow diagram of a user using the practicing system according to the present invention;
Fig. 11 is another example of the model animation according to an embodiment of the present invention;
Fig. 12 is another workflow diagram of model animation playback according to the present invention;
Fig. 13 is another workflow diagram of model animation playback according to the present invention.
Detailed description of embodiments
The present invention is described below with reference to the embodiments illustrated in the accompanying drawings. The embodiments disclosed here are to be regarded in all respects as illustrative and not restrictive.
Fig. 1 is a structural schematic diagram of the practicing system according to the present invention. As shown in Fig. 1, the practicing system 10 includes a control unit 11, a storage unit 12, a recognition unit 13, an input unit 14, an animation playing unit 15, a camera unit 16, and an operating unit 17. The practicing system 10 runs on a terminal device, which may be a mobile phone, a tablet, an electronic dictionary, a PC, or similar equipment.
The control unit 11, storage unit 12, recognition unit 13, input unit 14, animation playing unit 15, camera unit 16, and operating unit 17 are connected by a bus 18 so that the units can communicate with one another. The control unit 11 can control the actions of each of the other units.
The input unit 14 is used to input the content the user wants to learn. The storage unit 12 stores the model animations that assist the user's learning. The recognition unit 13 identifies the content to be learned that the user has input through the input unit 14, establishes a mapping relation between that content and the model animations stored in the storage unit 12, and forms a query instruction. The animation playing unit 15 plays the model animations stored in the storage unit 12; playing a model animation includes displaying its pictures together with the sound of the corresponding pronunciation. While the user studies the content input through the input unit 14, the camera unit 16 captures the user's pronunciation (including the positions and movements of the user's mouth shape, tongue, and/or teeth). The operating unit 17 is used to input the relevant operating instructions.
The control unit 11 can control the playback speed of the animation playing unit 15. In the model animation played by the animation playing unit 15, the user can see the positions and movements of the mouth shape, tongue, and/or teeth during the correct pronunciation of a word. The user can thus observe the word's pronunciation pattern directly, correct his or her own pronunciation accurately and promptly, and learn more efficiently.
For example, as shown in Fig. 2, when learning the English word "silver", the user can watch the pronunciation model animation of the word played by the animation playing unit 15 and practice imitating the positions and movements of the mouth shape, tongue, and/or teeth shown for the correct pronunciation of each phonetic symbol. As shown in Figs. 2a-2e, the pronunciation model animation of "silver" demonstrates the positions and movements of the mouth shape, tongue, and/or teeth during the correct pronunciation of its basic phonetic symbols s-i-l-v-r, so the user can intuitively observe and imitate the pronunciation pattern of each basic phonetic symbol and imitate the pronunciation more effectively. As shown in Figs. 2f-2g, after learning the pronunciation pattern of each basic phonetic symbol, the user can imitate the overall pronunciation of "silver" at different speeds.
In the above embodiment, preferably, when the user wants a more detailed understanding of the pronunciation pattern of a certain basic phonetic symbol, the user can input an instruction through the operating unit 17 to retrieve that pronunciation pattern. According to the user's instruction, the control unit 11 retrieves from the storage unit 12 the model animation of the pronunciation pattern the user wants to study in detail, and the animation playing unit 15 plays it. For example, as shown in Fig. 3, the pronunciation model of a basic phonetic symbol can be a model animation, in particular a 3D model animation. This 3D model animation includes an animation of a human head in cross-section or a see-through 3D model animation of a human head, which can show more clearly and in more detail the mouth shape, the tongue position, and the position of the teeth during the correct pronunciation of the basic phonetic symbol, as well as how they change during pronunciation.
In the above embodiment, preferably, after the practicing system 10 is started, the control unit 11 can split the display screen of the terminal device. For example, as shown in Fig. 4, the control unit 11 divides the display screen into two interfaces: a first display interface 151 that shows the model animation played by the animation playing unit 15, and a second display interface 152 that shows the image or video of the user imitating the pronunciation, captured by the camera unit 16 through the camera 153 of the terminal device. The second display interface 152 is located at the end of the display screen nearest the camera 153.
In the above embodiment, preferably, for a better user experience, the user can change the positions of the first display interface 151 and the second display interface 152 by changing the placement of the terminal device (for example, turning it upside down). As shown in Fig. 5, when the user turns the terminal device upside down so that the end with the camera 153 faces downward, a sensing device of the terminal device (not shown) detects the change in position and sends this information to the control unit 11. Based on the received position-change information, the control unit 11 rearranges the first display interface 151 and the second display interface 152 on the display screen so that the second display interface 152 remains at the end of the display screen nearest the camera 153 and continues to be displayed normally.
In the above embodiment, splitting the screen into the first display interface 151 and the second display interface 152 lets the user, while learning pronunciation, compare in real time the correct pronunciation pattern in the model animation shown on the first display interface 151 with the user's own pronunciation pattern shown on the second display interface 152, so that the differences between the two can be found and corrected promptly and effectively.
In the above embodiment, the operating unit 17 has functional operation buttons/icons (not shown). For example, the buttons/icons of the operating unit 17 may include start/pause, rotate or zoom the animation, search, retrieve a model animation, speed-adjusted playback, switch model, and/or replay the animation. When the user taps a button/icon, the control unit 11 makes the corresponding unit execute the operation associated with it. For example, when the user taps the speed-adjustment button, the control unit 11 adjusts the playback rhythm of the animation and audio according to the user's setting, changing the playback speed of each word's model animation and pronunciation until the speed satisfies the user. When the user taps the model-switching button, the control unit 11 switches the animation model, for example between model (a) and model (b) in Fig. 3, or between either of them and a see-through 3D model animation of the human head (not shown).
Fig. 6 is a workflow diagram of a user using the practicing system 10 according to the present invention. As shown in Fig. 6, the user first starts the practicing system 10 on the terminal device (step s1). After the system starts, it waits for the user to input the content to be learned. For example, when the practicing system 10 is used for learning English words, the user inputs the English word to be learned through the input unit 14. The control unit 11 judges whether the user has input content to be learned (step s2). If not (step s2: no), the practicing system 10 keeps waiting for input. If the user has input content to be learned (step s2: yes), the recognition unit 13 identifies the content, establishes a mapping relation between it and the model animations stored in the storage unit 12, and forms a query instruction (step s3). Based on the query instruction, the control unit 11 has the animation playing unit 15 extract the model animation stored in the storage unit 12 and play it (step s4).
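The Fig. 6 flow could be rendered roughly as below, reusing the illustrative unit classes sketched earlier; the `system` object bundling the units and the `identify` method standing in for the recognition unit 13 are assumptions of this sketch.

```python
def practice(system, content: str):
    """Steps s2-s4 of Fig. 6: identify the content, map it to stored
    model animations (the query instruction), then play them."""
    symbols = system.recognition_unit.identify(content)        # step s3
    query = [(s, system.storage.model_animations[s]) for s in symbols]
    for symbol, (animation, sound) in query:                   # step s4
        system.player.play(animation, sound)
```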
Fig. 7 is another workflow diagram of a user using the practicing system 10. As shown in Fig. 7, the user first starts the practicing system 10 on the terminal device (step s11). After the system starts, the control unit 11 splits the display screen of the terminal device (step s12). For example, the control unit 11 divides the display screen into the first display interface 151, for showing the model animation played by the animation playing unit 15, and the second display interface 152, for showing the image or video of the user imitating the pronunciation, captured by the camera unit 16 through the camera 153 of the terminal device.
The control unit 11 judges whether the user has adjusted the position of the terminal device, for example turned it upside down (step s13). If so (step s13: yes), the control unit 11 rearranges the first display interface 151 and the second display interface 152 so that the second display interface 152 remains at the end of the display screen nearest the camera 153 (step s14). If the user has not adjusted the terminal device (step s13: no), the flow proceeds directly to step s15.
The control unit 11 judges whether the user has input content to be learned (step s15). If not (step s15: no), the practicing system 10 keeps waiting for input. If the user has input content to be learned (step s15: yes), the recognition unit 13 identifies the content and establishes a mapping relation between it and the model animations stored in the storage unit 12 (step s16). The animation playing unit 15 extracts the model animation from the storage unit 12 according to the mapping relation and plays it on the first display interface 151 (step s17). The camera unit 16 shows the image or video of the user imitating the pronunciation, captured through the camera 153 of the terminal device, on the second display interface 152 (step s18).
Fig. 8 is the playback flow diagram of the model animation in step s4 or step s17. As shown in Fig. 8, the animation playing unit 15 plays the model animation related to the content to be learned (step s21). The control unit 11 judges whether the user needs a clearer pronunciation model of a certain basic phonetic symbol (as shown in Fig. 3) (step s22). While the animation playing unit 15 plays the model animation related to the content to be learned, the user can operate the buttons/icons of the operating unit 17 as needed to pause the playback and decide whether the pronunciation of a particular basic phonetic symbol in the content should be studied clearly and in detail. If so, the user retrieves the model animation of that basic phonetic symbol's pronunciation pattern by operating the corresponding button or icon of the operating unit 17, obtaining a clearer pronunciation model. For example, if the user needs a clear, detailed view of the pronunciation pattern of the basic phonetic symbol "l" in "silver", the user can operate the corresponding button or icon of the operating unit 17 to retrieve the model animation of that pronunciation pattern, preferably a 3D model diagram (as shown in Fig. 3).
If the control unit 11 judges that the user needs a clearer pronunciation model of a certain basic phonetic symbol, such as a 3D model diagram (step s22: yes), the animation playing unit 15 obtains the clearer pronunciation model of that basic phonetic symbol stored in the storage unit 12 (step s23) and displays it on the display screen of the terminal device (step s24). After finishing with this clearer pronunciation model, the user can operate the corresponding button or icon of the operating unit 17 so that the animation playing unit 15 resumes the previous basic-phonetic-symbol pronunciation model animation. The control unit 11 then judges whether an instruction to resume playing the basic-phonetic-symbol pronunciation model animation has been received (step s25).
If no such instruction has been received (step s25: no), the animation playing unit 15 stays in a waiting state. If the instruction has been received (step s25: yes), the flow proceeds to step s26.
If the control unit 11 judges that the user does not need a clearer pronunciation model of a certain basic phonetic symbol (step s22: no), the flow proceeds directly to step s26.
In step s26, the control unit 11 judges whether the currently played basic phonetic symbol is the last one in the content the user wants to learn. If it is not (step s26: no), the animation playing unit 15 continues with the next basic phonetic symbol. For example, as shown in Figs. 2a-2e, if the control unit 11 judges that the currently played basic phonetic symbol "i" (see Fig. 2b) is not the last one in the content, the animation playing unit 15 goes on to play the next basic phonetic symbol "l" (see Fig. 2c).
If the control unit 11 judges that the currently played basic phonetic symbol is the last one in the content (step s26: yes), the animation playing unit 15 plays the overall pronunciation model animation of the content (step s27). For example, as shown in Figs. 2e-2g, if the control unit 11 judges that the currently played basic phonetic symbol "r" (see Fig. 2e) is the last one, the animation playing unit 15 plays the overall pronunciation model animation of the content "silver" (see Figs. 2f-2g).
The above embodiment illustrates the overall pronunciation model animation of a single word, but the invention is not limited to this; it may also be the overall pronunciation model animation of a phrase or of a sentence.
In step s27, the animation playing unit 15 can play the overall continuous pronunciation model animation of the content at different playback speeds (see Figs. 2f-2g).
Fig. 9 is the flow diagram of the phonetic symbol animation production unit according to the present invention synchronously generating the model animation of a word/phonetic symbol. As shown in Fig. 9, the pronunciation information and model animation diagrams of the transition processes between all basic phonetic symbols are produced in advance for the phonetic symbol audio library and phonetic symbol animation library of the storage unit 12 (step s31). For example, suppose the storage unit 12 holds 8 basic phonetic symbols whose animation diagrams correspond to the numbers 1 to 8 and whose pronunciations correspond to 1' to 8'. The pre-produced model animation diagrams of all transition processes are then 1-to-2, 1-to-3, ..., 1-to-8, 2-to-1, 2-to-3, ..., 2-to-8, ..., 8-to-7, and the corresponding pronunciations of all transition processes are 1'-to-2', 1'-to-3', ..., 1'-to-8', 2'-to-1', 2'-to-3', ..., 2'-to-8', ..., 8'-to-7'. The model animation diagrams and audio of these transition processes are stored in the phonetic symbol animation library and the phonetic symbol audio library of the storage unit 12, respectively.
A word whose pronunciation is to be learned is converted into its word phonetic symbols (step s32). Using the pronunciation information and model animation diagrams of the transition processes produced in step s31, the transitions between basic phonetic symbols are looked up in the phonetic symbol animation library and phonetic symbol audio library in the order of the word's phonetic symbols and played in that order. The model animation diagrams played in word-phonetic-symbol order and the corresponding pronunciation information are stored in the storage unit 12 (step s33). For example, when the word is "silver", the system converts it into the word phonetic symbols [silvr] (as in Fig. 2a), looks up the models of the stored basic phonetic symbols and their transition states in the storage unit 12 one by one according to [silvr], and plays the phonetic symbols s-i-l-v-r in order. Whenever a basic phonetic symbol is read, that symbol is highlighted, as shown in Figs. 2a-2e. Fig. 2(a) is a screenshot of the animation while the system reads the phonetic symbol [s]: it displays the word "silver" being read, highlights the phonetic symbol [s] currently read, shows the remaining unread phonetic symbols of [silvr], and shows the mouth shape for reading [s]. Figs. 2b-2e are the animation images while the system reads the phonetic symbols [i], [l], [v], and [r], respectively.
The word model generated in step s33 plays slowly, phonetic symbol by phonetic symbol. On the basis of the model generated in step s33, model training is performed on the word model to generate models in which the word's phonetic symbols are played continuously and at different playback speeds, and these are also stored in the storage unit (step s34). Generating the continuously played word-phonetic-symbol models requires repeated coupled learning and calibration. The system can set several playback-speed levels according to word length and play them in sequence; preferably, four to five playback-speed models are generated for each word.
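The transition libraries of step s31 and the lookup of step s33 can be sketched as follows (Python; the eight-symbol numbering follows the example above, while the library layout and function names are assumptions of this sketch).

```python
from itertools import permutations

SYMBOLS = range(1, 9)  # 8 basic phonetic symbols, numbered 1..8 (sounds 1'..8')

# Step s31: pre-produce every transition pair 1-to-2, 1-to-3, ..., 8-to-7
animation_library = {(a, b): f"anim_{a}_to_{b}" for a, b in permutations(SYMBOLS, 2)}
audio_library = {(a, b): f"sound_{a}'_to_{b}'" for a, b in permutations(SYMBOLS, 2)}

def word_playback_sequence(word_symbols):
    """Step s33: look up the transition between each adjacent pair of the
    word's phonetic symbols and return them in playback order."""
    pairs = zip(word_symbols, word_symbols[1:])
    return [(animation_library[p], audio_library[p]) for p in pairs]

# e.g. a word whose symbols are numbered [2, 5, 1] plays 2-to-5 then 5-to-1
print(word_playback_sequence([2, 5, 1]))
```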
During imitation practice, the user can also use the operation buttons of the operating unit 17 as needed to rotate, zoom, or replay the animation, so as to see the model animation more clearly and learn more conveniently.
In the practicing system and practicing method according to the present invention, the storage unit 12 stores both the symbol-by-symbol models of a word's phonetic symbols and the continuously played models, and during learning the user works through animations of different playback speeds one by one; however, the invention is not limited to this. The user can tap the speed-adjustment button of the operating unit 17 to set the playback speed of the word learning model and practice at that specific speed, so that the system meets the user's learning needs.
The practicing system and practicing method according to the present invention can be applied not only to computer-assisted learning of English but also to computer-assisted learning of various languages such as Chinese, Japanese, French, and German.
In the above embodiment, as shown in Fig. 7, the control unit 11 splits the display screen of the terminal device after the user starts the practicing system 10. However, the invention is not limited to this. For example, the control unit 11 may perform the split-screen step when the recognition unit 13 identifies the content to be learned, as described below with reference to Fig. 10.
Fig. 10 is another workflow diagram of a user using the practicing system 10. As shown in Fig. 10, the user first starts the practicing system 10 on the terminal device (step s41). After the system starts, the control unit 11 judges whether the user has input content to be learned (step s42). If not (step s42: no), the practicing system 10 keeps waiting for input. If the user has input content to be learned (step s42: yes), the recognition unit 13 identifies the content, establishes a mapping relation between it and the model animations stored in the storage unit 12, and forms a query instruction (step s43).
The control unit 11 then splits the display screen of the terminal device (step s44). For example, the control unit 11 divides the display screen into the first display interface 151, for showing the model animation played by the animation playing unit 15, and the second display interface 152, for showing the image or video of the user imitating the pronunciation, captured by the camera unit 16 through the camera 153 of the terminal device.
The control unit 11 judges whether the user has adjusted the position of the terminal device, for example turned it upside down (step s45). If so (step s45: yes), the control unit 11 rearranges the first display interface 151 and the second display interface 152 so that the second display interface 152 remains at the end of the display screen nearest the camera 153 (step s46). If the user has not adjusted the terminal device (step s45: no), the flow proceeds directly to step s47.
According to the query instruction, the control unit 11 has the animation playing unit 15 extract the model animation stored in the storage unit 12 and play it on the first display interface 151 (step s47). The camera unit 16 shows the image or video of the user imitating the pronunciation, captured through the camera 153 of the terminal device, on the second display interface 152 (step s48).
In the above embodiment, preferably, the recognition unit 13 can identify each basic phonetic symbol of the learning content, and the animation playing unit 15 can obtain the model animation and pronunciation information corresponding to each identified basic phonetic symbol and play them either symbol by symbol or word by word. In this way, the dictionary system of the present invention need not store a model animation and pronunciation information for every word; instead, by identifying the basic phonetic symbols and combining the model animations and pronunciation information corresponding to them, it plays the model animation and pronunciation matching the content to be learned, which effectively saves storage resources in the dictionary system and the cost of producing it.
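A sketch of the saving described here, assuming Python: rather than one stored animation per word, only the per-symbol models are stored and concatenated on demand (the helper name is illustrative).

```python
def compose_word_animation(storage, symbols):
    """Build a word's playback list from per-symbol model animations only.
    With M basic phonetic symbols this stores M models instead of one
    animation per dictionary word."""
    return [storage.model_animations[s] for s in symbols]  # played back-to-back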
Fig. 11 is another example of the model animation according to an embodiment of the present invention, described here using the English word "black" as an example; other English words and words of other languages can be processed in the same way. As shown in Fig. 11, Figs. 11(a)-(d) correspond respectively to the pronunciation models of the individual basic phonetic symbols, that is, to their detailed pronunciation models. When the user inputs the word "black", the recognition unit 13 can identify it as the word phonetic symbols [blak], that is, the basic phonetic symbols [b]-[l]-[a]-[k]. Based on this recognition result, the recognition unit 13 forms query instructions corresponding to these basic phonetic symbols and sends them to the control unit 11, which, on receiving each query instruction, retrieves the corresponding model from the storage unit 19. For example, if the query instruction received by the control unit 11 corresponds to the basic phonetic symbol [b], that symbol is associated with the model of Fig. 11(a) and the corresponding pronunciation model is retrieved from the storage unit; if it corresponds to [l], the symbol is associated with the model of Fig. 11(b) and the corresponding pronunciation model is retrieved; if it corresponds to [a], the symbol is associated with the model of Fig. 11(c) and the corresponding pronunciation model is retrieved; and if it corresponds to [k], the symbol is associated with the model of Fig. 11(d) and the corresponding pronunciation model is retrieved. The retrieved models of Figs. 11(a)-(d) are then sent to the animation playing unit 15 and played in the order of the basic phonetic symbols in the word.
If the user needs a clear understanding of the detailed pronunciation pattern of the basic phonetic symbol [b], [l], [a], or [k] in "black", the user can operate the corresponding button or icon of the operating unit 17 to send the control unit an instruction to retrieve the detailed pronunciation pattern of that symbol. Based on the instruction, the control unit associates the symbol with the model of Fig. 11(a), (b), (c), or (d) and retrieves the corresponding detailed pronunciation model from the storage unit.
When the basic-phonetic-symbol pronunciation model and/or the detailed pronunciation model is called up, the corresponding sound data can be retrieved at the same time, so that the model and the sound are played synchronously.
Fig. 12 is another workflow diagram of model animation playback according to the present invention. As shown in Fig. 12, after the control unit 11 receives a play instruction (step s201), it has the recognition unit 13 identify the basic phonetic symbol information in the learning content the user entered through the input unit 14 and form query instructions associated with each basic phonetic symbol (step s202), which are sent to the control unit 11 (step s203). After receiving the query instructions, the control unit 11 processes them, associates them with the pronunciation-model data in the corresponding storage unit 19, and retrieves the related pronunciation-model data stored there (step s204). The control unit 11 pushes the data of the related pronunciation models to the animation playing unit 15 (step s205), and the animation playing unit 15 plays the pronunciation model of each basic phonetic symbol in turn, in the order of the basic phonetic symbols in the learning content (step s206).
In the above embodiments, the model animations, basic-phonetic-symbol models, and/or detailed pronunciation models corresponding to each word and/or basic phonetic symbol are produced in advance and stored in the corresponding storage unit. However, the present invention can also work without pre-producing and storing these models. Instead, after a 3D model of a human head is built in advance (as shown in Fig. 3), a parameter instruction for determining the pronunciation of each basic phonetic symbol is set for that symbol. The parameter instruction controls the movement track of each organ of the 3D model (for example the mouth, tongue, teeth, and oral cavity) within a specified time, and the parameters in the instruction calibrate the position of each organ on the time axis during the pronunciation of that basic phonetic symbol. Each parameter in a basic phonetic symbol's parameter instruction includes timeline information and position information corresponding to organs such as the mouth, tongue, teeth, and oral cavity (for example, information determining the pronunciation, such as how wide the mouth opens, the tongue position, how far the teeth open, and/or the size of the oral cavity space). The parameters of each basic phonetic symbol can be set in advance and stored in the storage unit 19. This parameter scheme is illustrated in the sketch below and described further with reference to Fig. 13.
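By way of illustration, such a parameter instruction could be represented as a keyframe sequence per basic phonetic symbol, as in the Python sketch below; the field names and values are assumptions of this sketch, since the patent only requires timeline and position information for the mouth, tongue, teeth, and/or oral cavity.

```python
from dataclasses import dataclass

@dataclass
class ArticulatorKeyframe:
    """One point on the time axis for the organs of the shared 3D head model."""
    time_s: float              # timeline information
    mouth_opening: float       # how wide the mouth opens
    tongue_position: tuple     # e.g. (x, y, z) in model coordinates
    teeth_opening: float       # how far the teeth open
    oral_cavity_size: float    # size of the oral cavity space

# A basic phonetic symbol's parameter instruction is then a keyframe list
# that the control unit 11 uses to drive the 3D model (values invented):
symbol_parameters = {
    "b": [ArticulatorKeyframe(0.00, 0.0, (0.0, 0.0, 0.0), 0.0, 0.2),
          ArticulatorKeyframe(0.12, 0.6, (0.0, 1.0, 0.0), 0.3, 0.5)],
}
```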
Fig. 13 is another workflow diagram of model animation playback according to the present invention. As shown in Fig. 13, after the control unit 11 receives a play instruction (step s301), it retrieves the 3D model (see Fig. 3) from the storage unit 19 (step s302), has the recognition unit 13 identify the basic phonetic symbol information in the learning content the user entered through the input unit 14, and forms query instructions associated with each basic phonetic symbol (step s303). The query instructions are sent to the control unit 11 (step s304). After receiving them, the control unit 11 processes the query instructions and retrieves the parameters of the models related to each basic phonetic symbol stored in the storage unit 19 (step s305). Based on the retrieved model parameters, the control unit 11 controls the animation playing unit 15 to play the 3D model of each basic phonetic symbol in turn, in the order of the basic phonetic symbols in the learning content (step s306).
For example, the models in Fig. 11 need not be pre-produced and stored in the storage unit 19; they can instead be pictures played on the animation playing unit 15 according to the model parameters set for each basic phonetic symbol. Specifically, for the basic phonetic symbols [b], [l], [a], and [k] in "black", timeline information and position information corresponding to organs such as the mouth, tongue, teeth, and oral cavity during the pronunciation of each symbol are set as the model parameters. When the animation playing unit 15 plays the pronunciation animation of the word "black", it only needs to retrieve the model parameters related to [b], [l], [a], and [k] and play them on the 3D model (as shown in Fig. 3); there is no need to pre-produce and store an animation model for every basic phonetic symbol, which further saves storage resources and production cost in the system.
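Under the keyframe scheme sketched above, playing "black" then reduces to retrieving four parameter lists and posing the one shared 3D model; the `pose` and `render` calls are hypothetical stand-ins for the animation playing unit 15.

```python
def play_word(player, model_3d, symbol_parameters, symbols=("b", "l", "a", "k")):
    for s in symbols:                        # order of the symbols in the word
        for kf in symbol_parameters[s]:      # step s306: pose the organs per keyframe
            model_3d.pose(kf)
        player.render(model_3d)
```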
In addition, if the user needs a clear understanding of the detailed pronunciation pattern of the basic phonetic symbol [b], [l], [a], or [k] in "black", the user can operate the corresponding button or icon of the operating unit 17 to switch between the two 3D models in Fig. 3(a) and (b), or between either of them and a see-through 3D model animation of the human head (not shown).
The scope of the present invention is not limited by the description of the above embodiments but only by the scope of the appended claims, and it includes all modifications within the meaning and scope equivalent to the claims.
Claims (10)
1. A practicing system, comprising:
a storage unit capable of storing model animations for assisting a user in learning;
an animation playing unit capable of playing the model animations stored in the storage unit;
a control unit capable of controlling the playback operation of the animation playing unit;
wherein a model animation played by the animation playing unit comprises a picture showing a pronunciation pattern and sound information corresponding to that picture.
2. The practicing system according to claim 1, characterized in that:
the control unit can divide a display screen into a plurality of display interfaces including a first display interface and a second display interface;
wherein the first display interface is used to play the model animation, and the second display interface is used to show an image or video of the user imitating the pronunciation while learning.
3. The practicing system according to claim 1 or 2, characterized in that:
the model animation is a 3D model animation of a human head, a see-through 3D model animation of a human head, or a 3D model animation of a human head in cross-section;
based on a user instruction, the control unit can switch among the 3D model animation of the human head, the see-through 3D model animation of the human head, and the 3D model animation of the human head in cross-section.
4. The practicing system according to claim 2, characterized in that:
the control unit can adjust the positions of the first display interface and the second display interface on the display screen according to the position of the camera on the terminal device, so that the second display interface is always located at the end of the display screen nearest the camera.
5. The practicing system according to claim 1 or 2, characterized by further comprising:
a recognition unit for identifying the basic phonetic symbol information contained in the content to be learned and forming a query instruction corresponding to the content to be learned;
wherein the control unit controls the animation playing unit to play the model animation based on the query instruction.
6. The practicing system according to claim 5, characterized in that:
the recognition unit can also identify the basic phonetic symbols of the content to be learned;
the control unit can control the animation playing unit to play a model animation reflecting the overall pronunciation information of the content to be learned, or model animations reflecting the pronunciation information of each basic phonetic symbol in the content to be learned.
7. The practicing system according to claim 6, characterized in that:
for each basic phonetic symbol of the content to be learned identified by the recognition unit, the control unit retrieves the parameters for determining the pronunciation of that basic phonetic symbol, and based on those parameters controls the animation playing unit to play the model animations in the order of the basic phonetic symbols in the content to be learned.
8. The practicing system according to claim 7, characterized in that the parameters include timeline information and position information corresponding to the mouth, tongue, teeth, and/or oral cavity.
9. The practicing system according to claim 1 or 2, characterized by further comprising an operating unit through which the user inputs an instruction to retrieve a corresponding pronunciation model;
wherein, according to the instruction to retrieve the corresponding pronunciation model, the control unit controls the animation playing unit to retrieve the corresponding pronunciation model from the storage unit and play it.
10. The practicing system according to claim 1 or 2, characterized in that:
the animation playing unit can play the model animation at different playback speeds.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610694985.1A CN106354767A (en) | 2016-08-19 | 2016-08-19 | Practicing system and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610694985.1A CN106354767A (en) | 2016-08-19 | 2016-08-19 | Practicing system and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106354767A (en) | 2017-01-25 |
Family
ID=57843604
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610694985.1A Pending CN106354767A (en) | 2016-08-19 | 2016-08-19 | Practicing system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106354767A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107274736A (en) * | 2017-08-14 | 2017-10-20 | 牡丹江师范学院 | A kind of interactive Oral English Practice speech sound teaching apparatus in campus |
CN108566519A (en) * | 2018-04-28 | 2018-09-21 | 腾讯科技(深圳)有限公司 | Video creating method, device, terminal and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1851779A (en) * | 2006-05-16 | 2006-10-25 | 黄中伟 | Multi-language available deaf-mute language learning computer-aid method |
CN101290720A (en) * | 2008-06-17 | 2008-10-22 | 李伟 | Visualized pronunciation teaching method and apparatus |
WO2009066963A2 (en) * | 2007-11-22 | 2009-05-28 | Intelab Co., Ltd. | Apparatus and method for indicating a pronunciation information |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1851779A (en) * | 2006-05-16 | 2006-10-25 | 黄中伟 | Multi-language available deaf-mute language learning computer-aid method |
WO2009066963A2 (en) * | 2007-11-22 | 2009-05-28 | Intelab Co., Ltd. | Apparatus and method for indicating a pronunciation information |
CN101290720A (en) * | 2008-06-17 | 2008-10-22 | 李伟 | Visualized pronunciation teaching method and apparatus |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107274736A (en) * | 2017-08-14 | 2017-10-20 | 牡丹江师范学院 | A kind of interactive Oral English Practice speech sound teaching apparatus in campus |
CN107274736B (en) * | 2017-08-14 | 2019-03-12 | 牡丹江师范学院 | A kind of interactive Oral English Practice speech sound teaching apparatus in campus |
CN108566519A (en) * | 2018-04-28 | 2018-09-21 | 腾讯科技(深圳)有限公司 | Video creating method, device, terminal and storage medium |
US11257523B2 (en) | 2018-04-28 | 2022-02-22 | Tencent Technology (Shenzhen) Company Limited | Video production method, computer device, and storage medium |
CN108566519B (en) * | 2018-04-28 | 2022-04-12 | 腾讯科技(深圳)有限公司 | Video production method, device, terminal and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108492817A (en) | A kind of song data processing method and performance interactive system based on virtual idol | |
KR100900085B1 (en) | Language learning control method | |
CN109584648A (en) | Data creation method and device | |
CN108648535A (en) | A kind of tutoring system and its operation method based on the mobile terminals VR technology | |
CN108520650A (en) | A kind of intelligent language training system and method | |
CN106991094A (en) | Foreigh-language oral-speech is spoken learning system, method and computer program | |
US20100299137A1 (en) | Storage medium storing pronunciation evaluating program, pronunciation evaluating apparatus and pronunciation evaluating method | |
CN103080991A (en) | Music-based language-learning method, and learning device using same | |
CN104021326B (en) | A kind of Teaching Methods and foreign language teaching aid | |
CN102663925A (en) | Method and system for tongue training for language training of hearing-impaired children | |
WO2014151884A2 (en) | Device, method, and graphical user interface for a group reading environment | |
CN103377568A (en) | Multifunctional child somatic sensation educating system | |
CN101488121A (en) | Language learning method | |
CN109300469A (en) | Simultaneous interpretation method and device based on machine learning | |
CN108231066A (en) | Speech recognition system and method thereof and vocabulary establishing method | |
CN112053595B (en) | Computer-implemented training system | |
CN106354767A (en) | Practicing system and method | |
US8629341B2 (en) | Method of improving vocal performance with embouchure functions | |
Wik | The Virtual Language Teacher: Models and applications for language learning using embodied conversational agents | |
CN106373174A (en) | Model animation play system, and dictionary query system and method | |
Liu et al. | An interactive speech training system with virtual reality articulation for Mandarin-speaking hearing impaired children | |
Duffy | Shaping musical performance through conversation | |
JP4651981B2 (en) | Education information management server | |
JP3569278B1 (en) | Pronunciation learning support method, learner terminal, processing program, and recording medium storing the program | |
CN111311713A (en) | Cartoon processing method, cartoon display device, cartoon terminal and cartoon storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20170125 |