CN106373174A - Model animation play system, and dictionary query system and method - Google Patents


Info

Publication number: CN106373174A
Application number: CN201610695282.0A
Authority: CN (China)
Prior art keywords: animation, model, content, unit, pronunciation
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 杰瑞·超·戴 (Jerry Chao Dai)
Current Assignee: First Co Ltd
Original Assignee: First Co Ltd
Application filed by First Co Ltd
Priority to CN201610695282.0A
Publication of CN106373174A

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00: Animation
    • G06T 13/20: 3D [Three Dimensional] animation
    • G06T 13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/06: Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
    • G09B 5/065: Combinations of audio and video presentations, e.g. videotapes, videodiscs, television systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2213/00: Indexing scheme for animation
    • G06T 2213/12: Rule-based animation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Business, Economics & Management (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Electrically Operated Instructional Devices (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a model animation playback system. The system comprises an input unit for entering the content to be queried; a display unit for displaying information related to the queried content on a display interface; an animation play instruction unit that issues an animation play instruction via an animation play icon shown on the display interface together with the information related to the queried content; and an animation playback unit that, according to the animation play instruction, plays a model animation that helps the user learn the queried content. The invention further provides a dictionary query system and a dictionary query method.

Description

Model animation playback system, dictionary query system and method
Technical field
The present invention relates to animation playback technology, and in particular to a model animation playback system and its application in a dictionary query system and query method.
Technical background
In a traditional animation playback system, the corresponding animations and pronunciation information must be produced and stored in advance, and the model animations must then be associated with the pronunciation information again at playback time. A traditional animation playback system therefore has to store pre-made model animations and pronunciation information, which requires a large amount of storage space and places high demands on storage resources. It also makes the related system costly to produce.
In addition, a traditional online dictionary query system usually expresses the pronunciation of a word or phonetic symbol by playing an audio file embedded in the word's entry, and the user learns the pronunciation of the word or phonetic symbol by imitating the simulated sound they hear. As shown in Fig. 1, when the user wants to hear the simulated pronunciation of the word "silver", clicking the sound icon plays the simulated sound of the word.
However, with this traditional pronunciation aid in online dictionary software, the user can only learn by listening to and imitating the sound. The pronunciation the user actually produces for a word or phonetic symbol then depends largely on the user's own perception of sound and ability to imitate, so different users learning the same word or phonetic symbol may end up with very different pronunciation errors. Moreover, the pronunciation learned through such traditional language-teaching software may be wrong, and the user often cannot detect this, so the experience such software offers is poor. It may also take the user a long time to master the reading of a particular word or phonetic symbol.
Summary of the invention
The scope of the present invention is defined only by the appended claims and is not limited in any way by the statements in this summary.
To overcome the above technical problems, the present invention provides (1) a model animation playback system, comprising: an animation playback unit that plays the model animation corresponding to the content to be queried; and a control unit that can control the playback operation of the animation playback unit; wherein the model animation is one of a model animation reflecting the overall pronunciation information of the content to be queried and a model animation reflecting the pronunciation information of each basic phonetic symbol in the content to be queried.
(2) The model animation playback system according to (1), characterized in that: the control unit, for each basic phonetic symbol of the content to be queried, retrieves the parameters corresponding to that symbol for determining its pronunciation, and, based on those parameters, controls the animation playback unit to play the model animations in the order in which the basic phonetic symbols occur in the content to be queried.
(3) The model animation playback system according to (2), characterized in that the parameters include timeline information and position information corresponding to the lips, tongue, teeth and/or oral cavity.
(4) The model animation playback system according to any one of (1) to (3), characterized in that: the model animation is a 3D model animation of a human head, a see-through (perspective) 3D model animation of a human head, or a 3D model animation of a human head cross-section; and, based on a user instruction, the control unit can switch between the 3D model animation of the human head, the see-through 3D model animation of the human head, and the 3D model animation of the human head cross-section.
(5) The model animation playback system according to (2), characterized in that the system further includes a storage unit that stores the model animations, the corresponding pronunciation information and the parameters.
(6) A dictionary query system, comprising: an input unit for entering the content to be queried; a display unit for displaying information related to the content to be queried on a display interface; an animation play instruction unit that issues an animation play instruction via an animation play icon shown on the display interface together with the information related to the content to be queried; an animation playback unit that plays the model animation corresponding to the content to be queried according to the animation play instruction; and a control unit that can control the playback operation of the animation playback unit.
(7) The dictionary query system according to (6), characterized in that it further includes a recognition unit that identifies the content input by the user and forms the query instruction corresponding to the content to be queried; the display unit obtains and displays the information corresponding to the content to be queried based on the query instruction.
(8) The dictionary query system according to (7), characterized in that: the recognition unit can also identify the basic phonetic symbols of the content to be queried; and the control unit can control the animation playback unit to play either a model animation reflecting the overall pronunciation information of the content to be queried or model animations reflecting the pronunciation information of each basic phonetic symbol in the content to be queried.
(9) The dictionary query system according to (8), characterized in that: the control unit, for each basic phonetic symbol of the content to be queried identified by the recognition unit, retrieves the parameters corresponding to that symbol for determining its pronunciation, and, based on those parameters, controls the animation playback unit to play the model animations in the order in which the basic phonetic symbols occur in the content to be queried.
(10) The dictionary query system according to (9), characterized in that the parameters include timeline information and position information corresponding to the lips, tongue, teeth and/or oral cavity.
(11) The dictionary query system according to any one of (6) to (10), characterized in that: the model animation is a 3D model animation of a human head, a see-through 3D model animation of a human head, or a 3D model animation of a human head cross-section; and, based on a user instruction, the control unit can switch between the 3D model animation of the human head, the see-through 3D model animation of the human head, and the 3D model animation of the human head cross-section.
(12) The dictionary query system according to any one of (6) to (10), characterized in that it further includes an operation unit through which the user inputs an instruction to retrieve a corresponding pronunciation model; according to that instruction, the control unit controls the animation playback unit to retrieve and play the corresponding pronunciation model.
(13) The dictionary query system according to any one of (6) to (10), characterized in that the animation playback unit can play the overall pronunciation animation at different playback speeds.
(14) The dictionary query system according to (9), characterized in that the dictionary query system further includes a storage unit that stores the model animations, the corresponding pronunciation information and the parameters.
(15) A dictionary query method, comprising: inputting the content to be queried; displaying information related to the content to be queried on a display interface; issuing an animation play instruction via an animation play icon shown on the display interface together with the information related to the content to be queried; and playing the model animation corresponding to the content to be queried according to the animation play instruction.
(16) The dictionary query method according to (15), characterized in that it further comprises: identifying the content input by the user and forming the query instruction corresponding to the content to be queried; and obtaining and displaying the information corresponding to the content to be queried based on the query instruction.
(17) The dictionary query method according to (16), characterized in that: in the identification step, the basic phonetic symbols of the content to be queried can also be identified; and in the model animation playing step, the model animation played is either a model animation reflecting the overall pronunciation information of the content to be queried or a model animation reflecting the pronunciation information of each basic phonetic symbol in the content to be queried.
(18) The dictionary query method according to (17), characterized in that: for each identified basic phonetic symbol of the content to be queried, the parameters corresponding to that symbol for determining its pronunciation are retrieved, and, based on those parameters, the model animations are played in the order in which the basic phonetic symbols occur in the content to be queried.
(19) The dictionary query method according to (18), characterized in that the parameters include timeline information and position information corresponding to the lips, tongue, teeth and/or oral cavity.
(20) The dictionary query method according to any one of (15) to (19), characterized in that: the model animation is a 3D model animation of a human head, a see-through 3D model animation of a human head, or a 3D model animation of a human head cross-section; and, based on a user instruction, it is possible to switch between the 3D model animation of the human head, the see-through 3D model animation of the human head, and the 3D model animation of the human head cross-section.
(21) The dictionary query method according to any one of (15) to (19), characterized in that it further comprises: retrieving a corresponding pronunciation model according to an instruction to retrieve the pronunciation model, and playing that pronunciation model.
(22) The dictionary query method according to any one of (15) to (19), characterized in that, in the animation playing step, the overall pronunciation animation can be played at different playback speeds.
The model animation playback system provided by the present invention does not need to pre-produce and store the corresponding animations and pronunciation information, which effectively saves storage resources in the related system and the cost of producing it.
In addition, the present invention provides a dictionary query system and query method that can play a model animation of a human head. The user can learn pronunciation by observing, in the model animation, how the shape of the mouth and the positions of the tongue and/or teeth change while the word is pronounced. While playing the animation, the system also plays the sound corresponding to the word or phonetic symbol, so that the user can combine the video animation with the pronunciation, imitate the mouth shape and the tongue and/or teeth positions shown in the animation, and learn the pronunciation of the word or phonetic symbol, enabling the user to learn the pronunciation of new words or phonetic symbols better and faster.
Brief description of the drawings
Fig. 1 is a schematic diagram of the display interface of an existing dictionary query system;
Fig. 2 is a structural diagram of the query operating system according to Embodiment 1 of the present invention;
Fig. 3 is a schematic diagram of the display interface of the query operating system according to Embodiment 1;
Fig. 4 is a structural block diagram of the server according to Embodiment 1;
Fig. 5 is a structural diagram of the query operating system on the terminal device according to Embodiment 1;
Fig. 6 is an example of model animation playback according to Embodiment 1;
Fig. 7 is an example of a 3D model animation according to Embodiment 1;
Fig. 8 is a flowchart of the dictionary query system according to Embodiment 1;
Fig. 9 is a flowchart of model animation playback in the query system according to Embodiment 1;
Fig. 10 is a flowchart of the playback of the model animation in step s15 according to Embodiment 1;
Fig. 11 is a flowchart of how the phonetic-symbol animation production unit synchronously generates the model animation of a word/phonetic symbol according to Embodiment 1;
Fig. 12 is a flowchart of the dictionary query operating system according to Embodiment 2 of the present invention;
Fig. 13 is a flowchart of model animation playback in the dictionary query system according to Embodiment 2;
Fig. 14 is another example of a model animation according to the present invention;
Fig. 15 is another flowchart of model animation playback according to the present invention;
Fig. 16 is another flowchart of model animation playback according to the present invention.
Specific embodiments
The present invention is described below with reference to the embodiments illustrated in the accompanying drawings. The embodiments disclosed here should be regarded in all respects as illustrative and not restrictive.
Embodiment 1:
Fig. 2 is a structural diagram of the dictionary query system 1 in this embodiment. As shown in Fig. 2, the dictionary query system 1 has the following parts: a terminal device 2 used by the user for queries, and a server 3 that processes the user's query commands. The terminal device 2 and the server 3 are connected via a communication network 4 such as the Internet or a wireless network and can exchange data. The terminal device 2 may be a mobile phone, a tablet, an electronic dictionary, a PC or a similar device.
Fig. 3 is a structural diagram of the dictionary query interface shown on the terminal device 2. As shown in Fig. 3, the dictionary query interface 5 on the terminal device 2 has a query input section 6, a content display section 7 and an animation play button 8 for learning assistance. The user enters the word to be queried in the query input section 6, and the terminal device 2 displays the content related to that word in the content display section 7. At the same time, the terminal device 2 displays the animation play button 8 on the dictionary query interface 5. When the user wants to learn the pronunciation of the word, clicking the animation play button 8 issues an instruction to the dictionary query system 1 to learn the pronunciation of the word; the dictionary query system 1 then obtains the model animation data according to that instruction and plays the model animation corresponding to the word, so the user can learn its pronunciation. The model animation data or model animation includes mouth shapes involving the tongue, oral cavity, teeth and lips, together with the sound information corresponding to those mouth shapes. Learning the pronunciation of a word is described in detail later.
Fig. 4 is a structural block diagram of the server 3 in this embodiment. As shown in Fig. 4, the server 3 is a computer comprising a main unit 300, an input unit 308 and a display unit 309. The main unit 300 includes a CPU 301, a ROM 302, a RAM 303, a hard disk 304, a reading device 305, an input/output interface 306, an image output interface 307 and a communication interface 310.
The CPU 301 executes the computer programs stored in the ROM 302 and the computer programs loaded into the RAM 303. The RAM 303 is used to read the computer programs stored in the ROM 302 and on the hard disk 304. When these computer programs are executed, the RAM 303 is also used as the workspace of the CPU 301.
The hard disk 304 stores the operating system, application programs and other computer programs executed by the CPU 301, the data needed to execute those programs, and the model animation data. In other words, the hard disk 304 stores the computer programs that make the computer perform the functions of the server of this embodiment.
The reading device 305 is a CD drive, DVD drive or the like and can read computer programs and data stored on a storage medium. The input unit 308, consisting of a keyboard, a mouse and the like, is connected to the input/output interface 306, and the user inputs data to the server 3 through the input unit 308. The image output interface 307 is connected to the display unit 309, which is a CRT, an LCD screen or the like, and outputs to the display unit 309 the video signal corresponding to the image data. The display unit 309 displays images based on the input video signal. In addition, the server 3 can exchange data with the terminal device 2 through the communication interface 310.
Fig. 5 is a structural diagram of the query operating system on the terminal device 2 in this embodiment. As shown in Fig. 5, the query operating system 10 includes a control unit 11, a communication unit 12, a recognition unit 13, an input unit 14, an animation playback unit 15, a camera unit 16, an operation unit 17 and a storage unit 19.
The control unit 11, communication unit 12, recognition unit 13, input unit 14, animation playback unit 15, camera unit 16, operation unit 17 and storage unit 19 are connected by a bus 18 so that the units can communicate with one another.
The control unit 11 can control the actions of the communication unit 12, recognition unit 13, input unit 14, animation playback unit 15, camera unit 16, operation unit 17, storage unit 19 and the other units.
The input unit 14 accepts the query content (for example a word) entered in the query input section 6. The recognition unit 13 identifies the query content accepted by the input unit 14 and forms the query instruction corresponding to that content. The communication unit 12 communicates with the server 3, sending and receiving the relevant instructions and data. The animation playback unit 15 plays the model animation corresponding to the query content; playing the model animation includes both displaying the animation frames and producing the pronunciation sound corresponding to those frames. The camera unit 16 captures the user's pronunciation (including the positions and movements of the user's mouth shape, tongue, oral cavity and/or teeth) while the user is studying the content entered through the input unit 14. The operation unit 17 is used to input the relevant operating instructions. The storage unit 19 stores the relevant data and information.
The control unit 11 can control the playback speed of the animation playback unit 15. In the model animation played by the animation playback unit 15, the user can see the positions and movements of the mouth shape, tongue, oral cavity and/or teeth during the correct pronunciation of the word, so that the user can visually observe how the word is articulated and can promptly and accurately correct their own articulation, improving learning efficiency.
For example, as shown in Fig. 3, when the user looks up the English word "silver", the animation play button 8 is displayed on the dictionary query interface 5 together with the explanatory content for the word. If the user wants to learn the pronunciation of "silver", clicking the animation play button 8 makes the terminal device 2 play the pronunciation model animation for the word, and the user can practise by imitating the positions and movements of the mouth shape, tongue, oral cavity and/or teeth shown in the model animation for the correct pronunciation of each phonetic symbol. As shown in Figs. 6a-6e, when the terminal device 2 plays the pronunciation model animation for "silver", it demonstrates the positions and movements of the mouth shape, tongue, oral cavity and/or teeth during the correct pronunciation of the basic phonetic symbols s-i-l-v-r, so that the user can intuitively observe and imitate the articulation of each basic phonetic symbol and practise pronunciation more effectively. As shown in Figs. 6f-6g, after learning the articulation of each basic phonetic symbol, the user can imitate the overall pronunciation of "silver" at different speeds.
Preferably, in Embodiment 1, when the user wants a more detailed understanding of the articulation of a particular basic phonetic symbol, the user can input an instruction to retrieve the articulation of that phonetic symbol through the operation unit 17. The control unit 11, according to the user's instruction, retrieves from the server 3 through the communication unit 12 the model animation of the articulation the user wants to study in detail, and the terminal device 2 plays it. For example, as shown in Fig. 7, the pronunciation model of the basic phonetic symbol can be a model animation, in particular a 3D model animation. This 3D model animation includes a continuously playable animation of a human head cross-section or see-through view, which shows in more detail the mouth shape, tongue position, lips and teeth during the correct pronunciation of the basic phonetic symbol, and how they change during articulation.
In Embodiment 1, the operation unit 17 consists of functional operation buttons/icons (not shown). For example, the operation buttons/icons of the operation unit 17 can include start/pause, rotate or zoom the animation, search, retrieve a model animation, speed-adjusted playback, switch model and/or repeat playback. When the user clicks a button/icon, the control unit 11 controls the corresponding unit according to the function of that button/icon. For example, when the user clicks the speed-adjustment button, the control unit 11 adjusts the playback rhythm of the animation and audio according to the user's setting, adjusting the playback speed of each word's model animation and pronunciation until it satisfies the user; when the user clicks the switch-model button, the control unit 11 switches the animation model, for example between model (a) and model (b) in Fig. 7, or between either of them and a see-through 3D model animation of the human head (not shown).
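As a rough illustration of how such button/icon presses might be dispatched to the control unit, here is a minimal sketch; the command names, the speed bounds and the view list are assumptions for illustration only, not taken from the patent:

```python
def handle_operation(command: str, state: dict) -> dict:
    """Hypothetical control-unit reaction to operation-unit buttons:
    play/pause, speed adjustment, model switching and repeat playback."""
    if command == "toggle_play":
        state["playing"] = not state["playing"]
    elif command == "speed_up":
        state["speed"] = min(2.0, state["speed"] + 0.25)   # assumed upper bound
    elif command == "switch_model":
        # cycle between the head model, the cross-section model and the see-through view
        views = ["head", "cross_section", "see_through"]
        state["view"] = views[(views.index(state["view"]) + 1) % len(views)]
    elif command == "repeat":
        state["repeat"] = True
    return state

state = {"playing": True, "speed": 1.0, "view": "head", "repeat": False}
for cmd in ("speed_up", "switch_model", "toggle_play"):
    state = handle_operation(cmd, state)
print(state)   # {'playing': False, 'speed': 1.25, 'view': 'cross_section', 'repeat': False}
```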
Fig. 8 is a workflow diagram of the dictionary query system 1 according to Embodiment 1. As shown in Fig. 8, after the user starts the dictionary query system 1 (step s1), the content to be queried is entered in the query input section 6 of the dictionary query interface 5 of the terminal device 2 (step s2). After the input unit 14 accepts the query content entered in the query input section 6, the recognition unit 13 identifies the query content accepted by the input unit 14 and forms the query instruction corresponding to that content (step s3), and the communication unit 12 sends the query instruction to the server 3 (step s4). After the server 3 receives the query instruction, the CPU 301 processes it, associates it with the related data information in the database of the server 3 (for example a storage device such as the hard disk 304), and retrieves the related data information stored in that database (step s5). The related data information includes the information explaining the queried content and the storage information for the corresponding model animation data (for example a storage address). The server 3 pushes the related data information to the terminal device 2 (step s6), and the terminal device 2 displays it (step s7). The interface displayed by the terminal device 2 includes the information explaining the queried content and the animation play button 8 associated with the corresponding model animation data storage information (as shown in Fig. 3). When the user wants to learn the pronunciation of the queried content, clicking the animation play button 8 plays the model animation data (model animation and sound) of the pronunciation of the queried content.
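A minimal sketch of this query flow (steps s2-s7), under the assumption of a simple in-memory dictionary standing in for the server database; all names and the data layout are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class QueryResult:
    word: str
    explanation: str          # information explaining the queried content
    animation_data_ref: str   # e.g. a storage address for the model animation data

# Hypothetical server-side store standing in for the database on server 3.
DICTIONARY_DB = {
    "silver": QueryResult(
        word="silver",
        explanation="a shiny greyish-white precious metal",
        animation_data_ref="animations/silver",
    ),
}

def form_query_instruction(raw_input: str) -> str:
    """Recognition unit (step s3): normalize the user's input into a query instruction."""
    return raw_input.strip().lower()

def server_lookup(query: str) -> QueryResult | None:
    """Server side (step s5): associate the query with stored data and retrieve it."""
    return DICTIONARY_DB.get(query)

def handle_query(raw_input: str) -> None:
    """Terminal side (steps s2-s4, s7): send the query and display the result
    together with an animation-play button bound to the animation data reference."""
    result = server_lookup(form_query_instruction(raw_input))  # steps s4-s6 over the network
    if result is None:
        print("No entry found.")
        return
    print(f"{result.word}: {result.explanation}")
    print(f"[Play pronunciation animation] -> {result.animation_data_ref}")

handle_query("Silver")
```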
Fig. 9 is a workflow diagram of the dictionary query system 1 playing a model animation according to Embodiment 1. As shown in Fig. 9, the user issues a play instruction by clicking the animation play button 8 displayed on the dictionary query interface 5 of the terminal device 2 (step s11), and the play instruction is sent to the server 3 (step s12). After the server 3 receives the play instruction, the CPU 301, according to the playback information contained in the instruction, retrieves the model animation data corresponding to the queried content from the database of the server 3 (for example a storage device such as the hard disk 304) (step s13) and pushes it to the terminal device 2 (step s14). The animation playback unit 15 obtains the model animation data corresponding to the content to be queried and plays it (step s15).
Fig. 10 is a flowchart of the playback of the model animation in step s15 according to Embodiment 1. As shown in Fig. 10, the animation playback unit 15 plays the relevant model animation based on the received model animation data (step s21). The control unit 11 then determines whether the user needs a clearer pronunciation model for a particular basic phonetic symbol (as shown in Fig. 7) (step s22). While the animation playback unit 15 is playing the model animation, the user can, as needed, operate the operation buttons/icons of the operation unit 17 to pause the playback and decide whether the pronunciation of some basic phonetic symbol in the queried content needs to be studied in more detail. If so, the user operates the corresponding button or icon in the operation unit 17 to retrieve, from the database of the server 3 (for example a storage device such as the hard disk 304), the model animation of the articulation of that phonetic symbol and obtain a clearer pronunciation model. For example, if the user wants to understand the detailed articulation of the phonetic symbol "l" in "silver", the user can operate the corresponding button or icon in the operation unit 17 to retrieve the model animation of that articulation, preferably a 3D model view. The user can also switch models as desired through the switch-model button of the operation unit 17, for example between model (a) and model (b) in Fig. 7, or between either of them and a see-through 3D model animation of the human head (not shown).
If the control unit 11 determines that the user needs a clearer pronunciation model for a particular basic phonetic symbol (for example a 3D model view) (step s22: yes), the animation playback unit 15 retrieves the clearer pronunciation model for that phonetic symbol from the database of the server 3 (step s23) and displays it on the screen of the terminal device (step s24). After the user has finished studying the clearer pronunciation model, the user can operate the corresponding button or icon in the operation unit 17 so that the animation playback unit 15 resumes playing the pronunciation model animation of the basic phonetic symbols. The control unit 11 then determines whether an instruction to continue playing the basic phonetic symbol pronunciation model animation has been received (step s25).
If the control unit 11 has not received an instruction to continue playing the basic phonetic symbol pronunciation model animation (step s25: no), the animation playback unit 15 waits. If the control unit 11 receives the instruction to continue (step s25: yes), the flow proceeds to step s26.
If the control unit 11 determines that the user does not need a clearer pronunciation model for a particular basic phonetic symbol (step s22: no), the flow proceeds directly to step s26.
In step s26, the control unit 11 determines whether the currently played basic phonetic symbol is the last basic phonetic symbol of the content queried by the user. For example, as shown in Figs. 6a-6e, if the control unit 11 determines that the currently played basic phonetic symbol "i" (see Fig. 6b) is not the last basic phonetic symbol of the queried content (step s26: no), the animation playback unit 15 continues with the next basic phonetic symbol "l" (see Fig. 6c) of the queried content.
If the control unit 11 determines that the currently played basic phonetic symbol is the last basic phonetic symbol of the queried content (step s26: yes), the animation playback unit 15 plays the overall pronunciation model animation of the queried content (step s27). For example, as shown in Figs. 6e-6g, if the control unit 11 determines that the currently played basic phonetic symbol "r" (see Fig. 6e) is the last basic phonetic symbol of the queried content, the animation playback unit 15 plays the overall pronunciation model animation of the queried content "silver" (see Figs. 6f-6g).
Embodiment 1 illustrates the overall pronunciation model animation of a single word, but the invention is not limited to this; it may also be the overall pronunciation model animation of a phrase or of a sentence.
In step s27, the animation playback unit 15 can play the overall continuous pronunciation model animation of the queried content at different playback speeds (see Figs. 6f-6g).
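The per-symbol playback loop of Fig. 10 (steps s21-s27) could be sketched roughly as follows; the function names, the phoneme list for "silver" and the speed levels are assumptions for illustration, not taken from the patent:

```python
def play_symbol_animation(symbol: str) -> None:
    print(f"playing model animation + audio for [{symbol}]")

def play_detailed_model(symbol: str) -> None:
    print(f"playing detailed (e.g. 3D cross-section) pronunciation model for [{symbol}]")

def play_whole_word(word: str, speed: float) -> None:
    print(f"playing continuous pronunciation animation of '{word}' at {speed}x speed")

def playback_flow(word: str, symbols: list[str], wants_detail) -> None:
    """Steps s21-s27: play each basic phonetic symbol in order, optionally pausing
    to show a clearer per-symbol model, then play the whole-word animation."""
    for symbol in symbols:
        play_symbol_animation(symbol)               # step s21
        if wants_detail(symbol):                    # step s22
            play_detailed_model(symbol)             # steps s23-s24
            # resume only after the user asks to continue (step s25), omitted here
    # after the last basic phonetic symbol (step s26: yes), play the whole word (step s27)
    for speed in (0.5, 1.0, 1.5):                   # assumed speed levels
        play_whole_word(word, speed)

playback_flow("silver", ["s", "i", "l", "v", "r"], wants_detail=lambda s: s == "l")
```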
Fig. 11 is a flowchart of the synchronous generation of the model animation of a word/phonetic symbol according to Embodiment 1. As shown in Fig. 11, the pronunciation information and model animation frames for all transitions between basic phonetic symbols are produced in advance and stored in the phonetic symbol audio library and phonetic symbol animation library in the database of the server 3 (step s31). For example, suppose the database of the server 3 contains 8 basic phonetic symbols, whose animation frames correspond to the numbers 1 to 8 and whose pronunciations correspond to 1' to 8'. The model animation frames for all transitions are then produced in advance as 1-to-2, 1-to-3, ..., 1-to-8, 2-to-1, 2-to-3, ..., 2-to-8, ..., 8-to-7, and the pronunciations of all transitions as 1'-to-2', 1'-to-3', ..., 1'-to-8', 2'-to-1', 2'-to-3', ..., 2'-to-8', ..., 8'-to-7'; the transition animation frames and audio are stored in the phonetic symbol animation library and phonetic symbol audio library of the server 3's database, respectively. The word whose pronunciation is to be learned is converted into its phonetic symbols (step s32). Using the transition pronunciation information and model animation frames produced in step s31, the transitions between the basic phonetic symbols are looked up in the phonetic symbol animation library and phonetic symbol audio library in the order of the word's phonetic symbols and played in that order. The model animation frames played in phonetic-symbol order and the corresponding pronunciation information are stored in the database of the server 3 (step s33).
For example, for the word "silver", the recognition unit 13 identifies it as the phonetic symbols [silvr] (as in Fig. 6a), forms a query instruction from [silvr], and looks up the models of the basic phonetic symbols and their transition states stored in the database of the server 3 one by one, and the animation playback unit 15 plays the phonetic symbols s-i-l-v-r in order; whenever a basic phonetic symbol is read, that symbol is highlighted, as shown in Figs. 6a-6e. Fig. 6(a) is a screenshot of the animation while the system reads the phonetic symbol [s]: the image shows the word "silver" being read, highlights the phonetic symbol [s] currently being read, and also shows the remaining unread phonetic symbols [silvr] and the mouth shape corresponding to [s]; Figs. 6b-6e show the animation while the system reads the phonetic symbols [i], [l], [v] and [r]. The word model generated in step s33 plays at a slow speed, as a symbol-by-symbol playback process. On the basis of the model generated in step s33, model training is performed to generate models in which the word's phonetic symbols are played continuously at different playback speeds, and these are stored in the database of the server 3 (step s34). Generating the continuously played phonetic-symbol model requires several rounds of matching, learning and calibration; the system can set several playback-speed levels according to the length of the word and play them in order. Preferably, four to five playback-speed models are generated for each word.
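As a rough sketch of the library construction in steps s31-s34 (pre-building transition animations and audio between every ordered pair of basic phonetic symbols, then sequencing them for a word), under the assumption of a small hard-coded symbol inventory and word-to-symbol mapping:

```python
from itertools import permutations

# Step s31: pre-build the transition animation and pronunciation for every ordered
# pair of basic phonetic symbols (the patent's example assumes 8 symbols, 1..8).
SYMBOLS = ["s", "i", "l", "v", "r", "a", "b", "k"]   # assumed 8-symbol inventory
TRANSITION_ANIMATIONS = {(a, b): f"anim_{a}_to_{b}" for a, b in permutations(SYMBOLS, 2)}
TRANSITION_AUDIO = {(a, b): f"audio_{a}_to_{b}" for a, b in permutations(SYMBOLS, 2)}

def word_to_symbols(word: str) -> list[str]:
    """Step s32: convert the word into its phonetic symbols (hard-coded example)."""
    return {"silver": ["s", "i", "l", "v", "r"]}[word]

def build_word_model(word: str) -> list[str]:
    """Step s33: look up the transitions in symbol order and concatenate them."""
    symbols = word_to_symbols(word)
    return [TRANSITION_ANIMATIONS[(a, b)] for a, b in zip(symbols, symbols[1:])]

def build_speed_variants(word: str, levels: int = 4) -> dict[float, list[str]]:
    """Step s34: generate continuous-playback models at several speed levels
    (the patent suggests four to five per word)."""
    base = build_word_model(word)
    return {round(0.5 + 0.25 * i, 2): base for i in range(levels)}

print(build_word_model("silver"))
print(list(build_speed_variants("silver")))
```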
During imitation practice, the user can also use the operation buttons of the operation unit 17 as needed to rotate, zoom or repeat the animation, or to switch the animation model, so that the user can see the model animation more clearly and learn more conveniently.
In the dictionary query system and query method of Embodiment 1, the database of the server 3 stores both the model in which the word's phonetic symbols are played one by one and the model in which they are played continuously, and during learning the user works through animations of different playback speeds one by one, but the invention is not limited to this. The user can also adjust the playback speed of the word-learning model through the speed-adjustment button of the operation unit 17 and practise at that specific speed, so that the system can meet the user's learning needs.
Embodiment 2:
In Embodiment 2, the dictionary query system 1 does not have the server 3; instead, the relevant functions of the server 3 are merged into the terminal device 2. That is, the terminal device 2 does not need to communicate with the server 3: the related data and information that were stored in the server 3 are stored directly in the storage unit 19 of the terminal device 2, the functions of the CPU 301 of the server 3 are integrated into the control unit 11, and the control unit 11 processes the related data and information directly. The other structures and functions of the terminal device 2 are the same as in Embodiment 1. The differences between Embodiment 2 and Embodiment 1 are described below.
Fig. 12 is a workflow diagram of the dictionary query system 1 according to Embodiment 2. As shown in Fig. 12, after the user starts the dictionary query system 1 (step s101), the content to be queried is entered in the query input section 6 of the dictionary query interface 5 of the terminal device 2 (step s102). After the input unit 14 accepts the query content entered in the query input section 6, the recognition unit 13 identifies the query content accepted by the input unit 14 and forms the query instruction corresponding to that content (step s103). The control unit 11 processes the query instruction, associates it with the related data information in the storage unit 19, and retrieves the related data information stored there (step s104). The related data information includes the information explaining the queried content and the storage information for the corresponding model animation data (for example a storage address). The terminal device 2 displays the retrieved related data information (step s105). The interface displayed by the terminal device 2 includes the information explaining the queried content and the animation play button 8 associated with the corresponding model animation data storage information (as shown in Fig. 3). When the user wants to learn the pronunciation of the queried content, clicking the animation play button 8 plays the model animation data (model animation and sound) of the pronunciation of the queried content.
Fig. 13 is a workflow diagram of the dictionary query system 1 playing a model animation according to Embodiment 2. As shown in Fig. 13, the user issues a play instruction by clicking the animation play button 8 displayed on the dictionary query interface 5 of the terminal device 2 (step s110). After the control unit 11 receives the play instruction, it retrieves, according to the playback information contained in the instruction, the model animation data corresponding to the queried content stored in the storage unit 19 (step s120). The animation playback unit 15 plays the received model animation data (step s130).
The other structures and functions in Embodiment 2 are the same as in Embodiment 1 and are not described again here.
The dictionary query system and query method according to the present invention can be applied not only to querying English but also to querying various languages such as Chinese, Japanese, French and German.
Preferably, in Embodiments 1 and 2, the recognition unit 13 can identify each basic phonetic symbol of the queried content, and the animation playback unit 15 can obtain the model animation and pronunciation information corresponding to the basic phonetic symbols identified by the recognition unit 13 and can play the model animation and pronunciation information either phonetic symbol by phonetic symbol or word by word. In this way, the related system of the present invention does not have to store pre-made model animations and pronunciation information: by identifying the basic phonetic symbols and combining the model animation and pronunciation information corresponding to each, it can play the model animation and pronunciation information corresponding to the content to be queried, which effectively saves storage resources in the related system and the cost of producing it.
Fig. 14 is another example of the model animation according to Embodiments 1 and 2, described here using the English word "black" as an example; other English words or words in other languages can be processed in the same way. As shown in Fig. 14, Figs. 14(a)-(d) correspond respectively to the pronunciation model, and to the detailed pronunciation model, of each basic phonetic symbol. When the user inputs the word "black", the recognition unit 13 identifies it as the phonetic symbols [blak], i.e. the basic phonetic symbols [b]-[l]-[a]-[k]. Based on this recognition result, the recognition unit 13 forms the query instructions corresponding to each basic phonetic symbol and sends them to the system control unit (the CPU 301 in Embodiment 1 or the control unit 11 in Embodiment 2). After receiving a query instruction, the system control unit retrieves the corresponding model from the storage unit (the hard disk 304 in Embodiment 1 or the storage unit 19 in Embodiment 2). For example, if the query instruction received by the system control unit corresponds to the basic phonetic symbol [b], that symbol is associated with the model of Fig. 14(a) and the corresponding pronunciation model is retrieved from storage; if it corresponds to [l], the symbol is associated with the model of Fig. 14(b); if it corresponds to [a], with the model of Fig. 14(c); and if it corresponds to [k], with the model of Fig. 14(d), in each case retrieving the corresponding pronunciation model from the storage unit. The retrieved models of Figs. 14(a)-(d) are then sent to the animation playback unit 15 and played in the order in which the basic phonetic symbols occur in the word.
If the user wants to understand the detailed articulation of the basic phonetic symbol [b], [l], [a] or [k] in "black", the user can operate the corresponding switch-model button in the operation unit 17 to send the system control unit an instruction to retrieve the detailed articulation of that symbol; based on this instruction, the system control unit associates the symbol with the model of Fig. 14(a), (b), (c) or (d) and retrieves the corresponding detailed pronunciation model from the storage unit.
When the basic phonetic symbol pronunciation model and/or the detailed pronunciation model is retrieved, the corresponding sound data can also be retrieved at the same time, so that the playback of the model and the playback of the sound are synchronized.
Fig. 15 is another workflow diagram of model animation playback according to the present invention. As shown in Fig. 15, after the system control unit (the CPU 301 in Embodiment 1 or the control unit 11 in Embodiment 2) receives a play instruction (step s201), it has the recognition unit 13 identify each basic phonetic symbol in the query content accepted through the input unit 14 and form the query instruction associated with each basic phonetic symbol (step s202), and the query instructions are sent to the system control unit (step s203). After receiving the query instructions, the system control unit processes them, associates them with the data of the related pronunciation models in the corresponding storage unit (for example a storage device such as the hard disk 304, or the storage unit 19), and retrieves the data of the related pronunciation models stored there (step s204). The system control unit pushes the data of the related pronunciation models to the animation playback unit 15 (step s205), and the animation playback unit 15 plays the pronunciation model of each basic phonetic symbol in turn, in the order in which the symbols occur in the query content (step s206).
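A compact sketch of this Fig. 15 flow (steps s201-s206), where each recognized basic phonetic symbol is associated with a stored pronunciation model and the models are played in symbol order; the mapping and names below are illustrative only:

```python
# Hypothetical store: each basic phonetic symbol maps to a stored pronunciation
# model (cf. Fig. 14(a)-(d) for the symbols of "black").
PRONUNCIATION_MODELS = {
    "b": "model_fig14_a",
    "l": "model_fig14_b",
    "a": "model_fig14_c",
    "k": "model_fig14_d",
}

def recognize_symbols(word: str) -> list[str]:
    """Recognition unit (step s202): identify each basic phonetic symbol,
    e.g. black -> [b]-[l]-[a]-[k] (hard-coded example)."""
    return {"black": ["b", "l", "a", "k"]}[word]

def play_in_order(word: str) -> None:
    """Steps s204-s206: retrieve the model (and its audio) for each symbol and play
    them in the order the symbols occur in the queried content."""
    for symbol in recognize_symbols(word):
        model = PRONUNCIATION_MODELS[symbol]                  # step s204: retrieve from storage
        print(f"playing {model} with audio for [{symbol}]")   # step s206

play_in_order("black")
```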
In the embodiments above, the model animations corresponding to each word and/or basic phonetic symbol, the basic phonetic symbol models and/or the detailed pronunciation models are produced in advance and stored in the corresponding storage unit. However, in the present invention it is also possible not to pre-produce and store these model animations, basic phonetic symbol models and/or detailed pronunciation models. Instead, after a 3D model of the human head has been built in advance (as shown in Fig. 7), a parameter instruction for determining the pronunciation of each basic phonetic symbol is set for that symbol; it controls the movement trajectory of each organ on the 3D model (for example the lips, tongue, teeth and oral cavity) within a specified time, with each parameter in the instruction calibrating the position of each organ at each point on the timeline during the pronunciation of the symbol. Each parameter in a basic phonetic symbol's parameter instruction includes timeline information and position information corresponding to organs such as the lips, tongue, teeth and/or oral cavity (for example information that determines the pronunciation, such as how far the lips are opened, the position of the tongue, the opening of the teeth and/or the size of the oral cavity). The parameters of each basic phonetic symbol can be set and stored in the corresponding storage unit in advance. This is described below with reference to Fig. 16.
Fig. 16 is another workflow diagram of model animation playback according to the present invention. As shown in Fig. 16, after the system control unit (the CPU 301 in Embodiment 1 or the control unit 11 in Embodiment 2) receives a play instruction (step s301), it retrieves the 3D model (see Fig. 7) from the storage unit (for example a storage device such as the hard disk 304, or the storage unit 19) (step s302), has the recognition unit 13 identify each basic phonetic symbol in the query content accepted through the input unit 14, and forms the query instruction associated with each basic phonetic symbol (step s303). The query instructions are sent to the system control unit (step s304). After receiving the query instructions, the system control unit processes them and retrieves the model parameters related to each basic phonetic symbol stored in the corresponding storage unit (for example a storage device such as the hard disk 304, or the storage unit 19) (step s305). Based on the retrieved model parameters, the system control unit controls the animation playback unit 15 to play the 3D model for each basic phonetic symbol in turn, in the order in which the symbols occur in the query content (step s306).
For example, the models in Fig. 14 need not be pre-produced and stored in the storage unit; instead they can be frames played on the animation playback unit 15 according to the model parameters set for the corresponding basic phonetic symbols. Specifically, for the basic phonetic symbols [b], [l], [a] and [k] in "black", timeline information and position information corresponding to organs such as the lips, tongue, teeth and oral cavity during the pronunciation of each symbol are set as the parameters of that symbol's model. When the animation playback unit 15 plays the pronunciation animation of the word "black", it only needs to retrieve the model parameters related to the basic phonetic symbols [b], [l], [a] and [k] and play them on the 3D model (as shown in Fig. 7); there is no need to produce and store an animation model for every basic phonetic symbol in advance, which further saves storage resources and production cost in the system. In addition, if the user wants to understand the detailed articulation of the basic phonetic symbol [b], [l], [a] or [k] in "black", the user can operate the corresponding button or icon in the operation unit 17 to switch between (a) and (b) in Fig. 7, or between either of them and a see-through 3D model animation of the human head (not shown).
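A minimal sketch of this parameter-driven alternative: instead of storing a pre-made animation per symbol, each basic phonetic symbol carries timeline and position parameters for the lips, tongue, teeth and oral cavity, which drive one shared 3D head model. The concrete parameter values, field names and scales below are invented for illustration and are not from the patent:

```python
from dataclasses import dataclass

@dataclass
class ArticulatorKeyframe:
    time_s: float         # point on the symbol's timeline
    lip_opening: float    # how far the lips are opened (assumed normalized scale)
    tongue_position: str  # e.g. "alveolar ridge", "low back"
    teeth_gap: float      # tooth opening (assumed normalized scale)
    cavity_volume: float  # oral cavity space (assumed normalized scale)

# Hypothetical per-symbol parameter sets; the patent stores such parameters in advance.
SYMBOL_PARAMETERS = {
    "b": [ArticulatorKeyframe(0.0, 0.0, "neutral", 0.1, 0.3),
          ArticulatorKeyframe(0.2, 0.6, "neutral", 0.4, 0.5)],
    "l": [ArticulatorKeyframe(0.0, 0.4, "alveolar ridge", 0.3, 0.4)],
}

def drive_3d_model(symbol: str) -> None:
    """Apply each keyframe to the shared 3D head model at its time point,
    so the organs follow the trajectory that produces the symbol's sound."""
    for kf in SYMBOL_PARAMETERS[symbol]:
        print(f"[{symbol}] t={kf.time_s:.1f}s lips={kf.lip_opening} "
              f"tongue={kf.tongue_position} teeth={kf.teeth_gap} cavity={kf.cavity_volume}")

for s in ("b", "l"):
    drive_3d_model(s)
```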
The scope of the present invention is not limited by the description of the embodiments above but is indicated only by the scope of the claims, and it includes all modifications having the same meaning as, and falling within, the scope of the claims.

Claims (14)

1. A model animation playback system, comprising:
an animation playback unit that plays the model animation corresponding to the content to be queried;
a control unit that can control the playback operation of the animation playback unit;
wherein the model animation is one of a model animation reflecting the overall pronunciation information of the content to be queried and a model animation reflecting the pronunciation information of each basic phonetic symbol in the content to be queried.
2. model system of animation play according to claim 1 it is characterised in that:
Described control unit, according to each basis phonetic symbol of content to be inquired about, is transferred and described each basis corresponding being used for of phonetic symbol Determine the parameter of basic phonetic symbol pronunciation, and be marked on according to each root based on animation broadcast unit described in described state modulator and wanted Model animation described in played in order in inquiry content.
3. The model animation play system according to claim 2, characterized in that the parameters include timeline information and position information corresponding to the face, tongue, teeth and/or oral cavity.
4. The model animation play system according to any one of claims 1 to 3, characterized in that:
the model animation is a 3D model animation of a human head, a perspective 3D model animation of a human head, or a 3D model animation of a human head cross-section; and
the control unit can switch, based on an instruction from the user, among the 3D model animation of the human head, the perspective 3D model animation of the human head, and the 3D model animation of the human head cross-section.
5. The model animation play system according to claim 2, characterized in that:
the model animation play system further comprises a storage unit that stores the model animation, the corresponding pronunciation information and the parameters.
6. A dictionary query system, comprising:
an input unit for inputting content to be queried;
a display unit for displaying information related to the content to be queried on a display interface;
an animation play instruction unit that issues an animation play instruction via an animation play identifier displayed on the display interface together with the information related to the content to be queried;
an animation playback unit that plays a model animation corresponding to the content to be queried according to the animation play instruction; and
a control unit capable of controlling the playback operation of the animation playback unit.
7. The dictionary query system according to claim 6, characterized in that it further comprises:
a recognition unit that identifies the content to be queried input by the user and forms a query instruction corresponding to the content to be queried;
wherein the display unit obtains and displays the information corresponding to the content to be queried based on the query instruction.
8. The dictionary query system according to claim 7, characterized in that:
the recognition unit can also identify the basic phonetic symbols of the content to be queried; and
the control unit can control the animation playback unit to play a model animation reflecting the overall pronunciation information of the content to be queried, or a model animation reflecting the pronunciation information of each basic phonetic symbol in the content to be queried.
9. The dictionary query system according to claim 8, characterized in that:
the control unit retrieves, for each basic phonetic symbol of the content to be queried identified by the recognition unit, the parameter corresponding to that basic phonetic symbol for determining its pronunciation, and, based on the parameters, controls the animation playback unit to play the model animation of each basic phonetic symbol in the order in which the basic phonetic symbols appear in the content to be queried.
10. The dictionary query system according to claim 9, characterized in that:
the parameters include timeline information and position information corresponding to the face, tongue, teeth and/or oral cavity.
11. The dictionary query system according to any one of claims 6 to 10, characterized in that:
the model animation is a 3D model animation of a human head or a 3D model animation of a human head cross-section; and
the control unit can switch, based on an instruction from the user, between the 3D model animation of the human head and the 3D model animation of the human head cross-section.
12. The dictionary query system according to any one of claims 6 to 10, characterized in that it further comprises: an operation unit through which the user inputs an instruction to retrieve a corresponding pronunciation model;
wherein the control unit, according to the instruction to retrieve a corresponding pronunciation model, controls the animation playback unit to retrieve the corresponding pronunciation model and play it.
13. The dictionary query system according to any one of claims 6 to 10, characterized in that:
the animation playback unit can play the overall pronunciation animation at different playback speeds.
14. The dictionary query system according to claim 9, characterized in that:
the dictionary query system further comprises a storage unit that stores the model animation, the corresponding pronunciation information and the parameters.
CN201610695282.0A 2016-08-19 2016-08-19 Model animation play system, and dictionary query system and method Pending CN106373174A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610695282.0A CN106373174A (en) 2016-08-19 2016-08-19 Model animation play system, and dictionary query system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610695282.0A CN106373174A (en) 2016-08-19 2016-08-19 Model animation play system, and dictionary query system and method

Publications (1)

Publication Number Publication Date
CN106373174A (en) 2017-02-01

Family

ID=57878970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610695282.0A Pending CN106373174A (en) 2016-08-19 2016-08-19 Model animation play system, and dictionary query system and method

Country Status (1)

Country Link
CN (1) CN106373174A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1707550A (en) * 2005-04-14 2005-12-14 张远辉 Establishment of pronunciation and articalation mouth shape cartoon databank and access method thereof
CN101488346A (en) * 2009-02-24 2009-07-22 深圳先进技术研究院 Speech visualization system and speech visualization method
CN101751809A (en) * 2010-02-10 2010-06-23 长春大学 Deaf children speech rehabilitation method and system based on three-dimensional head portrait
CN103258340A (en) * 2013-04-17 2013-08-21 中国科学技术大学 Pronunciation method of three-dimensional visual Chinese mandarin pronunciation dictionary with pronunciation being rich in emotion expression ability

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG Lan et al.: "Dynamic simulation of continuous articulation by a three-dimensional talking head", 《先进技术研究通报》 (Bulletin of Advanced Technology Research) *
XU Qin: "A study on the application of visual speech synthesis technology in English pronunciation tutoring", 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992195A (en) * 2017-12-07 2018-05-04 百度在线网络技术(北京)有限公司 A kind of processing method of the content of courses, device, server and storage medium
CN110213641A (en) * 2019-05-21 2019-09-06 北京睿格致科技有限公司 The micro- class playback method of 4D and device
CN112381913A (en) * 2020-10-20 2021-02-19 北京语言大学 Dynamic pronunciation teaching model construction method based on 3D modeling and oral anatomy
CN112381913B (en) * 2020-10-20 2021-06-04 北京语言大学 Dynamic pronunciation teaching model construction method based on 3D modeling and oral anatomy

Similar Documents

Publication Publication Date Title
US20240153401A1 (en) Facilitating a social network of a group of performers
US5613056A (en) Advanced tools for speech synchronized animation
CN207037604U (en) Interactive instructional system
US10372790B2 (en) System, method and apparatus for generating hand gesture animation determined on dialogue length and emotion
US20110319160A1 (en) Systems and Methods for Creating and Delivering Skill-Enhancing Computer Applications
CN109979497B (en) Song generation method, device and system and data processing and song playing method
JP2021103328A (en) Voice conversion method, device, and electronic apparatus
Rocchesso Explorations in sonic interaction design
CN106373174A (en) Model animation play system, and dictionary query system and method
Wang et al. Computer-assisted audiovisual language learning
JPH11109991A (en) Man machine interface system
KR100880613B1 (en) System and method for supporting emotional expression of intelligent robot and intelligent robot system using the same
CN110059224A (en) Video retrieval method, device, equipment and the storage medium of projector apparatus
JP2003521005A (en) Device for displaying music using a single or several linked workstations
CN106354767A (en) Practicing system and method
Olmos et al. A high-fidelity orchestra simulator for individual musicians’ practice
CN101840640B (en) Interactive voice response system and method
CN110969237B (en) Man-machine virtual interaction construction method, equipment and medium under amphiprotic relation view angle
JP2008032788A (en) Program for creating data for language teaching material
Perkins Interactive sonification of a physics engine
WO2024029135A1 (en) Display rpogram, display method, and display system
WO2023002300A1 (en) Slide playback program, slide playback device, and slide playback method
Thompson Expressive gestures in piano performance
CN117980872A (en) Display program, display method, and display system
Moriaty Collective Controllerism: A Non-Musician's Perspective of Interactive Dance as Controllerist Practice

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20170201)