CN1604083A - Method and system for proceeding image synchronous broadcast on hand held electronic devices - Google Patents

Method and system for proceeding image synchronous broadcast on hand held electronic devices

Info

Publication number
CN1604083A
CN1604083A · CN 200410073202 · CN200410073202A
Authority
CN
China
Prior art keywords
language
data
image
version
database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 200410073202
Other languages
Chinese (zh)
Inventor
陈淮琰
赵珺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inventec Besta Xian Co Ltd
Original Assignee
Inventec Besta Xian Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inventec Besta Xian Co Ltd filed Critical Inventec Besta Xian Co Ltd
Priority to CN 200410073202 priority Critical patent/CN1604083A/en
Publication of CN1604083A publication Critical patent/CN1604083A/en
Pending legal-status Critical Current

Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

This invention relates to a method and system for synchronous image playback on a hand-held electronic device. The method comprises the following steps: first, a selection of the language display data and voice playback data required for the image data is received; second, the corresponding language version is read from a language database and a display correspondence is established between each item of image data and each language version; third, the corresponding dubbing version is read from a voice database and a playback correspondence is established between each language version and each dubbing version; fourth, the image data is integrated with its corresponding language data and voice data for display and playback.

Description

Method and system for synchronous image playback on a hand-held electronic device
1. Technical Field
The present invention relates to an image playback method and system, and more particularly to a method and system for synchronous image playback on a hand-held electronic device.
2. Background Art
With the rapid development of electronic information technology, the convenience offered by hand-held electronic devices has made them increasingly popular. At the same time, users' expectations of the functions such devices can provide keep rising. Whether future palm-sized consumer electronics can be guided by user demand and deliver higher performance and more functionality has therefore become a key factor in whether a hand-held electronic device succeeds in the market.
Current hand-held electronic devices mainly include electronic dictionaries, handheld personal computers (HPC) and personal digital assistants (PDA). A common trait of these devices is that an e-book (including animation) is produced with only one language version of subtitle display and the matching voice playback. That is, each e-book is made as a complete package with its corresponding language and voice: when the animation is produced, the subtitles and voice playback it requires, for example Chinese subtitles and Chinese dubbing, are embedded in each item of animation data. Unlike a film, the embedded Chinese subtitles and Chinese dubbing are not played back in real-time synchronization with the animation data; they are mechanically inserted as pop-up boxes, so the synchronization between the animation data and its accompanying language and voice is nearly static, which leaves users dissatisfied with present e-books. Moreover, if an existing e-book (including animation) is to be switched to a language environment of another version, for example English subtitles and English dubbing, a whole new e-book of that language version must be produced, and during production the animation data must be laid out again against the new language display and voice playback before the e-book can play properly. Even for electronic reading material that offers a bilingual or richer language environment, a user who wants to switch from the Chinese environment to the English one must exit the Chinese interface and launch the English interface again. The main reason is that known e-books package the animation data, language display and voice playback as a single unit instead of modularizing them separately so that e-books of different versions can share the same language display and voice playback data. Consequently every e-book must be tailor-made for different users, and once any part of it, whether animation data, language display or voice playback, is replaced, the whole product is affected, which wastes both time and effort.
In addition, users expect e-books to offer language-learning functions: while the animation is displayed, subtitles in a language the user already knows, such as Chinese, are shown, while the dubbing is in the language to be learned, such as English, or any other combination, for example Chinese subtitles with Japanese dubbing, English subtitles with Chinese dubbing, or English subtitles with Japanese dubbing, making the e-book an ideal and powerful language-learning tool. If such an e-book were produced in the conventional way, combining the animation data with every required combination of language display and voice playback into a separate e-book version for each user, production would waste time and effort, the cost would rise accordingly, and the reusability of the e-book would drop sharply.
3. Summary of the Invention
The present invention solves the technical problem in the background art that, when an e-book whose language display and voice playback come in different combinations is produced for a hand-held electronic device, an e-book of a different version must be made for each combination, and the different combinations of language display and voice playback corresponding to the animation data in the e-book cannot be played in synchronization.
The technical solution of the present invention is a method for synchronous image playback on a hand-held electronic device, whose special feature is that it comprises the following steps:
1) receiving a selection of the language display data and the voice playback data required for the image data;
2) reading the corresponding language version from a language database according to the selected language display data, and establishing a display correspondence between each item of image data and each language version corresponding to that image data, each language version being taken from the language data of the language database;
3) reading the corresponding dubbing version from a voice database according to the selected voice playback data, and establishing a playback correspondence between the language data of each language version and each dubbing version corresponding to that language data, each dubbing version being taken from the voice data of the voice database;
4) integrating the image data with its corresponding language data and voice data for playback.
In step 2) above, the display correspondence between each item of image data and each language version corresponding to that image data may be established according to a time-information correspondence, an address-information correspondence, or a correspondence over a plurality of indexed data entries.
In step 3) above, the playback correspondence between the language data of each language version and each dubbing version corresponding to that language data may be established in the same way.
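Purely as an illustrative sketch of the claimed steps (the patent does not prescribe any programming language or data layout; every name below is hypothetical), the four steps can be pictured as a small routine that looks up a language version and a dubbing version in two independent databases, builds the two correspondences, and hands the integrated result to the output devices:

```python
# Hypothetical sketch of the four claimed steps; not the patent's implementation.

def play_image(image_db, language_db, voice_db, image_id, language, dubbing):
    pages = image_db[image_id]                       # step 1: the chosen image data

    subtitles = language_db[language]                # step 2: read the language version
    display_map = {page: subtitles[page] for page in pages}

    voices = voice_db[dubbing]                       # step 3: read the dubbing version
    # In the patent the playback correspondence links each language entry to a dubbing entry.
    playback_map = {page: voices[page] for page in pages}

    for page in pages:                               # step 4: integrate and play synchronously
        output(pages[page], display_map[page], playback_map[page])

def output(frame, subtitle, voice_clip):
    """Stands in for the display unit and the playback unit."""
    print(frame, subtitle, voice_clip)

image_db = {"ebook1": {"A": "frame-A", "B": "frame-B"}}
language_db = {"cht": {"A": "中文字幕A", "B": "中文字幕B"}}
voice_db = {"eng": {"A": "voice-A.ogg", "B": "voice-B.ogg"}}
play_image(image_db, language_db, voice_db, "ebook1", "cht", "eng")
```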
An image playback system for carrying out the above playback method has as its special feature that the system comprises a data storage unit, a data processing unit that establishes the correspondences among an image database, a language database and a voice database, and a central processing unit. The data storage unit comprises an image database for storing a plurality of image data, a language database for storing a plurality of language data that are related to the image data and have different language versions, and a voice database for storing a plurality of voice data that are related to the image data, the different language versions respectively corresponding to different dubbing versions. The data processing unit comprises an image-language establishing module for establishing the display correspondence between each item of image data and each language version corresponding to that image data, and an image-voice establishing module for establishing the playback correspondence between the language data of each language version and each dubbing version corresponding to that language data. The central processing unit comprises a selection accepting module and an image providing module that reads the corresponding data from the image database, the language database and the voice database through the image-language establishing module and the image-voice establishing module, and further processes the image data according to the language display data received by the selection accepting module. The data storage unit and the data processing unit are each connected to the central processing unit.
The above central processing unit also comprises a language selection switching module and a voice selection switching module.
The above central processing unit is also connected to a user interface.
The above central processing unit is also connected to a display unit.
The above central processing unit is also connected to a playback unit.
In the image playback system and method of the present invention, the image database, the language database and the voice database are stored independently of one another. The language database contains, for example, a Chinese language version, an English language version and a Japanese language version, and the voice database contains, for example, a Chinese dubbing version, an English dubbing version and a Japanese dubbing version. The user can reuse these databases, and the versions contained in the language database and the voice database can be freely combined, for example Chinese subtitles with English dubbing, Chinese subtitles with Japanese dubbing, English subtitles with Chinese dubbing, or English subtitles with Japanese dubbing. Moreover, the present invention establishes real-time synchronous correspondences among the image database, the language database and the voice database through the image-language establishing module and the image-voice establishing module, so that during playback the image data and its corresponding language display and voice playback achieve a film-like, fully synchronized audio-visual effect.
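To make the independence of the three databases concrete, here is a minimal sketch (hypothetical names and layout, not taken from the patent) in which adding a French language version touches only the language database, while every existing image can immediately be paired with it:

```python
from dataclasses import dataclass, field

@dataclass
class ImageDatabase:
    pages: dict = field(default_factory=dict)      # page id -> animation data

@dataclass
class LanguageDatabase:
    versions: dict = field(default_factory=dict)   # language version -> {page id -> subtitle}

@dataclass
class VoiceDatabase:
    versions: dict = field(default_factory=dict)   # dubbing version -> {page id -> audio clip}

image_db = ImageDatabase({"A": "animation page A"})
lang_db = LanguageDatabase({"cht": {}, "eng": {}, "jp": {}})
voice_db = VoiceDatabase({"cht": {}, "eng": {}, "jp": {}})

# 3 subtitle versions x 3 dubbing versions already give 9 e-book versions.
combinations = [(s, d) for s in lang_db.versions for d in voice_db.versions]

# Adding a new language version only touches the language database;
# the image database and the voice database stay unchanged.
lang_db.versions["fr"] = {"A": "sous-titres A"}
```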
4. Description of the Drawings
Fig. 1 is an architecture diagram of the image playback system of the present invention;
Fig. 2 shows the synchronous correspondence between image data and language data in an embodiment of the invention;
Fig. 3 shows the synchronous correspondence between image data and voice data in an embodiment of the invention.
5. Embodiments
The method flow of the present invention is as follows:
1) receiving a selection of the language display data and the voice playback data required for the image data;
2) reading the corresponding language version from a language database according to the selected language display data, and establishing a display correspondence between each item of image data and each language version corresponding to that image data, each language version being taken from the language data of the language database;
3) reading the corresponding dubbing version from a voice database according to the selected voice playback data, and establishing a playback correspondence between the language data of each language version and each dubbing version corresponding to that language data, each dubbing version being taken from the voice data of the voice database;
4) integrating the image data with its corresponding language data and voice data for playback.
Referring to Fig. 1, the image playback method of the present invention can be applied in the image playback system of the present invention. The image playback system is mainly suitable for hand-held electronic products such as personal digital assistants (PDA), handheld personal computers (HPC), electronic dictionaries and mobile phones.
The image playback system of the present invention comprises a user interface 11, a main storage unit 13, a central processing unit 15, a display unit 17 and a playback unit 19.
The user interface 11 provides the operating interface between the user and the image playback system; through this interface the user operates the image playback system as required. That is, the user issues an instruction to the user interface 11, and the user interface 11 receives it. For example, the user selects Chinese language display and English voice playback for the image data through an input device 110. The input device 110 may be a keyboard, a mouse or a stylus, or any other electronic device that can receive the user's instructions and interact with the e-book.
The main storage unit 13 is any memory that stores the corresponding data required by the user, for example a read-only memory (ROM) or a random access memory (RAM). It stores the data requested by instructions from the user interface 11 and performs the relevant data processing according to further instructions transmitted by the user interface 11. It comprises a data storage unit 130 and a data processing unit 131.
The data storage unit 130 comprises an image database 132, a language database 134 and a voice database 136. The image database 132 stores a plurality of image data; an item of image data may be a series of associated animation data pages, partially associated animation data pages, or animation data pages stored as independent files. The user can add new animation data pages to the image database 132 and so enlarge the image content.
The language database 134 contains language data of a plurality of different language versions, for example a Chinese language version (i.e. Chinese subtitles), an English language version and a Japanese language version, and the language versions are stored separately from one another. Each independently stored language version can therefore be shared by the above image data and reused by the user. In addition, the user can add or update language data, for example French, German or Italian language versions, to increase the content of the language database.
Likewise, the voice database 136 contains a plurality of voice data in which the different language versions respectively correspond to different dubbing versions, for example a Chinese dubbing version, an English dubbing version and a Japanese dubbing version, and the dubbing versions are stored separately from one another. Each independently stored dubbing version can therefore be shared by the above image data and reused by the user. In addition, the user can add or update voice data (i.e. dubbing), for example French, German or Italian voice versions, to increase the content of the voice database.
The above three classes of databases are produced independently of one another and are not coupled: replacing the language data in the e-book affects neither the image data nor the voice data, and replacing the voice data affects neither the image data nor the language data. The user can therefore select different combinations of language data and voice data for the same image data. Besides the basic same-language combinations, such as Chinese subtitles with Chinese dubbing or English subtitles with English dubbing, the user can also, for language-learning purposes, choose any combination, such as Chinese subtitles with English dubbing, Chinese subtitles with Japanese dubbing, English subtitles with Chinese dubbing, or English subtitles with Japanese dubbing. The same content in the language database and the voice database can thus be reused, which greatly increases the reusability of the e-book. Furthermore, an e-book of the present invention can integrate or update new language versions, for example the language and voice data of a French or Italian version, which makes the e-book more flexible and also helps it offer new language services.
The data processing unit 131 comprises an image-language establishing module 133 and an image-voice establishing module 135. The image-language establishing module 133 establishes the display correspondence between each item of image data and each language version corresponding to that image data: an item of image data may correspond to a Chinese, an English or a Japanese language display, and the display of each image data with each language version can reach a film-like synchronous correspondence. The synchronous correspondence may be a time-information correspondence, an address-information correspondence or a correspondence over a plurality of indexed data entries, so that the image data and the language display are dynamically synchronized as in a film.
The image-voice establishing module 135 establishes the playback correspondence between the language data of each language version and each dubbing version corresponding to that language data. The Chinese language display corresponding to an item of image data may have Chinese, English or Japanese dubbing; the corresponding English language display may likewise have Chinese, English or Japanese dubbing; and so may the corresponding Japanese language display. The Chinese, English and Japanese language displays in the language database and the Chinese, English and Japanese dubbings in the voice database can therefore be combined into nine versions of the e-book. If a language version is added to (or replaced in) the language database, or a voice version is added to (or replaced in) the voice database, or one version is added to (or replaced in) each, e-books of even more versions are obtained. Here, too, the synchronous correspondence may be a time-information correspondence, an address-information correspondence or a correspondence over a plurality of indexed data entries, so that the image data and the language display are dynamically synchronized as in a film. Any other synchronous correspondence that achieves real-time synchronization between the image data and the playback of its corresponding voice is equally applicable to the image playback system of the present invention.
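One way to picture the two establishing modules is sketched below with hypothetical structures: the patent leaves the concrete form of the mapping tables open (time information, address information or indexed entries are all allowed), so this sketch simply uses timestamped entries that tie each animation page first to a subtitle and then to the matching dubbing clip:

```python
# Hypothetical time-information correspondence; address-information or
# index-range correspondences would work just as well per the description.

def build_display_map(image_pages, language_version):
    """Image-language establishing module 133: image data -> subtitle entries."""
    return [
        {"page": p["id"], "start": p["start"], "end": p["end"],
         "subtitle": language_version[p["id"]]}
        for p in image_pages
    ]

def build_playback_map(display_map, dubbing_version):
    """Image-voice establishing module 135: language entries -> dubbing clips."""
    return [dict(entry, voice=dubbing_version[entry["page"]]) for entry in display_map]

pages = [{"id": "A", "start": 0.0, "end": 3.2}, {"id": "B", "start": 3.2, "end": 7.5}]
chinese_subtitles = {"A": "你好", "B": "再見"}
english_dubbing = {"A": "hello.ogg", "B": "goodbye.ogg"}

playback_map = build_playback_map(build_display_map(pages, chinese_subtitles),
                                  english_dubbing)
```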
The central processing unit 15 sits between the user interface 11 and the main storage unit 13. It receives the instructions transmitted by the user interface 11 and processes them internally, and it further controls the main storage unit 13, including the interaction between the data storage unit 130 and the data processing unit 131 inside the main storage unit 13.
The central processing unit 15 also comprises a selection accepting module 152 and an image providing module 156. The selection accepting module 152 receives the image data selection from the user interface 11 and passes the selected Chinese language display data and English voice playback data into the central processing unit 15. When the image providing module 156 receives the request from the selection accepting module 152, it calls, for this image data, Chinese language display data and English voice playback data, the routines of the image-language establishing module 133 and the image-voice establishing module 135 in the main storage unit 13: the image-language establishing module 133 establishes the display correspondence between the image data and the Chinese language data, and the image-voice establishing module 135 establishes the playback correspondence between the Chinese language data and the English dubbing version. The image providing module 156 then inserts the display mapping table established by the image-language establishing module 133 into the image data and reads the Chinese-version language display from the language database 134, and inserts the playback mapping table established by the image-voice establishing module 135 into the image data and reads the English-version dubbing from the voice database 136. The image data thus read, together with its corresponding Chinese language display and English dubbing, is integrated and returned to the image providing module 156, which then forwards it to the display unit 17 and the playback unit 19 so that the integrated image data and voice data are output.
The central processing unit 15 of the present invention also comprises a language selection switching module 153 and a voice selection switching module 155. The language selection switching module 153 is used when the user requests that the image data currently being played be displayed in another language. For example, while the image data is being displayed with Chinese as the language interface, the user may at any time choose to switch to English as the display interface. In that case the central processing unit 15 gives priority to the English display request received by the language selection switching module 153 and forwards it to the image providing module 156, which carries out the language display processing described above. When the user wants to switch the language display interface, it is therefore no longer necessary, as before, to exit the current language interface first and select again. Likewise, the voice selection switching module 155 is used when the user requests that the image data currently being played be played with another voice. For example, while image data A is being played with English dubbing, the user may choose to switch to Japanese dubbing; the central processing unit 15 gives priority to the Japanese playback request received by the voice selection switching module 155 and forwards it to the image providing module 156, which carries out the voice playback processing described above. Again, the user need not exit the current playback interface first and select again. In addition, the selection accepting module 152 of the present invention can also receive, in real time, a request to switch both the language display and the voice playback of the image data A currently being played, and pass the received language display and voice playback requests at any time to the image providing module 156, which controls the operation of each unit. In other words, the selection accepting module 152 can also serve simultaneously as the language selection switching module 153 and the voice selection switching module 155.
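The switching behaviour can be sketched as follows (hypothetical names, not the patent's API): each switching module simply replaces the active version and re-invokes the image providing module from the current playback position, so the user never has to leave the running interface:

```python
class PlaybackSession:
    """Minimal sketch of mid-playback language/voice switching; not the patent's API."""

    def __init__(self, provider, language="cht", dubbing="eng"):
        self.provider = provider      # stands in for image providing module 156
        self.language = language
        self.dubbing = dubbing
        self.position = 0.0           # current playback position in seconds

    def switch_language(self, language):
        # language selection switching module 153: no need to exit the interface
        self.language = language
        self.provider(self.language, self.dubbing, resume_at=self.position)

    def switch_voice(self, dubbing):
        # voice selection switching module 155
        self.dubbing = dubbing
        self.provider(self.language, self.dubbing, resume_at=self.position)

def provide(language, dubbing, resume_at):
    print(f"resume at {resume_at}s with {language} subtitles and {dubbing} dubbing")

session = PlaybackSession(provide)
session.switch_language("eng")   # e.g. Chinese -> English subtitles, playback continues
session.switch_voice("jp")       # e.g. English -> Japanese dubbing
```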
The display unit 17 displays the language data according to the language display interface received by the selection accepting module 152. Taking Chinese language display as an example, the image-language establishing module 133 in the main storage unit 13 reads the Chinese language version from the language database 134 and passes it to the image providing module 156, which drives the display unit 17 to show the Chinese language version; that is, the display unit 17 receives and displays the provided image data together with the Chinese language version corresponding to it. The playback unit 19 plays the voice data according to the voice playback interface received by the selection accepting module 152. Taking English voice playback as an example, the image-voice establishing module 135 in the main storage unit 13 reads the English dubbing data from the voice database 136 and passes it to the image providing module 156, which drives the playback unit 19 to play the English dubbing; that is, the playback unit 19 receives and outputs the provided image data together with the English dubbing corresponding to it.
When the image playback system of the present invention is used, the user first selects an animation data page through the input device 110, for example a mouse, and the system checks whether language data and voice data of different versions are stored. If not, the image playback system outputs the animation data page directly with the preset default values. If language data and voice data of different versions are stored, the user issues an instruction through the user interface 11, for example a Chinese language display option and an English voice playback option. After the user interface 11 receives the instruction, it transmits it to the central processing unit 15. When the selection accepting module 152 in the central processing unit 15 receives the Chinese language display option and the English voice playback option transmitted by the user interface 11 (if the animation data page is already playing when the instruction from the user interface 11 arrives, the instruction is received with priority by the language selection switching module 153 and the voice selection switching module 155 and then handled as follows), the image providing module 156 calls the data processing unit 131. Upon receiving the call, the data processing unit 131 has the image-language establishing module 133 establish the synchronous correspondence between the animation data page and the Chinese language display, and the image-voice establishing module 135 establish the synchronous correspondence between the animation data page and the English voice playback. The image providing module 156 then inserts into the animation data page the real-time mapping tables thus established among the animation data page, its corresponding Chinese language display and the English voice playback, reads the animation data page from the image database 132, the corresponding Chinese language display from the language database 134 and the corresponding English voice playback from the voice database 136, and outputs them through the display unit 17 and the playback unit 19 respectively.
Referring to Fig. 2, suppose a series of image data is composed of animation data page A, animation data page B, animation data page C, and so on, where animation data page A is composed of the plurality of data entries [10-12], animation data page B of the entries [15-20] and animation data page C of the entries [25-30]; the mapping of each animation data page may differ according to the actual situation. A field of the entries [10-12] corresponding to animation data page A can record the information of the different language versions, for example [cht] for the Traditional Chinese language version, [eng] for the English language version and [jp] for the Japanese language version. The language versions stand side by side (i.e. they are independent of one another), and the user may select any one of them. Note also that the image-language establishing module 133 only establishes the display relation between the animation data page and its corresponding language; the actual language content (that is, the Chinese subtitles, the English language content and the Japanese language content) is designated by animation data page A. The display relation between an animation data page and its corresponding language can be realized by a lookup table that establishes the positional correspondence, by a path expression giving the location of the language display of the animation data page in the language database, or by any other suitable logical relation.
Referring to Fig. 3, the synchronous correspondence over the plurality of entries between the image data and the voice data is established indirectly by the image-voice establishing module 135, which establishes the synchronous correspondence between the language data of each language version and the playback of each dubbing version corresponding to that language data. Specifically, the Traditional Chinese language version corresponding to animation data page A may have Chinese, English or Japanese dubbing; so may the corresponding English language version and the corresponding Japanese language version. A field of the entries [10-12] corresponding to animation data page A can therefore record the shared Chinese dubbing information, English dubbing information and Japanese dubbing information, so that a synchronous correspondence over the plurality of entries is also established between the image data and the voice data. Similarly, a field of the entries [15-20] corresponding to animation data page B and a field of the entries [25-30] corresponding to animation data page C can each record the shared Chinese, English and Japanese dubbing information. The dubbing versions stand side by side (i.e. they are independent), and the user may select any one of them. Again, the image-voice establishing module 135 only establishes the playback relation between the animation data page and its corresponding voice; the actual Chinese dubbing content, English dubbing content and Japanese dubbing content are designated by the field of animation data page A. The playback relation between an animation data page and its corresponding voice can be realized by a lookup table that establishes the positional correspondence, by a path expression giving the location of the voice playback in the voice database, or by any other suitable logical relation.
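The lookup-table arrangement of Figs. 2 and 3 could be recorded, to give just one illustrative guess at the "suitable logical relations" the text allows, as one record per animation data page whose entry range carries parallel subtitle references and parallel dubbing references (all paths below are hypothetical):

```python
# Hypothetical lookup table in the spirit of Figs. 2 and 3; the patent also
# allows path expressions into the databases or other logical relations.
lookup_table = {
    "A": {"entries": range(10, 13),   # data entries [10-12]
          "subtitles": {"cht": "lang/cht/A.txt", "eng": "lang/eng/A.txt", "jp": "lang/jp/A.txt"},
          "dubbing":   {"cht": "voice/cht/A.ogg", "eng": "voice/eng/A.ogg", "jp": "voice/jp/A.ogg"}},
    "B": {"entries": range(15, 21),   # data entries [15-20]
          "subtitles": {"cht": "lang/cht/B.txt", "eng": "lang/eng/B.txt", "jp": "lang/jp/B.txt"},
          "dubbing":   {"cht": "voice/cht/B.ogg", "eng": "voice/eng/B.ogg", "jp": "voice/jp/B.ogg"}},
}

def resolve(page_id, language, dubbing):
    """Return the subtitle and dubbing references selected for one animation page."""
    record = lookup_table[page_id]
    return record["subtitles"][language], record["dubbing"][dubbing]

print(resolve("A", "cht", "eng"))   # Traditional Chinese subtitles with English dubbing
```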

Claims (8)

1. A method for synchronous image playback on a hand-held electronic device, characterized in that the method comprises the following steps:
1) receiving a selection of the language display data and the voice playback data required for the image data;
2) reading the corresponding language version from a language database according to the selected language display data, and establishing a display correspondence between each item of image data and each language version corresponding to that image data, each language version being taken from the language data of the language database;
3) reading the corresponding dubbing version from a voice database according to the selected voice playback data, and establishing a playback correspondence between the language data of each language version and each dubbing version corresponding to that language data, each dubbing version being taken from the voice data of the voice database;
4) integrating the image data with its corresponding language data and voice data for playback.
2. The method for synchronous image playback on a hand-held electronic device according to claim 1, characterized in that in said step 2) the display correspondence between each item of image data and each language version corresponding to that image data is established according to a time-information correspondence, an address-information correspondence or a correspondence over a plurality of indexed data entries.
3. The method for synchronous image playback on a hand-held electronic device according to claim 1, characterized in that in said step 3) the playback correspondence between the language data of each language version and each dubbing version corresponding to that language data is established according to a time-information correspondence, an address-information correspondence or a correspondence over a plurality of indexed data entries.
4. An image playback system for carrying out the playback method according to claim 1, characterized in that the system comprises a data storage unit, a data processing unit that establishes the correspondences among an image database, a language database and a voice database, and a central processing unit; the data storage unit comprises an image database for storing a plurality of image data, a language database for storing a plurality of language data that are related to the image data and have different language versions, and a voice database for storing a plurality of voice data that are related to the image data, the different language versions respectively corresponding to different dubbing versions; the data processing unit comprises an image-language establishing module for establishing the display correspondence between each item of image data and each language version corresponding to that image data, and an image-voice establishing module for establishing the playback correspondence between the language data of each language version and each dubbing version corresponding to that language data; the central processing unit comprises a selection accepting module and an image providing module that reads the corresponding data from the image database, the language database and the voice database through the image-language establishing module and the image-voice establishing module, and further processes the image data according to the language display data received by the selection accepting module; and the data storage unit and the data processing unit are each connected to the central processing unit.
5. The image playback system according to claim 4, characterized in that said central processing unit further comprises a language selection switching module and a voice selection switching module.
6. The image playback system according to claim 4 or 5, characterized in that said central processing unit is further connected to a user interface.
7. The image playback system according to claim 6, characterized in that said central processing unit is further connected to a display unit.
8. The image playback system according to claim 7, characterized in that said central processing unit is further connected to a playback unit.
CN 200410073202 2004-10-27 2004-10-27 Method and system for proceeding image synchronous broadcast on hand held electronic devices Pending CN1604083A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 200410073202 CN1604083A (en) 2004-10-27 2004-10-27 Method and system for proceeding image synchronous broadcast on hand held electronic devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 200410073202 CN1604083A (en) 2004-10-27 2004-10-27 Method and system for proceeding image synchronous broadcast on hand held electronic devices

Publications (1)

Publication Number Publication Date
CN1604083A true CN1604083A (en) 2005-04-06

Family

ID=34666873

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 200410073202 Pending CN1604083A (en) 2004-10-27 2004-10-27 Method and system for proceeding image synchronous broadcast on hand held electronic devices

Country Status (1)

Country Link
CN (1) CN1604083A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1916885B (en) * 2005-08-19 2011-01-12 深圳市朗科科技股份有限公司 Method for synchronous playing image, sound, and text

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication