TW200805250A - Language-auxiliary expression system and method - Google Patents

Language-auxiliary expression system and method

Info

Publication number
TW200805250A
TW200805250A TW95124762A
Authority
TW
Taiwan
Prior art keywords
data
language
scene data
scope
keyword
Prior art date
Application number
TW95124762A
Other languages
Chinese (zh)
Inventor
Chaucer Chiu
Lei Sun
Original Assignee
Inventec Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inventec Corp filed Critical Inventec Corp
Priority to TW95124762A priority Critical patent/TW200805250A/en
Publication of TW200805250A publication Critical patent/TW200805250A/en

Links

Abstract

This invention provides a language-auxiliary expression system and method, implemented by collecting the user's speech information, identifying whether the captured speech contains preset feature keywords, and retrieving and displaying the scene data corresponding to the identified feature keywords. By integrating multimedia techniques, the invention makes the expressed language more vivid and comprehensible.

Description

IX. Description of the Invention

[Technical Field of the Invention]

The present invention relates to multimedia technology and, more particularly, to a language-auxiliary expression system and method that display the scene data corresponding to spoken language.

[Prior Art]

In language communication, "listening" is an important way for people to obtain information and acquire knowledge, while "speaking" is the way they output information and express their own thoughts. Speaking is thus an essential part of language communication: a speaker who describes a subject vividly arouses the listeners' interest, which makes the exchange between the two sides more interactive and achieves the speaker's purpose of conveying information.

However, words alone are not enough to make spoken language vivid, because words are, after all, abstract and only an indirect expression of thought; speakers therefore accompany what they say with facial expressions or gestures so that their speech is more effective, lively and interesting. In children's language education, for example, cultivating children's appreciation of the beauty of language is constrained by the children's physical and mental development: they have some difficulty understanding the meaning of a story and can hardly form clear mental pictures through imagination while the story is read aloud, which hampers the cultivation of their sense of language. As is well known, creating a good speaking environment is a way to awaken the listeners' awareness and arouse a strong desire for expression, and a good speaking environment also readily evokes the listeners' emotional resonance and helps them correctly understand what the language is meant to convey.

Furthermore, multimedia technology, with its combination of sound and image, integration of audio and video, blend of motion and stillness, and strong expressive appeal, is rapidly being applied in more and more technical fields. In view of this, combining multimedia technology with the process in which children listen to language teaching materials such as fairy tales or children's poems, so that the language information to be expressed is made more vivid and easier to understand by way of scene reproduction, is precisely the problem to be solved by the present case.

[Summary of the Invention]

In view of the above drawbacks of the prior art, a primary objective of the present invention is to provide a language-auxiliary expression system and method that make the expressed language information more vivid and easier to understand.

To achieve the above and other related objectives, the present invention provides a language-auxiliary expression system and method. The language-auxiliary expression system of the present invention receives speech information and synchronously plays the corresponding scene data. The system comprises a setting module for setting feature keyword data and the scene data corresponding thereto; a recognition module for collecting speech information and recognizing, in the speech information, the feature keywords set by the setting module; and a processing module for extracting and playing the scene data corresponding to a feature keyword, according to the feature keyword recognized by the recognition module and the setting data of the setting module.

The scene data is one of animation data and image data. Furthermore, the processing module plays the extracted scene data in either a switching mode or an overlay mode, and the recognition module further comprises a matching unit for analyzing the feature keywords recognized by the recognition module according to preset rules, so as to find matching feature keywords in the setting data of the setting module.

The language-auxiliary expression method of the present invention receives speech information and synchronously plays the corresponding scene data. The method comprises: setting feature keyword data and the scene data corresponding thereto; collecting speech information and recognizing the feature keywords contained therein; and extracting the scene data corresponding to the recognized feature keywords and playing it. Here too the scene data is one of animation data and image data, the extracted scene data is played in either a switching mode or an overlay mode, and the recognized feature keywords are analyzed according to preset rules so as to find matching feature keywords in the setting data.

In short, the language-auxiliary expression system and method of the present application collect and recognize speech information in order to find and play the scene data corresponding to that speech information, so that, by combining multimedia techniques, the expressed language information becomes more vivid and easier to understand.
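As a concrete illustration of the keyword and scene records described in the summary above, the following Python sketch shows one way the setting module's data might be organized. It is only a sketch: the class and field names, the category labels and the layer field (used further below for overlay playback) are assumptions drawn from this description, not data structures defined by the patent.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class SceneType(Enum):
    ANIMATION = "animation"        # animated scene data
    STILL_IMAGE = "still_image"    # static image data


@dataclass
class SceneData:
    """One piece of scene data associated with a feature keyword."""
    media_file: str                      # e.g. "snowing.gif" (hypothetical file name)
    scene_type: SceneType
    sound_effect: Optional[str] = None   # optional sound-effect data for the scene
    layer: int = 0                       # playback-effect / scenery-layer class,
                                         # consulted when scenes are overlaid


@dataclass
class KeywordEntry:
    """A preset feature keyword and the scene data that describes it."""
    keyword: str                         # e.g. "heavy snow", "walking", "smiling"
    category: str                        # e.g. "weather", "action", "expression"
    scenes: list[SceneData] = field(default_factory=list)
```

Each keyword may carry several scene records, reflecting the statement that one or more pieces of scene data can be associated with the meaning expressed by each feature keyword.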

[Embodiments]

The present invention is described below by way of specific examples, from which those familiar with the art can readily understand its other advantages and effects as disclosed in this specification. The invention may also be carried out or applied through other, different specific examples, and the details in this specification may likewise be modified and changed on the basis of different viewpoints and applications without departing from the spirit of the invention.

Fig. 1 is a block diagram illustrating the basic architecture of the language-auxiliary expression system of the present invention. As shown in the figure, the language-auxiliary expression system 100 receives external speech information and recognizes it, so as to find and play the scene data corresponding to that speech information, thereby making the expressed language information more vivid and easier for the listeners to understand and remember. The language-auxiliary expression system 100 may furthermore be used with an electronic device (not shown) having multimedia processing functions, such as a personal computer (PC), a notebook computer (NB), a pocket PC (PPC), a personal digital assistant (PDA) or a cell phone, the electronic device being provided with a sound collector 2 for collecting speech information and a display 3 for playing and displaying the scene data.

As shown in Fig. 1, the language-auxiliary expression system 100 comprises a setting module 110, a recognition module 120 and a processing module 130.

The setting module 110 allows the user to set feature keyword data and the scene data corresponding to each feature keyword. The scene data may be animated scene data or static image data and may further include sound-effect data for the animated scenes and static images; the setting module 110 stores these settings in a database 101. In this embodiment, the scene data stored in the database 101 is classified according to its playback effect (for example, by scenery layer), so that the processing module 130 can play the scene data in an overlay manner according to this classification (described in detail below).

The recognition module 120 collects speech information. In this embodiment, it may do so through the sound collector 2, for example through a microphone that captures a speaker's voice during a lecture, or through an audio processing unit (not shown) of the electronic device that captures the speech information output by the device. The recognition module 120 then recognizes the collected speech information in order to find the feature keywords it contains, and further comprises a matching unit 121 which analyzes the recognized feature keywords according to preset rules so as to find matching feature keywords in the setting data stored in the database 101. For example, when the feature keyword "鵝毛大雪" (goose-feather snow) is recognized in the collected speech information, the matching feature keyword "大雪" (heavy snow) is found in the database 101 according to the preset rules.
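The preset matching rules themselves are not spelled out here, so the following Python sketch is only one plausible reading of the matching unit 121: a recognized phrase is trimmed and then matched exactly or by containment against the registered keywords, which is enough to reproduce the "鵝毛大雪" → "大雪" example above. All names and the sample keyword list are illustrative.

```python
from typing import Optional


def match_keyword(recognized: str, registered: list[str]) -> Optional[str]:
    """Find the registered feature keyword matching a recognized phrase.

    Assumed preset rule: prefer an exact match, otherwise accept a registered
    keyword that the recognized phrase contains, so that the phrase
    "鵝毛大雪" matches the registered keyword "大雪".
    """
    phrase = recognized.strip()
    for keyword in registered:
        if phrase == keyword:              # exact match first
            return keyword
    for keyword in registered:
        if keyword and keyword in phrase:  # containment as the fallback rule
            return keyword
    return None                            # no match; step S210 below loops back


# Illustrative registered keywords mirroring the examples in this description:
registered = ["大雪", "小雨", "行走", "小熊"]
assert match_keyword("鵝毛大雪", registered) == "大雪"
assert match_keyword("行走", registered) == "行走"
```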

The processing module 130 extracts the scene data corresponding to the feature keyword, according to the feature keyword recognized by the recognition module 120 and the setting data of the setting module 110, and outputs the scene data to the display 3 for playback. In this embodiment, the user may also preselect a playback mode according to personal preference, such as the switching mode or the overlay mode, and the processing module 130 plays the extracted scene data accordingly; during playback, the display time of the scene data can be manually extended or shortened.

Fig. 2 is a flowchart illustrating the operation of the language-auxiliary expression method of the present invention. As shown in the figure, step S202 is performed first: feature keyword data is set, for example keywords describing weather features such as "小雨" (light rain) and "大雪" (heavy snow), keywords describing action features such as "行走" (walking), "跑步" (running) and "跳躍" (jumping), or keywords describing expression features such as "微笑" (smiling), "難過" (sad) and "生氣" (angry). For the meaning expressed by each feature keyword, one or more pieces of scene data are provided and associated with it; the scene data may specifically be animated scene data or static image data and is stored in the database 101. In this embodiment, the animated scene data stored in the database 101 is classified according to its playback effect, so that the scene data can later be played in an overlay manner when required. The method then proceeds to step S204.

In step S204, the required speech information is collected. For example, when a teacher tells a story to the students, the teacher's voice can be captured through the sound collector 2; alternatively, the speech information output by the electronic device (not shown), such as a program broadcast by a radio station, can be captured. The method then proceeds to step S206.

In step S206, the feature keywords contained in the speech information are recognized. For example, the feature keyword "鵝毛大雪" (goose-feather snow) is recognized in the speech information "天上下著鵝毛大雪" (goose-feather snow is falling from the sky), and the feature keywords "小熊" (little bear) and "行走" (walking) are recognized in the speech information "一隻小熊在艱難地行走" (a little bear is walking with difficulty). The method then proceeds to step S208.

In step S208, the database 101 is searched for feature keywords matching the recognition result. Specifically, the recognized feature keywords are analyzed according to the preset rules so as to find matching feature keywords in the database 101: for example, "鵝毛大雪" is analyzed so that the matching feature keyword "大雪" is found in the database 101, and the keywords recognized from the sentence about the little bear are analyzed and matched against the corresponding feature keywords registered in the database 101 in the same way. The method then proceeds to step S210.

In step S210, it is determined whether a feature keyword matching the recognition result has been found in the database 101; if so, the method proceeds to step S212, otherwise it returns to step S204.

In step S212, the scene data corresponding to the matched feature keyword is extracted from the database 101, and the method then proceeds to step S214.

In step S214, the extracted scene data is output to the display 3 for playback, in this embodiment according to the playback mode, such as the switching mode or the overlay mode, preset by the user. For example, when the teacher reads "天上下著鵝毛大雪", the display 3 shows a scene in which snow is falling; when the teacher goes on to read "一隻小熊在雪地裡艱難地行走" (a little bear is walking with difficulty in the snow), the picture of the bear is overlaid on that scene, so that a user who has preset the overlay mode sees a little bear walking in the snow. During playback, the user may also press keys to extend or shorten the display time of each piece of scene data, so that the played pictures stay synchronized with the content of the speech.
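Taken together, steps S202 to S214 form a simple collect–recognize–match–play loop. The Python sketch below restates that loop, reusing the match_keyword sketch from above; the recognizer and display objects are hypothetical stand-ins for the sound collector 2, the recognition module 120 and the display 3, each keyword is assumed to map to a single scene record with a layer attribute, and the switching/overlay behaviour shown is only one plausible reading of the two playback modes.

```python
def run_language_aux_expression(settings, recognizer, display, mode="overlay"):
    """Run the S204-S214 loop; S202 is assumed to have produced `settings`,
    a mapping from each registered feature keyword to its scene record."""
    active_layers = {}                                    # layer -> scene on screen

    while True:
        phrase = recognizer.listen()                      # S204: collect speech
        if phrase is None:                                # no more speech: stop
            break

        recognized = recognizer.extract_keywords(phrase)  # S206: recognize keywords
        matched = [match_keyword(k, list(settings)) for k in recognized]   # S208
        matched = [k for k in matched if k is not None]
        if not matched:                                   # S210: nothing matched,
            continue                                      # collect speech again

        for keyword in matched:
            scene = settings[keyword]                     # S212: extract scene data
            if mode == "switching":
                active_layers = {scene.layer: scene}      # S214: replace the picture
            else:                                         # overlay: keep earlier
                active_layers[scene.layer] = scene        # layers, add the new one
            display.compose(sorted(active_layers.items()))  # S214: play on display 3
```

In overlay mode, the scene data introduced by earlier sentences stays on screen in its layer order, which is how the snowing background and the walking bear of the example above can appear together; in switching mode each newly matched keyword simply replaces the current picture.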

In summary, the language-auxiliary expression system and method of the present invention collect speech information, recognize and analyze it to determine whether it contains the preset feature keywords, and extract the scene data corresponding to those feature keywords for playback and display, thereby making language expression more vivid and easier to understand and remember.

The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit it. Anyone familiar with the art may modify or alter the above embodiments without departing from the spirit and scope of the invention; the scope of protection of the invention shall therefore be as listed in the appended claims.

[Brief Description of the Drawings]

Fig. 1 is a block diagram illustrating the basic architecture of the language-auxiliary expression system of the present invention; and
Fig. 2 is a flowchart illustrating the operation of the language-auxiliary expression method of the present invention.

[Description of Main Element Symbols]

100   language-auxiliary expression system
101   database
110   setting module
120   recognition module
121   matching unit
130   processing module
2     sound collector
3     display
S202–S214   steps


Claims (1)

X. Scope of the Patent Application:

1. A language-auxiliary expression system for receiving speech information and synchronously playing the corresponding scene data, the system comprising:
   a setting module for setting feature keyword data and the scene data corresponding thereto;
   a recognition module for collecting speech information and recognizing, in the speech information, the feature keywords set by the setting module; and
   a processing module for extracting the scene data corresponding to a feature keyword, according to the feature keyword recognized by the recognition module and the setting data of the setting module, and playing the extracted scene data.

2. The language-auxiliary expression system of claim 1, wherein the scene data is one selected from the group consisting of animation data and image data.

3. The language-auxiliary expression system of claim 1, wherein the processing module plays the extracted scene data in one of a switching mode and an overlay mode.

4. The language-auxiliary expression system of claim 3, wherein the setting module further classifies the scene data according to its playback effect, and the processing module plays the scene data in the overlay mode according to the classification of the scene data.

5. The language-auxiliary expression system of claim 1, wherein the recognition module further comprises a matching unit for searching the setting data of the setting module for a feature keyword matching the recognized feature keyword.

6. A language-auxiliary expression method for receiving speech information and synchronously playing the corresponding scene data, the method comprising:
   setting feature keyword data and the scene data corresponding thereto;
   collecting speech information and recognizing the feature keywords contained in the speech information; and
   extracting the scene data corresponding to the recognized feature keywords and playing it.

7. The language-auxiliary expression method of claim 6, wherein the scene data is one selected from the group consisting of animation data and image data.

8. The language-auxiliary expression method of claim 6, wherein the extracted scene data is played in one of a switching mode and an overlay mode.

9. The language-auxiliary expression method of claim 8, further comprising, in the data setting step, classifying the scene data according to its playback effect, and playing the scene data in the overlay mode according to the classification of the scene data.

10. The language-auxiliary expression method of claim 6, further comprising analyzing the recognized feature keywords according to preset rules, so as to search the setting data for matching feature keywords.
TW95124762A 2006-07-07 2006-07-07 Language-auxiliary expression system and method TW200805250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW95124762A TW200805250A (en) 2006-07-07 2006-07-07 Language-auxiliary expression system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW95124762A TW200805250A (en) 2006-07-07 2006-07-07 Language-auxiliary expression system and method

Publications (1)

Publication Number Publication Date
TW200805250A true TW200805250A (en) 2008-01-16

Family

ID=44766068

Family Applications (1)

Application Number Title Priority Date Filing Date
TW95124762A TW200805250A (en) 2006-07-07 2006-07-07 Language-auxiliary expression system and method

Country Status (1)

Country Link
TW (1) TW200805250A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102063252A (en) * 2010-12-23 2011-05-18 苏州佳世达电通有限公司 Electronic device having reading assisting function and reading assisting method

Similar Documents

Publication Publication Date Title
CN113569088B (en) Music recommendation method and device and readable storage medium
JP5866728B2 (en) Knowledge information processing server system with image recognition system
CN112015949B (en) Video generation method and device, storage medium and electronic equipment
KR100762585B1 (en) Apparatus and method of music synchronization based on dancing
CN101271528B (en) Method and device for outputting image
CN105336329B (en) Voice processing method and system
Aradhye et al. Video2text: Learning to annotate video content
JP4643099B2 (en) A basic entity-relational model for comprehensive audiovisual data signal descriptions
WO2020232796A1 (en) Multimedia data matching method and device, and storage medium
US20100023553A1 (en) System and method for rich media annotation
CN100538823C (en) Language aided expression system and method
CN106462609A (en) Methods, systems, and media for presenting music items relating to media content
CN112270768B (en) Ancient book reading method and system based on virtual reality technology and construction method thereof
Rudinac et al. Learning crowdsourced user preferences for visual summarization of image collections
Maybury Multimedia information extraction: Advances in video, audio, and imagery analysis for search, data mining, surveillance and authoring
US11410706B2 (en) Content pushing method for display device, pushing device and display device
TW200805250A (en) Language-auxiliary expression system and method
CN114363714B (en) Title generation method, title generation device and storage medium
Clark A Postcolonial MIR?
Zhang et al. Analysis of application and creation skills of story-based MV micro video and big multimedia data in music communication
JP2019061428A (en) Video management method, video management device, and video management system
Tragaki Made in Greece: studies in popular music
WO2021120174A1 (en) Data processing method, apparatus, electronic device, and storage medium
JP2011164865A (en) Image-selecting device, image-selecting method, and image-selecting program
Akiyama Nothing Connects Us but Imagined Sound