TW201330600A - Video and image information embedded technology system - Google Patents
- Publication number: TW201330600A
- Application number: TW101101261A
- Authority
- TW
- Taiwan
- Prior art keywords
- image
- information
- video
- target
- tag
- Prior art date
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
The present invention provides a video and image information embedding technology system, in particular an interactive system that embeds information related to specific objects appearing in video/image content and allows users to click freely to obtain further related information.
In everyday life, people are often curious about information related to objects that appear in video or image content, but existing technology does not let them learn more about a specific object in a video/image. For example, while watching a drama, viewers may want to know what brands of clothes, pants, shoes, or even accessories the lead actors are wearing, and where to buy them. Likewise, while watching a cooking show, viewers may wonder what special cookware was used to prepare the featured dish, what seasonings and ingredients the chef used, and where they can be purchased.
Unfortunately, current technology cannot satisfy this curiosity about specific objects within videos/images. Information about a specific object cannot easily be obtained; it can only be acquired through passive searching, retrieval, and inquiry. The present invention embeds information related to specific objects in video/image content and allows clicking through to external links. While watching a video/image, a user need only move the cursor over an object of interest, and all information about that object appears and can be clicked for further details. Through this interactive technology, users obtain more information from videos/images in a more convenient, simple, and intuitive way.
In the past, external information related to video/image content could not be obtained directly. When users became interested in specific content or objects in a video/image, they had to obtain that information in an indirect, passive, and separate way (for example, through a search engine).
At present, although some applications already embed relevant external information in images [1], they suffer from inherent limitations: (a) the external information is embedded in advance and therefore lacks flexibility; and (b) the approach has not been extended to video.
As for video, existing embedding techniques are limited to subtitle embedding or comment embedding [2], and both lack interactivity with the user (for example, the external information cannot respond to the user's interests or focus of attention).
In view of the above deficiencies, the inventors of the present invention collected relevant materials and, after evaluation and consideration from multiple perspectives and with years of experience accumulated in this industry, devised a system that embeds information related to specific objects in video/image content and allows clicking through to external links. While watching a video/image, a user need only move the cursor over an object of interest, and all information about that object appears and can be clicked for further details. Through this interactive technology, users obtain more information from videos/images in a more convenient, simple, and intuitive way.
The present invention provides users with an entirely new interactive way to obtain or embed external information about specific targets in video/image content. The invention can be used for applications such as product placement and interactive provision of external information. In the present invention, all embedded external information is displayed in the form of a label, referred to herein as a "tag".
The invention mainly comprises two parts: a client (110) and a server (120). The server comprises a client-server interaction interface module (121), a video/image database (122), a tag database (123), a video/image content analysis module (124), a video/image-external information relationship analysis module (125), and an external information retrieval engine (126). The client comprises a client-server interaction interface module (111), a video/image content analysis module (112), a tag embedding engine (113), an original video/image database (114), a tag information database (115), and a user interface module (116).
On the server side, the client-server interaction interface module provides the interaction between the client and the server, handling file upload/download, user data verification, user login, operation command transmission, and other operations. The video/image database stores the video/image data, and the tag database stores the generated tag files, which contain external information related to the video/image content. In general, each video/image has one corresponding tag file in the tag database. The server-side video/image content analysis module receives operation commands from the client via the client-server interaction interface module. When triggered by an operation command, it segments, tracks, and recognizes the target that the user selected in the video/image. The external information retrieval engine obtains information from the video/image content analysis module and receives operation commands from the client via the client-server interaction interface module. When triggered by a client command, it obtains the target analysis results from the video/image content analysis module and retrieves external information related to the target from public search engines, the tag database, or other databases. In addition, the video/image-external information relationship analysis module receives data from three other modules: the external information retrieval engine, the video/image content analysis module, and the client-server interaction interface module. After receiving an operation command from the client-server interaction interface module, it further receives the target analysis results from the video/image content analysis module and the external retrieval results from the external information retrieval engine; it then generates the tag information associated with the target in the video/image specified by the client command. The generated tag information is transmitted to the client through the client-server interaction interface module, and is also saved as a tag file stored in the server-side tag database.
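The relationship-analysis step described above combines two inputs into one tag record: the target analysis results and the retrieved external information. A schematic sketch in Python (all field names and the merge heuristic are illustrative assumptions; the patent does not specify a concrete data format):

```python
def build_tag(analysis_result, retrieval_results):
    """Merge target analysis (position/identity) with retrieved external
    information into a single tag record, as the relationship analysis
    module is described to do. Field names are hypothetical."""
    return {
        "target": analysis_result["label"],
        "region": analysis_result["region"],   # where the target appears
        "frames": analysis_result["frames"],   # when the target appears
        # keep only retrieved items that mention the recognized target
        "info": [r for r in retrieval_results
                 if analysis_result["label"] in r["text"].lower()],
    }
```

The filtering step stands in for the module's analysis of "the relationship between the external information and the target itself"; a real system would rank and match retrieval results far more carefully.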
On the client side, the client-server interaction interface module likewise provides the interaction between the client and the server, handling file upload/download, user information verification, user login, operation command transmission, and other operations. The user interface module provides the interaction with the user or tag producer: uploading videos/images, adding tag information, creating videos/images with embedded external information, playing videos/images with embedded external information, capturing the target or target information of interest to the user, and other operations. The client's original video/image database stores the original video/image files. New video/image files can be added to it from the server-side video/image database or by the user through the user interface module. The tag information database stores all available external information that may be used to generate tags; likewise, new external information can be added to it from the server-side tag database or by the user through the user interface module. The client-side video/image content analysis module receives operation commands from the user or tag producer through the user interface module. When triggered by an operation command, it segments, tracks, and recognizes the target that the user selected in the video/image. The tag embedding engine receives information from three modules: the video/image content analysis module, the tag information database, and the user interface module. When triggered, it first obtains tag information from the tag information database or directly from the tag producer through the user interface module. It also receives the analyzed target information from the video/image content analysis module. The tag embedding engine then generates the external information tag associated with the user-specified target. The generated tag information is saved as a file and stored in the client's tag information database, or uploaded to the server-side tag database.
The original video/image and its corresponding tag information file together form a video/image unit with embedded external information. In general, each such unit contains one original video/image file and one tag file; however, a unit is not limited to a single tag file, and may comprise one original video/image file together with multiple tag files. A video/image unit with embedded external information is played through the user interface module to provide the interactive external information display. The tag information file records the target's position in the video/image, the size of its region, and all external information related to the target. During playback, the player in the user interface module also analyzes the tag information file in real time according to the user's actions (for example, the position of the user's cursor in the video). When the user moves the cursor into the region of a target that the tag file designates as carrying external information, the external information tag corresponding to that target pops up automatically; otherwise, no tag pops up and the video/image plays as usual.
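The hover-triggered popup just described is, at bottom, a hit test of the cursor position against the target regions recorded in the tag file. A minimal sketch in Python (the tag-file fields, names, and example values are illustrative assumptions):

```python
# Hypothetical tag-file entries: each tag records the frame range in which
# the target appears, its bounding region, and the external information.
TAGS = [
    {"target": "jacket", "frames": (120, 300),
     "region": (40, 60, 120, 200),   # (x, y, width, height)
     "info": {"brand": "ExampleBrand", "url": "http://example.com/jacket"}},
]

def tag_at(frame, cursor_x, cursor_y, tags=TAGS):
    """Return the tag whose region contains the cursor at this frame, else None."""
    for tag in tags:
        start, end = tag["frames"]
        x, y, w, h = tag["region"]
        if start <= frame <= end and x <= cursor_x <= x + w and y <= cursor_y <= y + h:
            return tag          # popup: display tag["info"]
    return None                 # no popup; playback continues as usual
```

During playback the player would call `tag_at` on every cursor move; a non-`None` result triggers the popup, and `None` leaves playback unchanged, mirroring the behavior described above.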
In the following sections, we describe in detail two specific embodiments (or two operating modes) of our inventive framework: the tag producer-user mode (Embodiment 1) and the user-centric mode (Embodiment 2).
In Embodiment 1 (tag producer-user mode), on the client side, tag producers (such as advertisers and brand companies) first select a suitable video/image from the original video/image database or upload one of their own. Through the user interface module, the tag producer then selects, within that video/image, the targets into which they wish to embed external information (for example, clothes or other objects). The user interface module then triggers the video/image content analysis module to automatically segment and track the selected targets. Meanwhile, the tag producer obtains the target tag information either by directly entering the relevant external tag information or by retrieving suitable tags from the tag information database. The user interface module then triggers the tag embedding engine to embed the tag information into the video/image and to generate a separate tag file. Once the tag file is generated, it and its corresponding video/image file are uploaded to the server through the client-server interaction interface module.
On the server side, the server receives the video/image file and the tag file through the client-server interaction interface module, and saves them to the server-side video/image database and tag database, respectively.
The video/image viewer (i.e., the user) watches the video/image through another client. The viewer first selects the video/image file of interest through the user interface module, which then obtains the video/image and its corresponding tag file directly from the server-side video/image database and tag database through the client-server interaction interface module. Once obtained, the files are played on the client. During playback, when the user moves the cursor over a target of interest in the video/image, the corresponding embedded tag information is triggered from the tag file and pops up. In this way, all external information related to the target is displayed in the pop-up tag for browsing and clicking, enabling further user interaction.
In Embodiment 2 (user-centric mode), on the client side, the video/image viewer (i.e., the user) first selects the video/image of interest through the user interface module. The user interface module obtains that video/image directly from the server-side video/image database and plays it on the client. When the user moves the cursor to, or selects, a target of interest, the server-side video/image content analysis module is triggered to automatically segment, track, and recognize the selected target. The output of the video/image content analysis module is the target's position in the video/image and the recognition results for the target. The recognized target information is then fed to the external information retrieval engine, which retrieves external information from public search engines, the tag database, or additional databases; its output is the external information related to the user-selected target. Finally, both the retrieved external information and the recognized target information are fed into the video/image-external information relationship analysis module, which analyzes the relationship between the external information and the target itself and generates a tag suited to the user-selected target. The generated tag pops up beside the user-selected target in the video, and the tag information is also saved to a tag file stored in the server-side tag database. Notably, unlike Embodiment 1, where the tag producer generates tags in advance, in Embodiment 2 the tag information is generated dynamically in real time.
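The user-centric flow above chains four steps: content analysis, external retrieval, relationship analysis, and the popped-up tag. As a pipeline sketch (the stage functions are injected stand-ins; real segmentation, recognition, and search are far more involved):

```python
def on_user_select(frame, x, y, analyze, retrieve, relate):
    """Embodiment 2 flow: the user's selection triggers server-side
    analysis, external retrieval, relationship analysis, and a dynamically
    generated tag. `analyze`, `retrieve`, and `relate` are hypothetical
    stand-ins for the modules described in the text."""
    result = analyze(frame, x, y)          # segment / track / recognize target
    external = retrieve(result["label"])   # public search engine, tag DB, ...
    tag = relate(result, external)         # generate a tag for this target
    return tag                             # popped up beside the target; also saved server-side
```

The key contrast with Embodiment 1 is visible in the control flow: nothing is precomputed, and every stage runs in response to the user's selection.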
A block diagram of the framework of the present invention is shown in FIG. 1. A flowchart of playing a video/image with embedded external information is shown in FIG. 2. Implementation flowcharts of Embodiment 1 and Embodiment 2 are shown in FIG. 3 and FIG. 4, respectively.
- 110 ... Client
- 120 ... Server
- 121 ... Client-server interaction interface module
- 122 ... Video/image database
- 123 ... Tag database
- 124 ... Video/image content analysis module
- 125 ... Video/image-external information relationship analysis module
- 126 ... External information retrieval engine
- 111 ... Client-server interaction interface module
- 112 ... Video/image content analysis module
- 113 ... Tag embedding engine
- 114 ... Original video/image database
- 115 ... Tag information database
- 116 ... User interface module
FIG. 1 Block diagram of the framework of the present invention (horizontal stripes denote the tag producer-user mode, vertical stripes denote the user-centric mode, and cross-hatched stripes denote both modes used together).
FIG. 2 Flowchart of playing a video/image with embedded external information.
FIG. 3 Implementation flowchart of Embodiment 1.
FIG. 4 Implementation flowchart of Embodiment 2.
Claims (10)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
TW101101261A TW201330600A (en) | 2012-01-12 | 2012-01-12 | Video and image information embedded technology system |
Publications (1)
Publication Number | Publication Date |
---|---|
TW201330600A true TW201330600A (en) | 2013-07-16 |
Family
ID=49225908
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
TW101101261A TW201330600A (en) | 2012-01-12 | 2012-01-12 | Video and image information embedded technology system |
Country Status (1)
Country | Link |
---|---|
TW (1) | TW201330600A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9899057B2 (en) | 2016-01-22 | 2018-02-20 | Innomind Solution Company Limited | Control method for synchronized video, control system for synchronized video and electronic apparatus thereof |