TW201237802A - Content-providing system using invisible information, invisible information embedding device, recognition device, embedding method, recognition method, embedding program, and recognition program - Google Patents

Content-providing system using invisible information, invisible information embedding device, recognition device, embedding method, recognition method, embedding program, and recognition program Download PDF

Info

Publication number
TW201237802A
TW201237802A TW100145223A TW100145223A TW201237802A TW 201237802 A TW201237802 A TW 201237802A TW 100145223 A TW100145223 A TW 100145223A TW 100145223 A TW100145223 A TW 100145223A TW 201237802 A TW201237802 A TW 201237802A
Authority
TW
Taiwan
Prior art keywords
information
image
embedding
visualized
embedded
Prior art date
Application number
TW100145223A
Other languages
Chinese (zh)
Inventor
Kenichi Sakuma
Naoto Hanyu
Takakuni Douseki
Kazuma Kitamura
Hiroshi Fukui
Original Assignee
Shiseido Co Ltd
Ritsumeikan Trust
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shiseido Co Ltd, Ritsumeikan Trust filed Critical Shiseido Co Ltd
Publication of TW201237802A publication Critical patent/TW201237802A/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

This content-providing system using invisible information comprises: an invisible information embedding device that embeds invisible information at a predetermined location of an acquired image; and a recognition device that recognizes an object and invisible information contained in the image obtained by the embedding device. The embedding device has: an object-for-embedding setting means that sets an object to embed invisible information from among objects contained in the acquired image; and an invisible information embedding means that, in the periphery of the object acquired by the object-for-embedding setting means, embeds invisible information corresponding to the object. The recognition device has: an object extraction means that extracts an object from the invisible information embedding region contained in the image; an invisible information analysis means that, when the object has been extracted by the object extraction means, analyzes the processing contents with respect to the object from the invisible information; and a display information generation means that generates an object that displays an image in accordance with the processing contents obtained by the invisible information analysis means.

Description

201237802 六、發明說明: 【發明所屬之技術領域】 本發明係關於使用非視覺化f訊之内容提供系統、非視 覺化資訊之埋入裝置、認識裝置、埋入方法、認識方法、 埋二程式及認識程式’尤其關於使用用以迅速進行非視覺 化資訊之埋人或擷取、讀供附加價值性優良之高精度的 内容服務之非視覺化資訊之内容提供系統、非視覺化資訊 之埋入裝置、認識裝置、埋人方法、認識方法、埋入程式 及認識程式。 【先前技術】 先前隨著附帶相機之行動終端器等的普及而提供使用靜 2及動畫等之各種數位内容。又,先前可使用行動終端器 專的相機,讀取紙等印刷媒體上所列印之qr碼等(註冊商 1 票)2維碼,而存取自讀取之2維碼所獲得之内容,或顯示 與内容對應之訊息等。 然而,2維碼外觀不佳,且需要用以配置2維碼之空間。 對此,近年來揭示一種不使用2維碼而自内容之 =訊,並基於所取得之資料㈣叙處理之技術(例 如參照專利文獻丨及專利文獻2)。 =利文獻1所不之技術揭示—種圆像處理裝置,其且備 :域特定機構’其將圖像中所含之特Μ域作為操取條 ’取得與圖像屬性相關之指示資訊,並基於所取得之指 ==自圖像中所應掏取之區域,,專利文獻1 像中之特徵量而進行與背景圖像分離之技 160697.doc 201237802 術。 又專利文獻2所示之技術係揭示一種圖像處理裝置,其 具備輸人圖像資料之輸人機構;編碼圖像資料而產生編碼 資料之編碼機構,形狀資料產生機構,其自圖像資料擁取 任意形狀之物件並產生物件之形狀資料;及埋入機構,其 藉由電子浮水印將形狀資料埋人編碼資料。又,專利文獻 2中揭示-種於圖像中埋人電子浮水印並將埋入電子泮 水印之位置與未埋人之位置之交界作為背景分離之資訊而 使用之技術。 先前技術文獻 專利文獻 專利文獻1:日本特開2〇〇6-13722號公報 專利文獻2:日本特開2004-80096號公報 【發明内容】 發明所欲解決之問題 然而,例如專利文獻1所示之技術由於是使用圖像中之 特徵量進行與背景圖像之分離,故當背景圖像與物件之顏 色相同或類似之情形時,無法在正確的位置自圖像中之特 徵量分離出物件。 又,專利文獻2所示之技術中,在對圖像埋入2維碼等之 情形時,必須在將原始圖像轉換為頻率區域之後埋入2次 元後’進而轉換為空間區域而進行埋入。又,讀出2維碼 之情形時,必須將輸入之圖像轉換為頻率區域,且於讀出 編竭後轉換為空間區域而榻取編碼。因此,在埋入或讀出 160697.doc 201237802 編碼時會產生較多圖像處理,故不適用於行動終端器等低 規格機器之處理或即時處理。 本發明係鑒於上述課題而完成者,其目的在於提供—種 使用非視覺化資訊之内容提供系統、非視覺化資訊之埋入 裝置、認、識裝置、埋入方法、認識方法、埋入程式及認識 程式,用以迅速進行非視覺化資訊之埋入或擷取、且提供 附加價值性優良之高精度之内容服務。 解決問題之技術手段 根據本發明之一態樣,使用非視覺化資訊之内容提供系 統包含:非視覺化資訊之埋入裝置,其於取得之圖像之特 疋位置埋入非視覺化資訊;及認識裝置,其認識藉由該埋 入裝置所獲得之圖像中所含之物件及非視覺化資訊;其特 徵在於,上述埋人裝置包含:埋人對象物件設定機構,其 自上述取得之圖像中所含之物件而設定所要埋人非視覺化 資訊之物件;及非視覺化資訊埋人機構,其於藉由上述埋 入對象物件設定機構所獲得之物件之,埋人與上述物 件對應之上料視覺化f訊;域㈣裝置包含:物件操 取機構,其自上述圖像中 吓a t非視覺化資訊之埋入區域 操取物件;非視覺化資 ^ 解析機構,其當藉由上述物件擷 取機構擷取出上述物件 .. ...,, ^ 之11形時,自上述非視覺化資訊解 析對上述物件之處理内 由上述麵覺化資贿/ f1fl產生㈣,其與藉 4 ^- °斤機構所獲得之處理内容對應而產 生顯不於晝面之物件。 又,根據本發明之—離 〜、樣’本發明提供一種埋入裝置, 160697.doc 201237802 其係於取知之圖像之特定位置埋&amp;非視覺化資訊者,其特 徵在於包含’圖像解析機構’其取得上述圖像中所含之物 件及位置資訊;埋人對象物件設定機構,纟自藉由上述圖 像解析機構所獲得之物件而設定上述圖像為埋人對象之物 件,及非視覺化資訊埋入機構,其於藉由上述埋入對象物 件設定機構所獲得《物件之周圍,埋人與上述物件對應之 上述非視覺化資訊。 又根據本發明之一態樣,本發明提供一種認識裝置, 其係認識取得之圖像中所含之物件及非視覺化資訊者,其 特徵在於包含.物件#|取機構,纟自上述圖像中所含之非 視覺化資訊之埋入區域操取物#;非視覺化資訊解析機 構,其當藉由上述物件擷取機構擷取出上述物件時,自上 述非視覺化資訊解析對上述物件之處理内容;及顯示資訊 產生機構’其與藉由上述非視覺化資訊解析機構所獲得之 處理内谷對應而產生顯示於畫面之物件。 又’根據本發明之—態樣,本發明提供—種非視覺化資 訊之埋人方法’其係於取得之圖像之特定位置埋入非視覺 化資訊者’其特徵在於包含:圖像解析步驟,係取得上述 圖像中所含之物件及位置資訊;埋人對象物件設定步驟, 係自藉由上述圖像解析步驟所獲得之物件而設定上述圖像 為埋入對象之物件;及非視覺化資訊埋入步驟,其於藉由 上述埋入#象物彳設定步驟所獲得之物件之周圍,埋入與 上述物件對應之上述非視覺化資訊。 又’根據本發明之—態樣’本發明提供_種認識方法, 160697.doc 201237802 2係認識取得之圖像中所含之物件及非視覺化資訊者,盆 :徵在於包::物件操取步驟,係、自上述圖像中所包含: 1:視,化貧況之埋入區域擷取物件;非視覺化資訊解析 二驟,係當藉由上述物件掏取步驟梅取出上述物件之情形 ’’自上㈣視覺化資訊解析與上述物件對應之處理内 ^及顯示資訊產生㈣’係與藉由上述非視覺化資訊解 析步驟所獲得之處理内容對應而產生顯示於畫面之物件。 又’根據本發明之一能媒,士 β , K心樣,本發明提供一種埋入程式, 其使電腦作為上述埋人裝置所包含之圖像解析機構、埋入 對象物件設定機構及非視覺化資訊埋入機構之各機構而發 揮功能。 根據本發明t ,態樣,本發明提供一種認識程式, -使電腦作為上述s忍識裝置所包含之物件擁取機構、非視 覺化資訊解析機構及顯示資訊產生機構之各機構而發揮功 根據本發明之g樣,本發明可提供—種快速進行非視 覺化資訊之埋入或擷取且附加價值性優良之高精度之内容 服務。 【實施方式】 參照添付之圖面,通過閱讀以下詳細說明,本發明之其 他目的、特徵及優勢可更加明瞭。 &lt;關於本發明&gt; 根據本發明之一態樣,例如相畫面上顯示之圖像或映 像或者包含紙或膜片等各種印刷媒體等之相片、明信 160697.doc 201237802 片、海報及卡片等,藉由肉眼無法認識之加工,於圖像中 所含之物件(物體資訊)之周圍埋入非視覺化資訊(標記), 藉由數位相機或行動終端器(行動電話、智慧手機(註冊商 標)等)上設置之相機等之攝像機構拍攝該圖像或映像、印 刷媒體等之局部,然後對如個人電腦或行動終端器、 註冊商標财板PC等匯入所拍狀圖像或映像等並 進行圆像處理,以認識所埋入之標記並擷取物件。 根據本發明之一態樣,即使為例如行動終端器等之 電令或性旎受限之機種,亦可實現組合可認識經埋入之標 記之標記埋入方法及圖像處理之標記認識。 再者,根據本發明之一態樣,自經認識之標記之資訊中 取得例如用以使物件動作之埋入資訊,基於所取得之埋入 資訊’對經㈣之對象之物件進行處理。χ,根據本發明 之一態樣,提供-種内容提供系統,其係進行自操取非視 覺化資訊之埋X乃至於針對圖I中之物件之處理内容之一 連串動作。 又,所謂非視覺化資訊制可為使用者之視網膜無法感 知之狀態之資訊,亦可為使用者之視網膜可感知但使用者 之大腦並不能將其識別為資訊之狀態之資訊等。 又,非視覺化資訊可為調整圖像中之每—像素之明度等 (明暗差)之高頻部或低頻部,亦可為可直接認識之文字資 訊,或亦可為記號或數字、標記、圖案、色彩叫維碼、2 維碼等’又進而亦可為該等之組合q,作為編碼資訊, 可使用例如QR編料2維條碼等^又,本實施形態之編碼 I60697.doc 201237802 資訊並非限定於QR碼,例如亦可使用jAN碼、或ITF碼、 NW-7、CODE39、CODE128、UPC、PDF417、CODE49、201237802 VI. Description of the Invention: [Technical Field] The present invention relates to a content providing system using non-visualized information, a buried device for non-visual information, a cognitive device, a burying method, a cognitive method, and a buried program. And the understanding of the program's use of non-visualized information content systems and non-visual information buried in the use of high-precision content services for the purpose of quickly non-visualized information. Devices, devices, methods of burying people, methods of understanding, methods of embedding, and programs of recognition. [Prior Art] Various digital contents such as static video and animation have been provided in the past with the spread of mobile terminal devices and the like. In addition, the camera of the mobile terminal can be used to read the 2D code of the qr code (registrar 1 ticket) printed on the printing medium such as paper, and access the content obtained from the read 2D code. , or display a message corresponding to the content, etc. However, the 2-dimensional code does not look good and requires a space for configuring the 2-dimensional code. In response to this, in recent years, a technique has been disclosed which does not use a two-dimensional code from the content and is based on the obtained data (IV) (for example, refer to Patent Document 专利 and Patent Document 2). </ RTI> <RTIgt; </ RTI> <RTIgt; </ RTI> <RTIgt; </ RTI> <RTIgt; </ RTI> <RTIgt; </ RTI> <RTIgt; </ RTI> <RTIgt; </ RTI> <RTIgt; </ RTI> <RTIgt; Based on the obtained finger == the area to be extracted from the image, the feature of the patent document 1 is separated from the background image 160697.doc 201237802. Further, the technique disclosed in Patent Document 2 discloses an image processing apparatus having an input mechanism for inputting image data, an encoding mechanism for encoding image data to generate encoded data, a shape data generating mechanism, and self-image data. The object of any shape is captured and the shape data of the object is generated; and the embedding mechanism is used to embed the shape data into the encoded data by electronic watermarking. Further, Patent Document 2 discloses a technique for planting an electronic watermark in an image and using the boundary between the position where the electronic watermark is buried and the position where the electronic watermark is buried as a background separation information. CITATION LIST Patent Literature Patent Literature 1: Japanese Laid-Open Patent Publication No. Hei. No. Hei. No. Hei. No. 2004-80096. Since the technique uses the feature quantity in the image to separate from the background image, when the background image is the same or similar to the color of the object, the object cannot be separated from the feature quantity in the image at the correct position. . Further, in the technique disclosed in Patent Document 2, when a two-dimensional code or the like is embedded in an image, it is necessary to embed the original image into a frequency region and then embed the second dimension, and then convert it into a spatial region and bury it. In. Further, when reading a two-dimensional code, it is necessary to convert the input image into a frequency region, and after reading and editing, it is converted into a spatial region and is encoded. Therefore, when embedding or reading 160697.doc 201237802 encoding, more image processing is generated, so it is not suitable for processing or immediate processing of low-standard machines such as mobile terminals. The present invention has been made in view of the above problems, and an object of the present invention is to provide a content providing system using non-visualized information, an embedding device for non-visual information, a recognition device, an embedding method, an understanding method, and an embedding program. And recognize programs to quickly embed or capture non-visual information and provide high-precision content services with added value. Technical Solution to Problem According to one aspect of the present invention, a content providing system using non-visualized information includes: a non-visualized information embedding device that embeds non-visualized information at a feature location of the acquired image; And an understanding device that recognizes an object and non-visual information contained in the image obtained by the embedding device; wherein the embedding device includes: an object to be buried object setting mechanism, which is obtained from the above An object contained in the image and an object to be buried with non-visual information; and a non-visual information buried mechanism, which is buried in the object obtained by the object object setting mechanism Corresponding to the top material visualization f; the domain (4) device comprises: an object manipulation mechanism, which frightens from the image in the buried area of the non-visualized information; the non-visualization resource analysis mechanism When the object picking mechanism extracts the object of the object ...., , ^, the shape of the object from the non-visual information analysis is generated by the above-mentioned facetization bribe/f1fl (4) 4 by which ^ - ° pounds processing content obtained by means of the correspondence is not to produce significant surface of the article day. Further, in accordance with the present invention, the present invention provides an embedding device, 160697.doc 201237802, which is embedded in a specific position of an image of the known image, and is characterized by including an 'image. The analysis unit 'obtains the object and position information contained in the image; the object to be embedded object setting means sets the image as the object to be buried by the object obtained by the image analysis means, and The non-visual information embedding mechanism obtains the non-visual information corresponding to the object by the surrounding object obtained by the embedding object object setting mechanism. According to another aspect of the present invention, the present invention provides an acknowledgment device that recognizes objects and non-visualized information contained in an acquired image, and is characterized by including an object #| taking mechanism, from the above figure a non-visualized information analysis mechanism for non-visualized information contained in the image; the non-visualized information analysis mechanism, when the object is extracted by the object capturing mechanism, parsing the object from the non-visual information And a display information generating unit that generates an object displayed on the screen corresponding to the processing valley obtained by the non-visual information analyzing unit. Further, according to the present invention, the present invention provides a method for embedding non-visualized information, which is embedded in a non-visualized information at a specific position of an acquired image, which is characterized by: image resolution The step of obtaining the object and the position information contained in the image; the step of setting the object to be buried is to set the image as the object to be buried by the object obtained by the image analysis step; The visual information embedding step embeds the non-visualized information corresponding to the object by the surrounding object obtained by the embedding of the #image object setting step. Further, according to the present invention, the present invention provides a method for recognizing, and the method for recognizing the object and the non-visual information contained in the image obtained by the invention is: the package: the object operation: Taking steps, which are included in the above image: 1: visually, the buried area of the poor condition is extracted; and the non-visualized information analysis is performed by taking the above-mentioned object extraction step to take out the object. In the case of 'fourth' visual information analysis, the processing corresponding to the object and the display information generation (4)' are generated corresponding to the processing content obtained by the non-visual information analysis step, and the object displayed on the screen is generated. Further, according to one of the present inventions, the present invention provides an embedding program for causing a computer as an image analysis mechanism, an object object setting mechanism, and a non-visual object included in the buried device. Information is embedded in the various agencies of the organization to function. According to the present invention, the present invention provides an acquaintance program, which enables the computer to function as an object accommodating mechanism, a non-visual information analyzing mechanism, and a display information generating mechanism included in the s-bearing device. In the present invention, the present invention can provide a high-precision content service that quickly embeds or captures non-visualized information and has excellent added value. [Embodiment] Other objects, features and advantages of the present invention will become more apparent from the detailed description of the appended claims. &lt;Regarding the Invention&gt; According to one aspect of the present invention, for example, an image or image displayed on a phase picture or a photo including various print media such as paper or film, Mingxin 160697.doc 201237802, poster, card, etc. , through the processing that is invisible to the naked eye, embeds non-visual information (marking) around the object (object information) contained in the image, by digital camera or mobile terminal (mobile phone, smart phone (registered trademark) a camera such as a camera set up to capture a part of the image, image, print medium, etc., and then import images, images, etc., such as a personal computer or a mobile terminal, a registered trademark financial board, and the like. And perform round image processing to recognize the embedded mark and pick up the object. According to an aspect of the present invention, even if it is a model in which the electric terminal or the sex is limited, such as a mobile terminal device, it is possible to realize the combination of the mark embedding method and the image processing which can recognize the embedded mark. Further, according to an aspect of the present invention, for example, information on the object to be manipulated is obtained from the information of the recognized mark, and the object of the object of (4) is processed based on the acquired buried information. That is, according to one aspect of the present invention, a content providing system is provided which performs a series of actions for self-observing non-visualized information and even for one of the processing contents of the object in FIG. Further, the non-visual information system may be information on a state in which the user's retina cannot be perceived, or may be information such as a state in which the user's retina is perceptible but the user's brain cannot recognize it as information. Moreover, the non-visualization information may be a high-frequency part or a low-frequency part of the brightness (light and dark) of each pixel in the image, or may be a text message that can be directly recognized, or may be a mark or a number or a mark. , the pattern, the color is called the dimension code, the 2D code, etc. 'and further can be the combination q of the above, as the coding information, for example, the QR coded 2D barcode can be used, etc., the coding of the embodiment I60697.doc 201237802 Information is not limited to QR codes. For example, you can use jAN code, or ITF code, NW-7, CODE39, CODE128, UPC, PDF417, CODE49,

Data Matrix、Maxi Code等之條碼資訊等。 以下,使用圖面對適宜進行本發明之使用非視覺化資訊 之内容提供系統、非視覺化資訊之埋入裝置、認識裝置、 埋入方法、認識方法、埋入程式及認識程式之形態進行說 明。又,本實施形態中說明之r圖像」,包含相片等i張 圖像’亦包含映像中之連續之圖框單位之圖像。又,本實 施形態中之内容提供系統例如揭示有AR(Augmented Reality :擴增實鏡)系統’但本發明並非限定於此,其他 内容服務亦可全部適用。 &lt;使用視覺化資訊之内容提供系統&gt; 首先’使用圖式對本實施形態之使用視覺化資訊之内容 k供系統之概要進行說明。圖1係顯示本實施形態之内容 知:供系統之概略構成之一例的圖。圖1所示之内容提供系 統1 〇係構成為包含被輸入之第1印刷媒體i丨、第1圖像資訊 取得裝置12、埋入裝置13、第2印刷媒體14、第2圖像資訊 取得裝置1 5、及認識裝置丨6。 第1印刷媒體11係列印有埋入非視覺化資訊前之圖像。 内容提供系統1 〇中可使用複數個第1印刷媒體u (圖1之例 為第1印刷媒體11 -11、11 _2)。又,本實施形態中作為一 例’揭示有列印使用埋入裝置13而產生對第1印刷媒體i j 埋入有非視覺化資訊之第2印刷媒體14之例,但本發明並 非限定於此,例如亦可直接取得圖像、對所取得之圖像埋 160697.doc 201237802 入之圓像等而產生第2印刷媒體 入非視覺化資訊、列印埋 14 八有物杜—^態中’第1印刷媒體11上列印之圓像等令 二ΓΓ本發明中’物件係指例如基於非視覺化資訊中 3 4進行動作、或用以顯示關聯資訊之物體資 L❹人或_(亦包含臉部或手4等局部物件等)、 二匕木乂通工具、桌子、椅子、帽子、眼鏡、鞋 包#|造物等可與背景分開之所有物件等。 第!圖像資訊取得裝置12拍攝未包含非視覺化資訊之約 印刷媒體11,而&amp;m &gt; 于作為圖像資訊。又,作為第1圖像資 sfl取得裝置12,例如可 例如了使用相機12-1(攝像機構)或掃描器 12.2專(資訊讀取裝置),但本發明並非限^於此可使用 具有可取得圆像資訊之功能之所有裝置形態,例如附帶複 寫機或相機功能之行動終端器、遊戲機及平板pc(per_i Computer:個人電腦)等。 埋入裝置13係對自第!印刷媒體i 1等輸入之圖像資料, 於認識裝置16中選擇要執行裁切等處理之對象物件,於該 物件之特定周圍埋入預設之非視覺化資訊,並輸出埋入之 圖像。 又’埋人裝置13亦可將埋人非視覺化資訊後之圖像作為 印刷媒體(第2印刷媒體14)而輸出,亦可原樣輸出圖像。’ 又’埋入裝置13可使用例如泛用之PCl;M或行動終端器 13-2等,但本發明並非限定於此。再者,行動終端器13_2 中設置有作為第1圖像資訊取得裝置12之内建相機12_3, 160697.doc •10· 201237802 藉此可直接取得第1印刷媒體丨丨之圖像資料。又,關於埋 入裝置13之具體之功能構成等將於後述。 第2印刷媒體14係本實施形態之埋入有非視覺化資訊之 印刷媒體。又,於第2印刷媒體14中埋入有例如對特定物 件進行裁切或特定動作等之非視覺化資訊。於圖丨中,第2 印刷媒體14之例即第2印刷媒體丨“丨、14_2分別對應於第ι 印刷媒體11-1、11·2。 第2圖像資訊取得裝置15係列印以與上述第丨圖像資訊取 得裝置12相同之方式對第2印刷媒體14取得圖像資料。 又’作為第2圖像資訊取得裝置15,例如可使用相機叫 或掃描器15-2等,但本發明並非限定於此,例如可使用具 有可取得圖像資訊之功能之所有裝置形態,例如具備複寫 機或相機功能之行動終端器、遊戲機及平板1&gt;(:等。 認識裝置16使用自第2圖像f訊取得裝置15所獲得之圖 像資料,認識圖像中所含之非視覺化資訊,自經 視覺化資訊裁切圖像資料中所含之特定物件。&amp;,裁切係 指例如自圖像中除去不需要之部分等而僅摘取對象物件 (亦包含物件之粗略之外框等)。 又’認識機構⑽賴取之物件執行與預設或非視覺化 資对所含之指示内容對應之動作。χ,作為對物件之處 ::二了如於認識裝置16之畫面上使物件移動、旋轉、 =二縮小二置換為其他物件、進行其他之不同圖像或文 =顯不、曰訊輸出、關聯資訊之顯示、乃至於該等之組 合等。 160697.doc 201237802 又w識裝置16可不對掘取之物件立即進行動作,而基 於例如使用者等之二或扣+咨% 一 #日不資讯’對該對象物件執行處 理又作為一 人彳日示資訊,例如包含對該認識裝置丨6之 風壓(例如向麥克等之音訊輸入機構吹氣、實際吹風等)、 或音訊(語音、音樂及特定話語(例如咒語或吆喝聲等)、光 (例如曝瞩陽光等)、機器之旋轉(例如包含原地旋轉、振 動、保持水平或垂直等動作)、壓力(觸碰畫面、拖曳等), 及除此之外根據溫度或時間、地點及天氣等環境條件等而 獲得之資訊,或亦可組合該等資訊中之數個,於該情形 下,亦可依據複數個二次指示資訊之指示順序(例如旋轉 後吆喝等)、或根據針對每個指示資訊所預設之優先度等 而設定最終之指示資訊。 又,認識裝置16例如可使用泛用之pcmj或行動終端器 16-2專,但本發明並非限定於此。又,行動終端器1 6-2中 預設有作為第2圖像資訊取得裝置丨5之内建相機丨5_3,藉 此可直接取得第2印刷媒體η之圖像資料。又,關於認識 裝置16之具體之功能構成等將於後述。 &lt;埋入裝置13 :功能構成例&gt; 此處’使用圖式對上述本實施形態之埋入裝置丨3之功能 構成例進行說明。圖2係顯示本實施形態之埋入裝置13之 功月b構成之^例的圖。 圖1所示之埋入裝置13構成為包含輸入機構21、輸出機 構22、儲存機構23、圖像取得機構24、圖像解析機構25、 埋入對象物件設定機構26、埋入資訊設定機構27、埋入資 160697.doc •12- 201237802 訊產生機構28、非視覺化資訊埋入機構29、收發機構3〇、 及控制機構31等。 輸入機構21接欠來自使用者等之圖像取得指示或圖像解 析扣不、埋入對象物件設定指示、埋入資訊設定指示 '埋 入資訊產生指示、非視覺化資訊埋入指示及收發指示等之 各種指示之開始/結束等之輸入。又,輸入機構21若為例 如PC等泛用之電腦’則包含鍵盤或滑鼠等之指標設備,若 為行動終端器等則包含各操作按鈕群等。 又,輸入機構21例如亦可具有輸入藉由數位相機等之攝 像機構等所拍攝之圖像或映像等之功能。此情形時,攝像 機構可内建於埋入裝置13,亦可為外部之功能構成。再 者,輸入機構21亦可具有輸入音訊等之音訊輸入功能。 輸出機構22係進行藉由輸入機構21輸入之内容、或基於 該輸入内容而執行之内容等之輸出。具體而言,輸出機構 22係進行取得之圖像或圖像解析結果、埋入對象物件設定 結果、經設;t之非視覺化資訊、經產生之非視覺化資訊、 埋入(合成)有非視覺化資訊之埋入圖像、及各構成中之處 理結果等之畫面顯示或音訊輸出等。又,輸出機構22包含 顯示器或揚聲器等。 進而,輸出機構22可具有印表機等之列印功能,亦可將 上述各輸出内容列印於例如紙或明信片、海報等各種印刷 媒體等上,作為上述第2印刷媒體14提供給使用者等。 儲存機構23係儲存本實施形態令必要之各種資訊、或埋 入處理執行時或執行後之各種資料。具體而言,儲存機構 160697.doc -13- 201237802 23係取得藉由輸入或預先儲存之由圖像取得機構24所取得 之藉由拍攝等所獲得之1個或複數個圖像或映像。又,儲 存機構23係儲存由圖像解析機構25解析之結果或埋入對象 物件設定機構26之判定結果、埋入資訊設定機構27之設定 内谷藉由埋入資5凡產生機構28所產生之埋入資訊、及藉 由非視覺化埋入機構29所埋入之圖像等。又,儲存機構23 可於必要時讀出儲存之各種資料。 圖像取得機構24係取得包含成為埋入非視覺化資訊之對 象之物件之圓像或映像等。又,圖像或映像等可為例如藉 由相機等之攝像機構所獲得之圖像或映像等,亦可為應用 於目錄或亘傳手冊、相片、卡片、貼紙、雜諸等書籍類、 商品包裝(亦包含箱子)或說明書等之圖像。又,圖像取得 機構24亦可經由收發機構3〇而由連接於通信網路之外部裝 置取得所拍攝之資訊或資料庫等中儲存之圖像或映像等,、 又,亦可經由輸入機構21使用由使用者等利用相機等實際 拍攝之圖像。 ^ 圖像解析機構25係解析由圖像取得機構以取得之圖像、 解析圖像中所含之内容。具體而+,街^ 旲媸而S取付物件顯現於圖像 之哪—部分(位置及區域)、或物件於映像中如何移動等 之物件資訊及座標等之物件位置資訊。 八 闕体螂斫機構25亦可在取得物件時 爭預先料預狀複數個物件圖料,使㈣= 物配等而擷取物件。又’圖像解析機構25在例 ‘”、之情形時,自該人物之臉部特徵部分進行臉 160697.doc •14· 201237802 檢測,亦可將臉部之特徵量數值化,而藉由其結果進行人 物認定。 進而,圖像解析機構25係亦可使用由使用者等利用輸入 機構21等所指定之物件之輪廓(外框)資訊而擷取物件。 埋入對象物件設定機構26基於藉由圖像解析機構以所解 析之結果,判定該圖像資料中是否包含物件,或當圖像資 ;斗中匕3物件之情形時,判定該物件是否為預設之非視覺 化資訊之埋入對象,並自判定結果設定埋入對象物件。 又,埋入對象物件設定機構26當圖像資料中存在複數個物 件之情形時,將複數個物件中至少一個物件設定為埋入對 象物件。此情形時,例如若為預設之優先度或該物件相對 於晝面全體之顯示區域之位置、映像,則可藉由該物件所 顯不之時間等而任意設定埋入對象物件,且亦使該等埋入 詳細資訊儲存於儲存機構23中。 此處,於埋入對象物件設定機構26中,對於圖像中所含 之物件疋否為埋入非視覺化資訊之對象,亦可例如藉由使 用者等於儲存機構23等中儲存預設之埋入判定資訊,並使 用該埋入判定資訊進行埋入判定,或亦可對該物件之動作 或附加資訊搜尋是否已儲存於儲存機構23,若已儲存附加 資訊,則判定為埋入對象物件。 因此,例如若根據由圖像解析機構25解析圖像之結果, 而解析出圖像之—部分包含風車或㈣、商品等物件時, 埋入對象物件設定機構26判定與該物件相關之附加資訊是 否已储存於儲存機構23 ,若存在與該物件相關之附加資 160697.doc 15 201237802 訊,則判定該物件為埋入對象物件。又,埋入内容(附加 資訊)例如可於埋入資訊設定機構27等中設定’且將設定 之内容儲存於儲存機構23。 又,上述之本實施形態中,乃基於圖像解析機構25之解 析結果進行埋入對象物件之設定,但本發明並非限定於 此,亦可使用例如輸出機構22等將圖像資料顯示於晝面, 使用者等自所顯示之圖像資料中所含之丨個或複數個物件 中,使用輸入機構21選擇埋入對象物件。 埋入資訊設定機構27對於要對由埋入對象物件設定機構 26所設定之物件埋入何種資訊作為附加資訊,而設定具體 之資訊内容及顯示手法(例如畫面顯示(包含大小或位置等 詳細資訊)、音訊輸出及列印輸出等)。具體而言,埋入資 訊設定機構27在例如物件為人物之情形時,對該物件設定 使其進行何種動作之動作指示資訊(旋轉、移動、放大、 縮小、置換成其他物件等),或設定該人物之姓名、年 齡、性別、身高、興趣及經歷等,進而設定所設定之各種 資訊之顯示手法卜又,物件為錢包或服飾等時,設定其 品牌名稱或商品名肖、價格、網站或部諸等之位址等, 進而設定所言免定之各種資訊之顯示手法等。χ,物件若為 書籍’歧該書籍之書名、作者、出版日冑、價格及與作 者相關之資訊等’進而設定所設定之各種資訊之顯示手 法。又,作為由埋入資訊設定機構27所設定之附加資訊, 包含映像及圖像等。 又,埋入資訊設定機構27係設定要以何種形態附加資 160697.doc • 16 · 201237802 讯°例如’設定附加資訊為特定之加密之文字、圖案或記 號及編碼資訊等。又,如為編碼資訊等時,為可取得對 應於編碼之資訊,較好為於認識裝Ϊ 16侧設置有對應之 資料庫。如上述,彳自複數之形態設定資訊附加之形:, 故可根據埋人對象之圖像内容而選擇並設^適當之埋入資 埋入貝Λ產生機構28係產生包含使埋入對象物件進行何 種動作等(動作指示資訊)之處理内容之埋人資訊。又,埋 入資訊產生機構28可將埋人資訊產生為直接文字資訊,亦 可產生為編碼資訊。具體而言,埋入資訊產生機構28例如 為使埋入資訊在實際上提供給使用者之印刷媒體或圖像等 上難以被相,以埋人對象之原始圖像之顏色資訊等為基 準’將埋人資訊產生為例如使用低頻部與高頻部之圖像, 或產生為僅使用低頻部或高頻部之圖像。 此處,本實施形態之低頻部係指以埋人非視覺化資訊之 部分之原始圖像之明度為基準,顯示相較於其使明度降低 之部分或區域;高頻部係指以上述原始圖像之明度為基 準,顯示相較於其使明度變高之部分或區域。關於埋入資 。凡產生機構28之非視覺化資訊之細節將於後述。 又’本實施形態之非視覺化資訊係於認識裝置16側操取 物件時使用’故在埋人埋人資訊時,可將埋人資訊埋入該 非視覺化資訊之全部,亦可將埋入資訊埋入非視覺化資訊 之一部分(例如非視覺化資訊區域(裁切標記標記)内之角落 或周緣部等)。 I60697.doc 17 201237802 又,埋入資訊產生機構28係作為座標資訊取得於圖像中 之哪個位置埋入非視覺化資訊。又,本實施形態中辞例如 以成為動作對象之物件為基準,於其周圍之特定區域埋入 非視覺化資訊。即,本實施形態中,於認識裝置16中,藉 由該非視覺化資訊掌握物件之位置,而裁切(擷取等)物 件。 因此’本實施形態之非視覺化資訊,於認識裝置16側例 如具有擷取所要裁切之物件之用途,及於必要時用以取得 指示對物件進行何種動作之動作指示資訊或其他附加資訊 等之用途。 又,本實施形態中,於埋入埋入資訊時,較好為例如於 圖像中所顯示之物件之位置上或其周邊,埋入與該物件對 應之埋入資訊。此情形時,以藉由圖像解析機構乃所獲得 之物件位置資訊為基準,對對象圖像進行埋入資訊之埋 入0 即,根據本實施形態,並非對圖像全體賦予一個埋入資 訊,而可於適當之場所埋入複數個非視覺化資訊。又,非 視覺化埋入機構2 9亦可埋入用則吏原始圖像中所含之複數 個或全部之物件動作之非視覺化資訊。 此外’埋人對象之圖像如為映像,非視覺化資訊埋入機 構29可對播放中之映像中之物件之移動追隨移動而於該 物件上埋入非視覺化資訊。’非視覺化資訊埋入機構Μ 可在每當埋人對象圖像被輸人時相對拍攝圖像進行埋入處 理,並使該埋人圖像依序顯示,或者作為第2印刷媒㈣ 160697.doc •18. 201237802 輸出。 收發機構3 0係介面,其係使用通信網路等自可連接之外 部裝置取得所需之外部圖像(拍攝圖像或埋入圖像等),或 取得實現本發明之非視覺化資訊埋入處理之執行程式等。 又,收發機構30可向外部裝置發送在埋入裝置13内產生之 各種資訊。 控制機構31進行埋入裝置13之各構成部全體之控制。具 體而s ’控制機構3 1基於例如來自輸入機構2 1之由使用者 等下達之指示等,進行圖像取得或圖像解析、是否為埋入 對象物件之判斷 '非視覺化資訊之設定及埋入等之各項處 理之控制等。又,亦可預先設定或預先產生埋入資訊設定 機構27之非視覺化資訊或埋入資訊產生機構28之非視覺化 資訊’並儲存於儲存機構23。 &lt;認識裝置16 :功能構成例&gt; 接著,使用圖式對上述本實施形態之認識裝置丨6之功能 構成例進行說明。圖3係顯示本實施形態之認識裝置丨6之 功能構成之一例的圖。 圖3所示之認識裝置16構成為包含輸入機構41、輸出機 構42、儲存機構43、埋入圖像取得機構44、物件擷取機構 45、非視覺化資訊解析機構46、顯示資訊產生機構、收 發機構48、及控制機構49。 輸入機構41接受來自使用者等之埋入圖像取得指示、非 視覺化資訊擷取指示、非視覺化資訊解析指示、顯示資訊 產生指不、或收發指示等各種指示之開始/結束等之輸 160697.doc 201237802 入。又,輸入機構4!若為例如PC等泛用之電腦,則包含鍵 盤、滑鼠等之指標設備,若為行動終端器等則包含各操作 按紐群等。又,輸入機構41亦具有輸入例如藉由數位相機 等之攝像機構等所拍攝之圖像或映像之功能。且,上述攝 像機構可内建於認識裝置16,亦可為外部之功能構成。再 者’輸入機構亦可具有輸人音訊等之音訊輸入功能。 此外,輸入機構可自上述第2印刷媒心即紙或明信 片 '海報、相片及卡片等印刷媒體取得埋入圖像。該情形 時,具備使用相機等之攝像機構或掃描器功能等而讀取資 料之功能。 又’輸出機構42係輸出藉由輸人機構41輸人之内容、或 二輸入内办所執订之内容等。具體而言,輸出機構U係 子藉由顯示資訊產生機構47獲得之圖像或映像上顯示 ^件之附加資訊等。又’輸出機構42包含顯示器或揚聲 器寺。 再者’輸出機構42可具有印表機等之列印功能,亦可將 尊久錄\件之實際動作内容等之各輸出内容列印於例如紙 Ρ刷媒體上,而提供給使用者等。 覺=機構43係儲存本實施形態中必要之各種資訊或非視 認識處理之執行時或執行後之各種資料。具體而 ^儲存機構43係儲存由埋人圖像取得機構44取得之埋入 由物㈣取機構45所取得之非視覺化資訊(標記 覺化解析機構46所解析之非視覺化資訊或埋入資 訊,或由顯示資訊產生機構47所產生之顯示内容等。 160697.doc •20- 201237802 再者,儲存機構43可儲存相對於藉由非視覺化資訊 機構46所解析之資料之關聯資訊。例如料視覺化資訊為 某編碼資訊(亦包含文字編碼及2維碼等)等時,將對應於該 編碼資訊之各種資料(例如對應於編碼資訊之物件之詳細 資訊(文字、映像、圖像及音訊等);將資料顯示於畫面時 之大小、帛色、時間、位置及動作内容等)預先儲存於健 存機構43H儲存機_可在取得編碼料或視其他 需要而讀出所儲存之各種資料。 埋入圖像取得機構44係自第2圖像資訊取得裝置Η取得 對應於第2印刷媒體14之圖像資料。又,當儲存機構^中 預先儲存有處理對象之圖像資料時,埋入圖像取得機構44 可自儲存機構43取得處理對象之圖像資料,亦可經由收發 機構48而自連接於通信網路之外部裝置取得埋入圖像。 又,埋入圖像亦包含映像。 物件擷取機構45係擷取埋入圖像中所含之物件。具體而 5,物件擷取機構45係例如對所輸入之埋入圖像進行某特 定頻率之過濾,而取得圖像中所埋入之非視覺化資訊。藉 由特定頻率進行過濾,可易於擷取高頻部及低頻部之區 域。此外,本發明並非限定於此,亦可使用其他手法。 又’圖像中如有複數個非視覺化資訊,則擷取全部之非視 覺化資訊。 接著’物件擷取機構45自圖像_所含之非視覺化資訊之 埋入位置操取物件之外框(邊緣),且對應於該擷取之物件 之外框(邊緣),自背景分離並操取位於其内側之物件。 160697.doc •21· 201237802 例如,物件擁取機構45係藉由設定作為非視覺化資訊之 低頻部與高頻部之組合,擷取物件之周圍之位置資訊,並 ,於所操取之位置資訊而摘取該物件。又,本實施形態並 =定於上述低頻部與高頻部,亦可基於例如色差及明度 差等而操取物件。 又’埋入圖像中如有複數個非視覺化資訊時,物件擁取 機構45可自該全部之非視覺化資訊令操取物件亦可榻取 預先決疋之至少一個非視覺化資訊(例如位於埋入圖像之 最左邊之非視覺化資訊或最大之非視覺化資訊)。 又物件操取機構45亦取得表示自哪個位置操取出非視 覺化資訊之非視覺化資訊操取位置資訊。物件操取機料 將以上述方式取得之各種資訊儲存於儲存機構 再者,本實施形態中’藉由操取非視覺化資訊,可取得 〇物件之方位(角度)或斜度等。因此,物件操取機構μ可 自藉由非視覺化資訊獲得之方位或斜度等之資訊而修正對 象物件之角度或斜度。 、見覺化資解析機構46係解析由物件操取機構Μ獲得 之非視覺化資訊中是否包含埋入資訊,若含有埋入資訊 時,取得該埋入資訊之具體内容。 例如’若非視覺化資訊巾包含作為埋入資訊之文字⑼ 如旋:、向右移動等),則取得該文字作為附加資訊。 又,若作為埋入資訊而含有編碼時,取得該編碼之資訊, 將該取得之内容(例如編碼ID#)作為關鍵字,經由儲存機 構43或收發機構48,搜尋連接於通信網路之預設舰器或 I60697.doc •22· 201237802 資料庫等外部裝置,搜 對物件之動们… 找到與關鍵字對應之針 丁视仟之動作指不資訊或顯示 資訊)’則取得該資訊。 4各種處理内容(附加 =’本貫施形態之非視覺化f訊解析機構μ亦可且 取條碼之條碼讀取器等 八β 非#譽^ , 义口貝取功此。於該情形時,例如若 非視覺化資訊為2維條碼,佬用鉻 得附加資訊。 使用條碼讀取器自該2維條喝取 得之附加資訊,產生針;:=覺化“解析機構46獲 生針對里面上顯示之物件之動作内容、 或基於顯示手法之顯示内容等…對畫面之顯示可於畫 置對話框(開新視窗)而顯示動作,亦可於對應 之物件所顯示之位置上顯示,再者亦可藉由音訊而輸出。 ,又’作為顯示資訊產生機構47之處理’例如可如上述進 行自背景消除物件之處理’或於該消除之位置對物件進行 顯不、放大、縮小、旋轉、移動,或置換成其他物件等。 因此’可對圖像中所含之特定物件進行各個動作並加以顯 不更且,顯不資訊產生機構47可使所要顯示之物件重疊 :背景圖像’或進行空間認識下之角度或斜度之修正理 等。 &lt;此處,本實施形態中’有關使哪個物件進行何種動作之 貝成’例如可自位於該物件周圍之非視覺化資訊取得。因 此’本實施形態中’只要針對圖像中所欲執行動作之物件 而僅於其周圍埋人非視覺化資訊,即可在埋人裝置13側設 定該每個物件單獨之處理。 ° 160697.doc -23- 201237802 又’顯示資訊產生機構47可將所取得之非視覺化資訊原 樣視覺化顯示’亦可自儲存機構43或外部裝置等取得與非 視覺化資訊對應之附加資訊,並顯示該取得之附加資訊。 更且,藉由非視覺化資訊解析機構46獲得之結果若為編 碼’則顯示資訊產生機構47基於上述之編碼⑴等而產生顯 示資訊’以基於自儲存機構43等取得之針對每個附加資訊 設定之於畫面上顯示時之大小、顏色、時間、位置及動作 内容等而使非視覺化資訊顯示…若對象物件為映像而 處於移動狀態之情形時,可追隨該物件之位置而顯示附加 資訊等,,亦可固$顯示於最初於畫面上顯示之位置。 收發機構48係介面,其係使用通信網路等自可連接之外 部裝置取得所需之外部圖像(拍攝圖像等)’或取得實現本 實施形態之非視覺化資訊認識處理之執行程式等。又,收 發機構48可向外部裝置發送在認識裝置“内產生之各種資 訊。 控制機構49係進行認識裝置16之各構成部全體之控制。 具體而言,控制機構49基於例如來自輸入機構“之由使用 者下達之指示等,進行埋人圖像之取得或非視覺化資訊之 擷取、非視覺化資訊之解析、及顯示資訊之產生等之各項 處理之各項控制。 藉由上述裝置構成可有效率地進行#訊取得,而提供附 加價值性優良且高精度之圖像。 &lt;埋入裝置13、認識裝置16 :硬體構成&gt; 此處,於上述埋入裝置13及認識裝置16中,藉由產生可 160697.doc -24 · 201237802 使各功能於電腦中執行之執行程式(埋入程式、認識程 式),並對例如泛用之個人電腦及伺服器等安裝該執行程 式,可實現本實施形態之非視覺化資訊之埋入處理及認識 處理等。 此處,使用圖式對本實施形態之可實現非視覺化資訊之 埋入處理之電腦之硬體構成例進行說明。圖4係顯示本實 施形態之可實現非視覺化資訊之埋入處理及認識處理之硬 體構成之一例的圖。 圖4之電腦本體構成為包含輸入裝置51、輸出裝置52、 驅動裝置53、輔助記憶裝置54、記憶裝置55、進行各種控 制之 CPU 56(Central Processing Unit :中央處理器)、及網 路連接裝置5 7 ’該等以系統匯流排而相互連接。 輸入裝置51具有使甩者等所操作之鍵盤及滑鼠等之指標 設備,輸入來自使用者等之程式執行等各種操作信號。 輸入裝置51具有自相機等攝像機構輸入所拍攝之圖像 之圖像輸入單元。 輸出裝置52包含顯示器’顯示操作用以進行本實施形態 處理之電腦本體所需之各種視窗或資料等,可藉由 56所具有之控制程式而顯示程式之執行經過或結果等。Bar code information such as Data Matrix, Maxi Code, etc. Hereinafter, a description will be given of a form of a content providing system using non-visualized information, a non-visual information embedding device, an acknowledgment device, an embedding method, an acknowledgment method, an embedding program, and an acquaintance program, which are suitable for carrying out the present invention. . Further, the r image "described in the present embodiment" includes an i-picture such as a photo and includes an image of a continuous frame unit in the image. Further, the content providing system in the present embodiment discloses, for example, an AR (Augmented Reality) system, but the present invention is not limited thereto, and other content services may be applied. &lt;Content Providing System Using Visualized Information&gt; First, the outline of the system using the content k of the visual information of the present embodiment will be described using a drawing. Fig. 1 is a view showing an example of a schematic configuration of a system in the present embodiment. The content providing system 1 shown in FIG. 1 is configured to include the input first printing medium i, the first image information acquiring device 12, the embedding device 13, the second printing medium 14, and the second image information acquisition. The device 15 and the device 丨6. The first print medium 11 series is printed with an image embedded in the non-visualized information. A plurality of first print media u can be used in the content providing system 1 (the first print media 11-11, 11_2 in the example of Fig. 1). Further, in the present embodiment, as an example, the example in which the second printing medium 14 in which the non-visual information is embedded in the first printing medium ij is printed by using the embedding device 13, the present invention is not limited thereto. For example, it is also possible to directly obtain an image, and to embed a circular image of 160697.doc 201237802 into the acquired image to generate a second printing medium into non-visualized information, and to print a buried image. 1 A circular image printed on the printing medium 11 or the like. In the present invention, an object refers to an object that is operated based on, for example, a non-visualized information, or an object for displaying related information, or a face. Part or hand 4, etc.), 匕木乂通工具, table, chair, hat, glasses, shoe bag #|Creation and other objects that can be separated from the background. The first! The image information obtaining means 12 captures the approximate print medium 11 which does not contain non-visualized information, and &m &gt; as image information. Further, as the first image asset sfl acquiring device 12, for example, a camera 12-1 (image pickup mechanism) or a scanner 12.2 (information reading device) can be used, but the present invention is not limited thereto. All device types that have the function of round image information, such as mobile terminal with copying machine or camera function, game console and tablet pc (per_i Computer: personal computer). The embedding device 13 is self-contained! The image data input by the printing medium i 1 or the like is selected by the recognition device 16 to perform an object such as cutting, and a predetermined non-visual information is embedded in a specific surrounding of the object, and the embedded image is output. . Further, the buried device 13 can output an image in which the non-visual information is buried as a print medium (the second print medium 14), and can output the image as it is. The 'buried device 13' can use, for example, a general-purpose PC1; M or a mobile terminal 13-2, but the present invention is not limited thereto. Further, the mobile terminal device 13_2 is provided with the built-in camera 12_3, 160697.doc • 10· 201237802 as the first image information acquiring device 12, whereby the image data of the first print medium can be directly obtained. Further, the specific functional configuration and the like of the embedding device 13 will be described later. The second print medium 14 is a print medium in which non-visual information is embedded in the present embodiment. Further, non-visual information such as cutting or specific operation of a specific object is embedded in the second printing medium 14. In the figure, the second print medium 丨 "丨, 14_2 corresponds to the first print media 11-1, 11·2, respectively. The second image information acquisition device 15 is printed with the above. In the same manner as the second image information obtaining device 12, image data is acquired on the second printing medium 14. Further, as the second image information acquiring device 15, for example, a camera call or a scanner 15-2 may be used, but the present invention The present invention is not limited to this. For example, all device configurations having a function of acquiring image information, for example, a mobile terminal having a copying machine or a camera function, a game machine, and a tablet 1&gt; (: etc. can be used. The recognition device 16 is used from the second. The image data obtained by the image acquisition device 15 recognizes the non-visual information contained in the image, and cuts the specific object contained in the image data from the visualized information. & For example, the object is removed from the image, and only the object object (including the rough outer frame of the object, etc.) is extracted. Further, the object of the object (10) is executed and preset or non-visualized. The action corresponding to the instruction content.χ As the object to the object:: Second, as in the screen of the recognition device 16, the object moves, rotates, = two reductions and two replacements for other objects, other different images or text = display, output, related information The display, or the combination of the above, etc. 160697.doc 201237802 The device 16 can be operated immediately without the object being excavated, and is based on, for example, the user or the like. The processing of the object object is performed as a person's daily information, for example, including the wind pressure of the cognitive device (6 (for example, blowing into an audio input mechanism such as a microphone, actual air blowing, etc.), or audio (speech, music, and specific). Discourse (such as a spell or a slap, etc.), light (such as exposure to sunlight, etc.), rotation of the machine (for example, including in-situ rotation, vibration, horizontal or vertical movement), pressure (touching the screen, dragging, etc.), and Other than the information obtained from environmental conditions such as temperature, time, location and weather, or a combination of the information, in this case, may also be based on a plurality of secondary indicators. The final instruction information is set in the order of the information (for example, after the rotation, etc.), or according to the priority set for each instruction information, etc. Further, the cognitive device 16 can use, for example, a general-purpose pcmj or a mobile terminal 16 -2, but the present invention is not limited thereto. Further, the mobile terminal device 1 6-2 is provided with a built-in camera 丨5_3 as the second image information acquiring device ,5, whereby the second printing can be directly obtained. The image data of the media η. The specific functional configuration of the acquaintance device 16 will be described later. <Embedded device 13: Functional configuration example> Here, the embedding device of the above-described embodiment is used. An example of the functional configuration of the crucible 3 will be described. Fig. 2 is a view showing an example of the configuration of the power month b of the embedding device 13 of the present embodiment. The embedding device 13 shown in FIG. 1 includes an input mechanism 21, an output mechanism 22, a storage mechanism 23, an image obtaining unit 24, an image analyzing unit 25, an embedded object setting unit 26, and a buried information setting unit 27. The burial mechanism 160697.doc • 12- 201237802 is used to generate the mechanism 28, the non-visual information embedding mechanism 29, the transceiver unit 3, and the control unit 31. The input unit 21 receives an image acquisition instruction or an image analysis button from the user or the like, an embedded object object setting instruction, a buried information setting instruction, an embedded information generation instruction, a non-visualization information embedding instruction, and a transmission/reception instruction. Enter the start/end of various instructions, etc. In addition, if the input means 21 is a computer for general use such as a PC, a pointing device such as a keyboard or a mouse is included, and if it is a mobile terminal, the operation button group or the like is included. Further, the input unit 21 may have a function of inputting an image or a map imaged by an image pickup unit such as a digital camera. In this case, the camera mechanism can be built in the embedding device 13, or it can be configured as an external function. Further, the input unit 21 may have an audio input function for inputting an audio or the like. The output unit 22 outputs the content input by the input unit 21 or the content executed based on the input content. Specifically, the output unit 22 performs the acquired image or image analysis result, the embedded object object setting result, and the set; the non-visualized information of t, the generated non-visualized information, and the embedded (synthesized) Screen display or audio output such as embedded images of non-visualized information, processing results in each configuration, and the like. Further, the output mechanism 22 includes a display, a speaker, and the like. Further, the output unit 22 may have a printing function of a printer or the like, and the output contents may be printed on various printing media such as paper, postcards, posters, etc., and provided to the user as the second printing medium 14. Wait. The storage unit 23 stores various kinds of information necessary for the present embodiment, or various materials at the time of execution or execution of the embedding process. Specifically, the storage unit 160697.doc - 13 - 201237802 23 acquires one or a plurality of images or images obtained by the image acquisition unit 24, which are obtained by photographing or the like, which are input or stored in advance. Further, the storage unit 23 stores the result of the analysis by the image analysis unit 25 or the determination result of the embedded object object setting unit 26, and the set inner valley of the buried information setting unit 27 is generated by the burying unit 5 generating means 28. The buried information, the image embedded by the non-visual embedding mechanism 29, and the like. Further, the storage unit 23 can read various stored materials as necessary. The image acquisition unit 24 acquires a circular image or a map or the like including an object that is an object that embeds the non-visualized information. Further, the image, the image, and the like may be, for example, an image or image obtained by an image pickup mechanism such as a camera, or may be applied to a catalogue or a booklet, a photo, a card, a sticker, a book, or the like. An image of the package (also containing the box) or instructions. Further, the image acquisition unit 24 may acquire an image or a map stored in the captured information, a database, or the like by an external device connected to the communication network via the transmission/reception mechanism 3, or may be via an input mechanism. 21 An image actually taken by a user or the like using a camera or the like is used. ^ The image analysis unit 25 analyzes the image acquired by the image acquisition means and analyzes the content contained in the image. Specifically, +, the street ^ 旲媸 and S take the object to appear in the image - part (position and area), or how the object moves in the image, such as object information and coordinates of the object location information. The 阙 阙 body 25 mechanism 25 can also pre-predict a plurality of object drawings when the object is obtained, so that (4) = the object is matched and the object is taken. In the case of the image analysis unit 25, in the case of the example, the face feature is detected from the face feature portion of the character, and the feature amount of the face can be quantified by As a result, the image analysis unit 25 can extract the object using the outline (outer frame) information of the object specified by the input mechanism 21 or the like by the user or the like. The embedded object object setting mechanism 26 is based on the borrowing. The image analysis mechanism determines whether the object data is included in the image data, or when the image is in the image, and determines whether the object is a preset non-visual information buried. When the object is placed, the object to be embedded is set as the result of the determination. When the plurality of objects are present in the image data, the embedded object object setting unit 26 sets at least one of the plurality of objects as the object to be embedded. In this case, for example, if the priority is preset or the position or image of the object relative to the display area of the entire face, the time can be arbitrarily set by the time displayed by the object or the like. The object object is entered, and the embedded detailed information is also stored in the storage mechanism 23. Here, in the embedded object object setting unit 26, whether the object contained in the image is buried or not is visualized. For example, the user may store the preset embedding determination information by using the storage mechanism 23 or the like, and use the embedding determination information to perform the embedding determination, or may search for the action or additional information of the object. It has been stored in the storage unit 23, and if the additional information has been stored, it is determined that the object is embedded. Therefore, for example, if the image is analyzed by the image analysis unit 25, the image is partially analyzed to contain the windmill or (4) In the case of a product or the like, the embedded object object setting unit 26 determines whether the additional information related to the object has been stored in the storage unit 23, and if there is an additional charge associated with the object, the object is determined to be Further, the embedded content (additional information) can be set, for example, in the embedded information setting unit 27 or the like, and the set contents are stored in the storage unit 23. Further, in the above-described embodiment, the setting of the embedded object is performed based on the analysis result of the image analysis unit 25. However, the present invention is not limited thereto, and the image data may be displayed on the image using, for example, the output unit 22 or the like. The user or the like selects the embedded object from the one or more objects included in the displayed image data. The embedded information setting mechanism 27 sets the mechanism for the object to be embedded. 26 What kind of information is embedded in the object as the additional information, and set specific information content and display methods (such as screen display (including detailed information such as size or position), audio output and print output, etc.), specifically, When the object is a person, for example, the embedded information setting means 27 sets an action instruction information (rotation, movement, enlargement, reduction, replacement, replacement, etc.) of the object to which the action is performed, or sets the character's Name, age, gender, height, interest, experience, etc., and then set the display information of the various information set, and the object is money Isochronous or costumes, sets its brand name or trade name Shaw, price, and other various sites or part of the address, etc., and then set free said display techniques prescribed a variety of information and so on. χ, if the object is a book, the title of the book, the author, the date of publication, the price, and the information related to the author, etc., and then display the various information displayed. Further, the additional information set by the embedded information setting unit 27 includes a map, an image, and the like. Further, the embedded information setting unit 27 sets the form in which the additional information is to be added. For example, 'set the additional information to a specific encrypted character, pattern, symbol, coded information, and the like. Further, in the case of encoding information, etc., in order to obtain information corresponding to the encoding, it is preferable to provide a corresponding database on the side of the recognition device 16. As described above, the form of the morphological setting information added from the plural number is selected, and therefore, the burial object can be selected and set according to the image content of the buried object. The buried information of the processing content of the action (operation instruction information). Further, the embedded information generating unit 28 can generate the buried information as direct text information or as encoded information. Specifically, the embedding information generating means 28 is, for example, such that the embedding information is difficult to be phased on a print medium or an image actually provided to the user, and is based on the color information of the original image of the buried object. The buried information is generated, for example, by using an image of the low frequency portion and the high frequency portion, or for generating an image using only the low frequency portion or the high frequency portion. Here, the low-frequency portion of the present embodiment refers to a portion or region in which the brightness is lowered as compared with the brightness of the original image of the portion of the non-visualized information, and the high-frequency portion refers to the original The brightness of the image is the reference, showing the part or area that makes the brightness higher. About burying funds. Details of the non-visual information generated by the institution 28 will be described later. In addition, the non-visualized information of the present embodiment is used when the device is operated by the cognitive device 16 side. Therefore, when the buried information is buried, the buried information can be buried in the non-visualized information, or buried. The information is embedded in one of the non-visualized information (for example, a corner or a peripheral part of the non-visualized information area (cut mark). I60697.doc 17 201237802 Further, the embedding information generating unit 28 is embedded in the image as the coordinate information to embed the non-visual information. Further, in the present embodiment, for example, non-visual information is embedded in a specific area around the object to be operated. That is, in the present embodiment, the recognition device 16 grasps (takes, etc.) the object by grasping the position of the object by the non-visualization information. Therefore, the non-visualization information of the present embodiment has, for example, the purpose of capturing the object to be cut on the side of the recognition device 16, and, if necessary, obtaining action instruction information or other additional information indicating what kind of action is performed on the object. The purpose of the use. Further, in the present embodiment, when embedding the embedding information, it is preferable to embed the embedding information corresponding to the object, for example, at or around the position of the object displayed on the image. In this case, the embedding information is embedded in the target image based on the object position information obtained by the image analyzing means. According to the present embodiment, not one embedding information is given to the entire image. , and multiple non-visual information can be buried in appropriate places. Further, the non-visual embedding mechanism 29 may embed non-visual information for the operation of a plurality of or all of the objects contained in the original image. In addition, if the image of the buried object is an image, the non-visual information embedding mechanism 29 can follow the movement of the object in the image being played to embed the non-visual information on the object. 'Non-visualized information embedding mechanism Μ It is possible to embed the image with respect to the captured image whenever the image of the buried object is input, and display the buried image sequentially, or as the second printing medium (4) 160697 .doc •18. 201237802 Output. The transceiver 30 interface is a non-visualized information embedding the present invention by obtaining an external image (a captured image or a buried image, etc.) from a connectable external device using a communication network or the like. Enter the execution program of the processing, etc. Further, the transceiver 30 can transmit various information generated in the embedding device 13 to an external device. The control unit 31 performs control of the entire components of the embedding device 13. Specifically, the control unit 3 1 performs image acquisition, image analysis, determination of whether or not to embed an object, and setting of non-visualization information based on, for example, an instruction from a user or the like from the input unit 2 1 . Control of various treatments such as embedding. Further, the non-visualized information embedded in the information setting unit 27 or the non-visualized information embedded in the information generating unit 28 may be set in advance or stored in the storage unit 23 in advance. &lt;Identification device 16: Functional configuration example&gt; Next, a functional configuration example of the recognition device 6 of the above-described embodiment will be described with reference to the drawings. Fig. 3 is a view showing an example of the functional configuration of the cognitive device 丨6 of the present embodiment. The recognition device 16 shown in FIG. 3 includes an input mechanism 41, an output mechanism 42, a storage mechanism 43, an embedded image acquisition unit 44, an object extraction mechanism 45, a non-visualization information analysis unit 46, a display information generation mechanism, and The transceiver mechanism 48 and the control mechanism 49. The input unit 41 accepts the start/end of various instructions such as an embedded image acquisition instruction, a non-visualization information acquisition instruction, a non-visualization information analysis instruction, a display information generation instruction, or a transmission/reception instruction from a user or the like. 160697.doc 201237802 In. In addition, if the input unit 4 is a general-purpose computer such as a PC, it includes a pointing device such as a keyboard or a mouse, and if it is a mobile terminal or the like, it includes each operation button group or the like. Further, the input unit 41 also has a function of inputting an image or a picture captured by, for example, an imaging unit such as a digital camera. Further, the above-described image pickup mechanism may be built in the recognition device 16, or may be an external function. Furthermore, the input mechanism can also have an audio input function such as inputting audio. Further, the input means can obtain an embedded image from a printing medium such as a paper or a letter piece "poster, photo, and card" of the second printing medium. In this case, there is a function of reading data using an imaging mechanism such as a camera or a scanner function. Further, the output unit 42 outputs the content input by the input institution 41 or the content of the second office. Specifically, the output unit U is an image obtained by the display information generating unit 47 or an additional information displayed on the image. Further, the output mechanism 42 includes a display or a speaker temple. Furthermore, the 'output mechanism 42' may have a printing function of a printer or the like, and may also print the output contents of the actual action content of the permanent recording device, for example, on a paper brushing medium, and provide it to the user, etc. . The sensation=mechanism 43 stores various kinds of information necessary or executed after the execution of the various kinds of information necessary for the present embodiment or the non-visual processing. Specifically, the storage unit 43 stores the non-visualized information acquired by the embedded object (four) taking mechanism 45 obtained by the buried image obtaining unit 44 (non-visualized information or embedded by the marker detection mechanism 46) The information, or the display content generated by the display information generating unit 47, etc. 160697.doc • 20- 201237802 Furthermore, the storage unit 43 can store associated information with respect to the data analyzed by the non-visualized information mechanism 46. When the visual information is a piece of coded information (including text code, 2D code, etc.), etc., it will correspond to various information of the coded information (for example, the details of the object corresponding to the coded information (text, image, image, and Audio, etc.; the size, color, time, position, and action content of the data displayed on the screen, etc.) are stored in advance in the storage mechanism 43H storage machine _ can be read out in the acquisition of the code material or other needs The embedded image acquisition unit 44 acquires image data corresponding to the second print medium 14 from the second image information acquisition device. Further, when the storage mechanism is stored in advance, When the image data of the object is processed, the embedded image obtaining unit 44 can acquire the image data of the processing target from the storage unit 43, and can acquire the embedded image from the external device connected to the communication network via the transmitting and receiving unit 48. Further, the embedded image also includes an image. The object capturing mechanism 45 extracts an object included in the embedded image. Specifically, the object capturing mechanism 45 performs, for example, a specific image of the input embedded image. The frequency is filtered to obtain non-visualized information embedded in the image. By filtering at a specific frequency, the high frequency portion and the low frequency portion can be easily captured. Further, the present invention is not limited thereto and may be used. Other methods. Also, if there are multiple non-visualized information in the image, all non-visual information is captured. Then the object extraction mechanism 45 operates from the embedded position of the non-visual information contained in the image_ The outer frame (edge) of the object is taken, and corresponds to the outer frame (edge) of the captured object, and the object located on the inner side is separated from the background. 160697.doc • 21· 201237802 For example, the object fetching mechanism 45 By setting For the combination of the low-frequency part and the high-frequency part of the non-visualized information, the position information around the object is extracted, and the object is extracted from the position information acquired. Further, the embodiment is determined to be the low frequency mentioned above. The part and the high-frequency part can also acquire objects based on, for example, chromatic aberration and brightness difference. Further, when there is a plurality of non-visualized information embedded in the image, the object fetching mechanism 45 can be non-visualized from the whole. The information allows the object to be manipulated to pre-determine at least one non-visualized information (eg, non-visualized information at the far left of the embedded image or the largest non-visual information). Obtaining non-visual information operation position information indicating from which position to operate the non-visualized information. The object operation machine stores various information obtained in the above manner in the storage mechanism, and in this embodiment, Non-visual information, you can get the orientation (angle) or slope of the object. Therefore, the object manipulation mechanism μ can correct the angle or inclination of the object from the information such as the orientation or the inclination obtained by the non-visual information. The Discovery Funding Analysis Department 46 analyzes whether the non-visual information obtained by the object handling organization includes the embedded information, and if the embedded information is included, the specific content of the buried information is obtained. For example, if the non-visualized information towel contains the text (9) as the embedded information, such as a rotation: moving to the right, etc., the text is obtained as additional information. Further, when the code is included as the embedded information, the coded information is acquired, and the acquired content (for example, code ID#) is used as a key to search for a connection to the communication network via the storage unit 43 or the transceiver 48. Set up a ship or an external device such as the I60697.doc •22· 201237802 database, search for the object's movements... Find the action corresponding to the keyword, which means that the action is not information or display information) 'Get the information. 4 various processing contents (additional = 'the non-visualized f-analysis mechanism μ of the local application mode can also take the bar code reader and other eight beta non-Y reputation ^, Yikoubei take this. In this case, For example, if the non-visualized information is a 2D barcode, the additional information is obtained by using the chrome. The additional information obtained from the 2D strip is generated by the barcode reader, and the needle is generated;:=The "analysis mechanism 46 is obtained for the display on the inside" The action content of the object, or the display content based on the display method, etc. The display of the screen can be displayed in the drawing dialog box (opening a new window), and can also be displayed at the position displayed by the corresponding object, and then It can be output by audio. In addition, as the processing of the display information generating mechanism 47, for example, the processing of the object can be eliminated from the background as described above or the object can be displayed, enlarged, reduced, rotated, and moved at the position where the object is eliminated. , or replaced with other objects, etc. Therefore 'can perform various actions on the specific objects contained in the image and display them differently, and the display information generating mechanism 47 can overlap the objects to be displayed: background image 'Or the correction of the angle or the slope under the space recognition. &lt; Here, in the present embodiment, 'about what kind of action is performed on which object' can be derived from non-visual information around the object. Therefore, in the present embodiment, it is possible to set the individual processing of each object on the side of the buried device 13 as long as the non-visual information is buried around the object to be operated in the image. 160697.doc -23- 201237802 Further, the display information generating unit 47 can visually display the obtained non-visualized information as it is, and can obtain additional information corresponding to the non-visualized information from the storage unit 43 or an external device, and Further, if the result obtained by the non-visualized information analysis unit 46 is the code ', the display information generating unit 47 generates the display information based on the above-described code (1) or the like to be based on the self-storage mechanism 43 or the like. The non-visual information display is made for each additional information set to the size, color, time, position, and action content displayed on the screen... If the object is moved while being imaged, the information can be displayed following the position of the object, and the information can be displayed on the screen first. The transceiver 48 is an interface, which uses communication. The network or the like obtains an external image (captured image, etc.) required from the external device that can be connected' or obtains an execution program for realizing the non-visualized information recognition process of the present embodiment. Further, the transceiver 48 can be externally connected. The various information generated in the recognition device is transmitted. The control unit 49 controls the entire components of the recognition device 16. Specifically, the control unit 49 performs an instruction from the user, for example, from the input device. Control of the acquisition of buried images or the capture of non-visualized information, the analysis of non-visualized information, and the generation of display information. By the above-described apparatus configuration, it is possible to efficiently perform the acquisition, and to provide an image with excellent value and high precision. &lt;Embedding device 13 and recognition device 16: hardware configuration&gt; Here, in the embedding device 13 and the cognition device 16, each function is executed in a computer by generating 160697.doc -24 · 201237802 By executing the program (embedded program, acquainted with the program) and installing the execution program on, for example, a general-purpose personal computer and a server, the embedding processing and the recognition processing of the non-visual information of the present embodiment can be realized. Here, a hardware configuration example of a computer capable of embedding non-visual information in the present embodiment will be described using a drawing. Fig. 4 is a view showing an example of a hardware configuration in which embedding processing and recognition processing of non-visual information can be realized in the embodiment. The computer body of FIG. 4 is configured to include an input device 51, an output device 52, a drive device 53, an auxiliary memory device 54, a memory device 55, a CPU 56 (Central Processing Unit) for performing various controls, and a network connection device. 5 7 'These are connected to each other by the system bus. The input device 51 has an index device such as a keyboard and a mouse operated by a user or the like, and inputs various operation signals such as program execution from a user or the like. The input device 51 has an image input unit that inputs an image captured from an imaging mechanism such as a camera. The output device 52 includes a display ">displaying various windows or materials required for operating the computer body for performing the processing of the present embodiment, and displaying the execution progress or results of the program by the control program of the 56.

Serial Bus :^ = =CD_R〇M等可移動式記錄媒體58等來提供本 媒體58^=於電腦本體之執行程式。記錄程式之記錄 炼體W可设置於驅動裝罟 53,將記錄媒體58中所含之執/自崎媒體58經由驅動裝置 執行知式安裝於輔助記憶裝置 160697.doc ,25· 201237802 54 ° 輔助記憶裝置54係硬碟等之存儲機構,其儲存本發明之 執行程式或電腦中所設之控制程式等,且於必要時進行輸 出入。 記憶裝置55收納藉由CPU 56自輔助記憶裝置54讀出之執 行程式等。又,記憶裝置55包含R〇M〇Read 〇nly河⑽的厂 唯讀記憶體)或RAM(Random ACcess Mem〇ry :隨機存取記 憶體)等。 ° CPU 56可基於0S(0perating System:作業系統)等之控 制程式及存放於記憶裝置55中之執行程式,控制各種運算 或與各硬體構成部之資料輸出人等電腦全體之處理,實現 埋入處理及認識處理之各處理β χ,程式執行中所需之各 種資訊等可自輔助記憶裝置54取得,亦可存放執行結果 /網路連接裝置57藉由與通信網料連接,可自連接於超 心網路之其他終端器等取得執行程式,或向其他終端器考 提供藉由執行程式而獲得之執行結果、或本實施形態之朝 行程式本身。 :艮據如上述之硬體構成,可執行本實施形態之非視覺化 用=埋人處理及認識處理。又,藉由安裝程式,利用泛 可#於實現本實施形態之埋人處理及認識 處理。 及對上述之埋人程式中之非視覺化f訊之埋入處理 …式中之非視f化資訊之認識處理進行具體說明。 160697.doc -26 - 201237802 &lt;非視覺化資訊之埋入處理程序&gt; 首先,對本實施形態之非視覺化資訊之埋入處理程序進 行說明。圖5係顯示本實施形態之非視覺化資訊之埋入處 理程序之一例的流程圖。 圖5所示之埋入處理,首於先步驟s〇1中’取得藉由相機 等攝像機構所拍攝之圖像,於步驟s〇2中,進行圖像之解 析’取得圖像中所含之物件及物件位置資訊等。 接著於步驟S03中,基於由步驟s〇2之處理所獲得之資 訊,進行埋入對象物件之設定,於步驟s〇4中,對該物件 判斷是否要埋入非視覺化資訊(標記此處,如要埋入非 視覺化資訊(步驟S04中為是),於步驟s〇5中,對該物件設 定表示埋入何種動作指示資訊等處理内容之埋入資訊,: 步驟S06中’產生藉由步雜5之處理所設定之埋入資訊。、 又,於步驟S07中’於圖像中特定物件之周圍之特定位 置埋入藉由步驟S06之處理所產生之埋入資訊,於步驟咖 中’藉由顯示器等輸出機構顯示該埋入後之合成圖像或輸 出作為資料。X’步驟808之處理中,亦列印將藉由步驟 S07所獲得之圖像輸出至印刷媒體。 步驟S08之處理結束後或於步驟s〇4之處理中,如未埋入 非視覺化資訊(步驟S〇4中為否),則於步職艸,判斷是 否:其他圖像埋入非視覺化資訊。如要對其他圖像埋入非 視覺化資訊(步驟S09中A暑、,目,丨、G , 心為疋),則返回步驟S〇1重複執行其 於步㈣9之處理中,如未對其他圖像埋 '化貝讯(步驟S〇9t為否)’則結束非視覺化資訊 I60697.doc •27· 201237802 埋入處理。 又,本實施形態亦可不進行上述步驟S02之處理,而由 使用者等自畫面上所顯示之圖像資料設定至少一個物件。 &lt;非視覺化資訊之認識處理程序&gt; 接著’對本實施形態之非視覺化資訊之認識處理程序進 行說明。圖6係顯示本實施形態之非視覺化資訊之認識處 理程序之一例的流程圖。 圖6所示之認識處理,首先於步驟S11中,取得藉由上述 埋入處理而經埋入非視覺化資訊之埋入圖像。又,在步驟 S11之處理中,只要能夠取得圖像,可使用任意之步驟, 例如可如上述拍攝印刷媒體而取得圖像資料外^目外 裝置等取得作為圖像資料而預先準備之埋入圖像。接著 於步驟S12中,自取得之圖像中所含 令τ r/r a炙非視覺化資訊擷 物件。 接者,於步驟S13中,判斷於步驟Sl2之處理中是否自 =化資㈣取出包含針對物件之動作指示内容或顯示 專之各種處理内容之埋入資訊,若棟取出埋入資 =13中為是),則於步驟川中進行埋入資訊 者’於步_,自藉由埋入資訊之解析結果所獲得 附加資5fl產生所要顯示於畫面等之 ㈣中顯示所產生之内容。 冑,而於步」 二於中’判斷是否要自其他圖像_M 广如要自其他圖像認識非視覺化 ^ 為疋),則返回步驟SU,重複執行其後之處理 160697.doc -28- 201237802 驟S17之處理中’若不自其他圖像認識非視覺化資訊(步驟 S 1 7中為否),則結束非視覺化資訊認識處理。 藉由上述處理,可迅速進行非視覺化資訊之埋入或擷 取’且可提供附加價值性優良之高精度之内容服務。又, 藉由安裝程式,利用泛用之個人電腦等可易於實現本發明 之非視覺化資訊埋入處理或認識處理。 〈非視覺化資訊之埋入例&gt; 接著,使用圖式對本實施形態之非視覺化資訊之埋入例 進行說月。圖7係用以說明本實施形態之非視覺化資訊之 埋入例的圖。又,進行埋入處理之情形時,以圖7(a)〜圖 7(d)之順序進行埋入。 首先,對自上述第丨印刷媒體丨丨等輸入之拍攝圖像即圖 像資料6 1中所含之某特定物件62插入非視覺化資訊之情形 時:如圖7(a)所示,以物件62為中心,自《中心如圖了⑷ 之箭頭所示’於物件62之周圍配置特定形狀之編碼外框 馮杈兩認識裝置16之讀取精度,外框63之形狀較好 為正方形’但本實施形態並非限定於此,例如亦可為矩形 或羡形、6角形、8角形等之多角形或圓形、橢圓形、與物 似且放大之形狀、或由使用者等利用滑鼠等輸入 機構手繪之形狀等。 “接:圖7(b)所示’指定相對於物㈣之大致輪摩 機構而任ΓΗ可由使用者等使用滑鼠或觸控面板等輸入 意设定。再者,輪廓64亦可藉由物件62與其背景 160697.doc -29- 201237802 之亮度值等之差異而擷取粗略之輪廓64。又,該輪廓64可 為粗略者,只要是包含例如物件62全體之區域即可。 接著,如圖7(c)所示,與輪廓64之形狀對應而設定編碼内 框65。又,編碼内框65係未超越外框63,作為編碼内框65可 設定例如輪廓64之一圈(特定像素數)外侧之區域。接著,如 圖7(d)所示,於編碼内框65與外框63之間賦予高頻部μ之編 碼區域,於輪廓64與内框65之間賦予低頻部67。藉此,可在 認識裝置16側埋入用以認識物件62之位置之非視覺化資訊。 又,本實施形態中,利用任何文字或記號、編碼等,對 非視覺化資訊插人針對物件之動作指示f訊或顯示手法等 之各處理内容,藉此可對每個圓像中之特定物件進行特定 動作專藉此,可提供具有附加價值之内容服務。 &lt;非視覺化資訊之擷取及物件之認識例&gt; -接著,使用圖式對本實施形態之認識裝置〗6之非視覺化 資訊之掏取及物件之認識例進行說明。圖8係用以說明本 實施形態之非視覺化資訊之擷取及物件之認識例的圖。 又,進行本實施形態之認識處理之非視覺化資訊之擷取 及物件之認識時’以&quot;⑷〜圖8(d)之順序進行處理。首 先,對自上述^印刷媒_等輸入之圖像資料Η中所含 之某特定物件62擁取非視覺化資訊之情形時,首先如圖 ❿)所示,取得圖像61中所含之編碼資訊。具體而言’例 如擷取外框63、高頻部66及低頻部67所存在之區域。 接著’ M8(b)心’自外㈣使_之高頻料透明 化,進而如圖8⑷所示,自高頻部66之邊緣朝向内側使低 160697.doc 201237802 頻部67變成透明。又,可基於與周圍像素之亮度資訊等而 掌握低頻部67,但本發明並非限定於此,例如亦可以自高 頻部使特定位元數内側透明化之方式而僅使預設之位元數 透明化,或亦可使用色差或明暗差等進行上述透明化處 理。 其後,如圖8(d)所示,可得到僅包含物件62之裁切圖像 71。又,為方便起見,圖8(d)所示之物件62之周圍區域顯 示附有經透明化之圖案。 &lt;針對物件之動作例&gt; 接著,使用圖式對本實施形態中針對所提供之内容中所 含之物件之動作例進行說明。圖9係用以說明本實施形態 之物件動作例的圖。 使物件動作時、例如如圖9(a)所示,對拍攝圖像61重疊 裁切圖像71,接著如圖9(b)所示’去掉背景圖像之物件 62a。具體而言,例如使用物件62&amp;之周圍背景圖像之像素 蓋掉物件62a之區域。又,本發明而非限定於此’例如亦 可使用預設之顏色等進行塗蓋。 接著,以經去掉物件62a之拍攝圖像61為背景圖像,使 裁切圖像71中所含之物件62b於其上動作。又,圖9(c)之動 作内容係顯示使物件62b以特定時序向右旋轉,但本發明 並非限定於此,可進行使物件62b放大或縮小、於特定方 向移動、置換成其他物件等之動作。 又’作為輸入與對應動作之變形,在本實施形態中,例 如可為.當使用者等對認識裝置16之音訊輸入裝置等吹氣 160697.doc -31 - 201237802 (麥克風輸入)時則物件之一部分或全部旋轉;若使認識裝 置16傾斜則物件滾動;若觸碰認識裝置从畫面則物件有 所反應;或者若走到特㈣所則物件進行該場所固有 作。 再者’在本實施形態中亦可設為如下動作:根據朝向之 方位角使物件之動作產生變化;根據時間使物件倒下或立 起,或根據天氣等使物件之動作產生變化;或依照使用者 :走之動作或伴隨該動作之振動、移動速度及移動距離 等,使物件亦產生變形。 &lt;低頻部及高頻部&gt; 此處,對本實施形態之低頻部及高頻部進行說明。通 常,頻率包含關於時間之頻率(時間頻率)與關於空間位置 之頻率(空間頻率)’但在本實施形態中只要未特別指明,Serial Bus : ^ = = CD_R 〇 M and other removable recording medium 58 and the like to provide the execution program of the medium on the computer body. The recorded refining body W of the recording program can be installed in the drive unit 53, and the execution/supreme medium 58 included in the recording medium 58 can be read and installed by the drive device to the auxiliary memory device 160697.doc, 25·201237802 54 ° The memory device 54 is a storage mechanism such as a hard disk, and stores an execution program of the present invention or a control program provided in a computer, and performs input and output as necessary. The memory device 55 stores an execution type or the like read by the CPU 56 from the auxiliary storage device 54. Further, the memory device 55 includes R厂M〇Read 〇nly River (10) factory read only memory) or RAM (Random ACcess Mem〇ry: random access memory). ° The CPU 56 can control various calculations or the entire computer such as the data output person of each hardware component based on the control program such as OS (Operating System) and the execution program stored in the memory device 55, and realize the burying. The processing of the processing and the recognition processing β, various information required for the execution of the program, etc. can be obtained from the auxiliary memory device 54, and the execution result/network connection device 57 can be connected to the communication network material and can be self-connected. The execution program is acquired by another terminal device of the super-core network, or the execution result obtained by executing the program is provided to other terminal devices, or the stroke type itself of the present embodiment. According to the above-described hardware configuration, the non-visualization of the present embodiment = the burying process and the recognition process can be performed. Further, by installing the program, the burying process and the recognition process of the present embodiment are realized by using the general program #. And the recognition processing of the non-visualization information in the embedding processing method of the above-mentioned buried program is specifically described. 160697.doc -26 - 201237802 &lt;Un-visualization information embedding processing program&gt; First, the embedding processing program of the non-visualization information of the present embodiment will be described. Fig. 5 is a flow chart showing an example of the embedding processing program of the non-visualization information of the embodiment. In the embedding process shown in FIG. 5, the image captured by an imaging means such as a camera is acquired in the first step s1, and the image is analyzed in the step s2. Object and object location information, etc. Next, in step S03, based on the information obtained by the processing of step s2, the setting of the embedded object is performed, and in step s4, the object is determined whether or not to embed the non-visual information (marked here) If the non-visualization information is to be embedded (YES in step S04), in step s〇5, the embedded information indicating the processing instruction information such as the embedded operation instruction information is set to the object, and is generated in step S06. The embedding information set by the processing of step 5, and the embedding information generated by the processing of step S06 is buried in a specific position around the specific object in the image in step S07. In the coffee machine, the embedded composite image or output is displayed as data by an output mechanism such as a display. In the process of X' step 808, the image obtained by step S07 is also printed and output to the printing medium. After the processing of S08 is finished or during the processing of step s4, if the non-visualization information is not embedded (No in step S4), then in step, it is judged whether: other images are buried in non-visualization. Information. If you want to embed other images Visualizing the information (A heat, eye, 丨, G, heart is 疋 in step S09), then returning to step S〇1 to repeatedly perform its processing in step (4) 9, if no other images are buried. Step S〇9t is NO) 'End the non-visualization information I60697.doc • 27· 201237802 embedding process. In addition, in this embodiment, the processing of step S02 described above may not be performed, and the user or the like may display it from the screen. At least one object is set in the image data. &lt;Identification processing program for non-visualization information&gt; Next, the recognition processing program for non-visualization information of the present embodiment will be described. Fig. 6 shows the non-visualization information of the present embodiment. A flowchart of an example of the processing program is recognized. First, in step S11, the embedded image in which the non-visualized information is embedded by the embedding process is acquired. Further, in step S11, In the process, as long as the image can be acquired, any step can be used. For example, the image can be obtained by capturing the print medium as described above, and the image can be obtained as an image data. In step S12, the τ r/ra 炙 non-visualized information 撷 object is included in the acquired image. In step S13, it is determined whether the processing in step S12 is from the chemistry (four) If the object's action indicates the content or displays the embedded information of the various processing contents, if the ridge is taken out and the subsidy = 13 is YES, then the immersed information in the step Chuanzhong 'step _, from burying the information The additional capital 5fl obtained by the analysis result generates the content to be displayed displayed in (4) of the screen or the like.胄, and in the step of "two in the middle" to determine whether it is necessary to recognize non-visualization from other images _M from other images ^, then return to step SU, repeat the subsequent processing 160697.doc - 28-201237802 In the processing of step S17, if non-visualization information is not recognized from other images (NO in step S17), the non-visualization information recognition processing is ended. By the above processing, it is possible to quickly embed or capture non-visualized information and provide a high-precision content service with excellent added value. Further, the non-visual information embedding process or the cognition process of the present invention can be easily realized by installing a program and using a general-purpose personal computer or the like. <Example of embedding of non-visualization information> Next, the embedding example of the non-visualization information of the present embodiment is described using a graph. Fig. 7 is a view for explaining an example of embedding non-visualization information in the embodiment. Further, when the embedding process is performed, the embedding is performed in the order of Figs. 7(a) to 7(d). First, when a non-visualized information is inserted into a specific object 62 included in the image data 61, which is a captured image input from the above-mentioned second print medium, as shown in FIG. 7(a), The object 62 is centered, and the reading accuracy of the coded outer frame Feng Wei two recognition device 16 is arranged around the object 62 from the center as shown by the arrow of the figure (4), and the shape of the outer frame 63 is preferably square. However, the present embodiment is not limited thereto, and may be, for example, a polygonal shape such as a rectangular shape, a circular shape, a hexagonal shape, or an octagonal shape, a circular shape, an elliptical shape, a shape similar to an enlarged shape, or a mouse by a user or the like. Such as the shape of the input mechanism hand-painted. "Connect: as shown in Figure 7(b), the designation of the general vehicle with respect to the object (4) can be set by the user or the like using a mouse or a touch panel. Further, the contour 64 can also be used by The object 62 draws a rough outline 64 from the difference in brightness value of the background 160697.doc -29-201237802, etc. Further, the outline 64 may be rough as long as it is an area including, for example, the entire object 62. Next, As shown in Fig. 7(c), the coded inner frame 65 is set corresponding to the shape of the outline 64. Further, the coded inner frame 65 does not exceed the outer frame 63, and as the coded inner frame 65, for example, one circle of the outline 64 can be set (specific pixels The area on the outer side. Next, as shown in FIG. 7(d), a coding region of the high-frequency portion μ is provided between the code inner frame 65 and the outer frame 63, and a low-frequency portion 67 is provided between the contour 64 and the inner frame 65. Thereby, non-visualization information for recognizing the position of the object 62 can be embedded in the side of the recognition device 16. In the present embodiment, the non-visual information is inserted into the object by any character, symbol, code, or the like. The action indicates the processing content of the message or the display method, thereby The specific object in each of the round images is specifically designed to provide a content service with added value. &lt;Collection of non-visualized information and recognition of objects&gt; - Next, using the schema for the present embodiment An example of the acquisition of the non-visualized information of the device and the recognition of the object will be described. Fig. 8 is a view for explaining an example of the acquisition of the non-visualized information and the object of the present embodiment. When the non-visualized information of the form is recognized and the object is recognized, it is processed in the order of &quot;(4)~Fig.8(d). First, the image data input from the above-mentioned ^Printing media_, etc. When a specific object 62 is included in the case of non-visual information, the encoded information contained in the image 61 is obtained as shown in FIG. 。. Specifically, for example, the outer frame 63 and the high-frequency portion are captured. 66 and the region where the low-frequency portion 67 exists. Then, the 'M8(b) core' is transparent from the outer (four), and further, as shown in Fig. 8 (4), the low-frequency 160697 is lowered from the edge of the high-frequency portion 66 toward the inner side. Doc 201237802 Frequency section 67 becomes transparent. Again, based on The low-frequency portion 67 is grasped by the brightness information of the surrounding pixels, etc., but the present invention is not limited thereto. For example, the number of predetermined bits may be made transparent from the high-frequency portion so that the number of specific bits is transparent inside, or The above-described transparency processing can also be performed using chromatic aberration, shading, etc. Thereafter, as shown in Fig. 8(d), a cut image 71 including only the object 62 can be obtained. Further, for convenience, Fig. 8 (d) The surrounding area of the object 62 is shown with a transparent pattern. <Example of operation of the object> Next, an operation example of the object included in the content provided in the present embodiment is performed using the drawing. Fig. 9 is a view for explaining an example of the operation of the object of the embodiment. When the object is operated, for example, as shown in Fig. 9(a), the cut image 71 is superimposed on the captured image 61, and then the object 62a of the background image is removed as shown in Fig. 9(b). Specifically, for example, the pixels of the surrounding background image of the object 62 &amp; cover the area of the object 62a. Further, the present invention is not limited thereto, and for example, it may be coated with a predetermined color or the like. Next, the captured image 61 of the removed object 62a is used as a background image, and the object 62b included in the cut image 71 is moved thereon. Further, the operation of FIG. 9(c) shows that the object 62b is rotated to the right at a specific timing. However, the present invention is not limited thereto, and the object 62b may be enlarged or reduced, moved in a specific direction, and replaced with another object. action. In addition, in the present embodiment, the deformation of the input and the corresponding operation may be, for example, when the user blows 160697.doc -31 - 201237802 (microphone input) to the audio input device of the cognitive device 16 or the like. Part or all of the rotation; if the recognition device 16 is tilted, the object scrolls; if the touch device is touched from the screen, the object reacts; or if it goes to the special (4), the object is intrinsic to the site. Furthermore, in the present embodiment, the operation may be such that the movement of the object is changed according to the azimuth of the orientation; the object is caused to fall or stand up according to the time, or the movement of the object is changed according to the weather or the like; or The user: the movement of the movement or the vibration, the moving speed and the moving distance accompanying the movement, so that the object is also deformed. &lt;Low frequency portion and high frequency portion&gt; Here, the low frequency portion and the high frequency portion of the present embodiment will be described. Generally, the frequency includes a frequency (time frequency) with respect to time and a frequency (spatial frequency) with respect to a spatial position, but in the present embodiment, unless otherwise specified,

頻率係指空間頻率。空間頻率係定義為「相對於單位長度 之像素值之週期之倒數」, X 本實施形態之頻率並非特別限定,例如只要在高頻部為 〇.2〜2[週期/像素]、低頻部為卜丨[週期/像素]之範圍内進行 設定即可,具體而言,只要高頻部之頻率高於低頻部之頻 率即可。 又,包含由高頻部形成之特定像素區域(例如4χ4ρχ(像 素)等)之格柵,只要為明部與暗部週期性反復即可,可舉 例如直條紋、橫直條紋及格狀等。 例如,如圖10Α所示,亦可交互配置以白色四方形所示 之明部200之列與以畫有陰線之四方形所示之暗部之 160697.doc -32- 201237802 列,或如圖10B所示,亦可於列方向(橫向)及行方向(縱向) 分別交互配置明部200與暗部300。 又’此時之明部與暗部之明度差只要為10以上即可,較 好為50以上’更好為1〇〇以上。又,本實施形態之明度差 如上所述’首先以通常顯示之圖像之明度為基準而產生明 部與暗部’而使用所產生之明部與暗部之明度差,但本實 施形態並非限定於此。例如亦可使用通常之圖像之明度與 低頻部或高頻部之明度相對於該明度之明度差。 此情形時’例如灰度下以相鄰之元素為基準,被視為高 頻部之明度差若約為1 5以上即可視為高頻部。具體而言, 約15〜3 5左右之明度差為主要可作為高頻部使用之區域。 又,所謂το素係指由縱橫lpx以上之像素所構成者,在本 實施形態中’例如將1個元素設為2χ2ρχ。 此處’例如由相機等攝像裝置所讀取之明度差之最小值 為15時,若明度差超過35則即便是人類的肉眼亦易於認 識。與之相反,埋入之圖像之部分為極暗或極亮時,為提 高相機等之讀取精度,亦必須賦予例如35以上之明度差。 因此,在本實施形態中,因應埋入位置之圖像(背景)之明 度或免度等或拍攝之相機之功能等對明度差任意賦予變 化’而產生非視覺化資訊之編媽。 又,上述附加資訊可使用之像素尺寸由於會隨例如圖像 與觀察圖像的人之距離而變化,故並非特別限定,但若為 例如約1 m之距離,則較好為約0.05〜2 mm左右,若為約10 m之距離,則較好為約〇 5 _〜2〇 _左右。若自更遠之距 160697.doc -33- 201237802 離使用時,亦較好為保持相同像素尺寸與距離之比率。 即,本實施形態係將使用由棋盤圖案中(方格圖案)之小 且明亮之正方形及灰暗之正方形構成之高頻率區塊之非視 覺化編碼作為本實施形態之非視覺化資訊使用。藉此,可 於任何地點進行埋入,且可自任意角度閱覽。 此處,圖1 1係用以說明具有高頻部及低頻部之非視覺化 資訊的圖。圖11 (a)所示之非視覺化圖像之構造例中,於位 置檢測圖案81中具有旋轉檢測圖案82。藉此,可正確地認 識編碼之所在位置與編碼方位。 又,位置檢測圖案8 1中具有作為影像埋入於映像中之高 頻部之高頻區塊83,於中心具有收容物件之邊緣84。如圖 11(a)所示,本實施形態中,根據周圍之位置與方向之檢測 圖案而檢測影像之位置及方向,該等圖案亦由高頻區塊83 構成。又,圖11 (a)之例中,以虛線之導引線包圍位置檢測 圖案81之外框’以實線之導引線包圍旋轉檢測圖案82之外 框’但實際上並不一定要有框線。 為使人眼難以認識且縮小視野,圖丨1(b)所示之各區塊 由亮要素及暗要素交互配置之ΝχΝ棋盤排列而成。明亮元 素(要素)與暗元素係如上所述以背景之亮度(明亮度)為基 準’藉由調高或調低該亮度而產生。 又’藉由埋入強度(其為明亮度±α之變化)來調節明亮度 之良化。埋入強度為一定之情形時,映像會極為明亮,而 若為極暗之情形則讀取認識精度不佳。對此,如圖11(c)所 不,藉由改變背景色之埋入強度而提高讀取認識精度。具 160697.doc -34- 201237802 體而言’如為變更元素(要素)之明亮度而獨立改變背景之 各RGB值,色調會增大映像之視野而改變。因此,本實施 形態中為縮小視野且提高讀取認識之正確性,將原始彩色 映像轉換成灰階,以背景之灰度為基準而設定埋入強度。 此處,圖12係顯示埋入強度與背景之灰度值之關係的 圖。如圖12所示,人類肉眼對白色的明亮度之變化非常敏 感,對於黑色變化則無感覺。因&amp;,與灰度值相關且以二 次函數、根據最小強度為20之方程式,利用以下算式給出 埋入強度之變化, 灰度值 α=1/2167χ(αΓ-255)2+20 〈實驗結果〉 此處,使用圖式對應用本發明之實驗結果進行說明。由 已埋入何種編碼(編碼樣式)來評估本實施形態之讀取精 度。 圖13係顯示單色影像用之非視覺化編碼之例的圖。圖13 所不之編碼為約3 cm左右之正方形,且保持1〇〇位元之資 料。又,高頻區塊之各側具有4個元素(要素)之長度。又, 70素尺寸為約2x2像素。又,如圖13所示,高頻部區塊具 有相對於單色影像之棋盤區塊(方格圖案)。 圖14係顯示相對於使灰色影像在〇〜255内變化之明亮度 之讀取認識精度與非視覺化之關係的圖。本實施形態中, 頁示自相距50 cm之處觀察有編碼之影像與無編碼之影像 之差的評估結果。該結果為,若背景影像之明亮度之範圍 為灰階40〜180,讀取認識精度為9〇%以上,判斷為視覺化 160697.doc -35- 201237802 之情形時則不足10% 進而,為確認本實施形態之非視覺化編碼之有效性,針 對作為-例之3種-般類型之影像中所埋人之非視覺化編 碼之讀取5忍識精度進行了評估。 圖15係用讀對本實施形態之每個料之非視覺化資訊 之讀取認識精度之評估進行說明的圖。將如圖i5(a)所示之 非視覺化編碼分別作為圖15(b)〜⑷所示之測試影像並插入 至黑白(單色)影像、女性影像及動物(狒狒)影像並評估 各種影像中之編碼之認識精度。 此處’圖16A、圖16B係顯示對應於圖15之評估結果的 圖。又’圖16A係顯示相對於自相機至編碼之距離之認識 精度的圖纟,圖16B係、顯示相對於相機之拍攝角度之認識 精度的圖表。 β 如圖16A所示,針對自攝像機至編碼之距離的正確性, 該距離只要為約7〜U cm左右’則對全部的影像之_ 度為100%。 又’如圖16B所示,相對於相機之角度在約〇〜4〇之角度 下對於全部之影像之認識精度為1 〇〇%。 &lt;應用本實施形態之内容應用例&gt; 接著’使用圖式對應用上述本實施形態之内容應用例進 行說明…以下所示之例係針對具備作為非視覺化資訊 之裁切§己之AR應用系統進行說明。 &lt;實施例1&gt; 圖π係顯示實施狀从應料統之—例的Hw) 160697.doc 36 - 201237802 =貝丁使用裁切標記作為非視覺化資訊之影像之例。裁切 下°己被埋人於圖1 7(a)所示之風車之影像的周圍。又,於實 施例1中’可為周圍全體,亦可為周圍之一部分區域。 例如’使用者藉由例如智慧手機等之行動終端器上所設 相機拍攝$像之相片’藉此取得風車之影像。再者,使 用者使用行動終端器並選擇拍攝之圖像中所含之物件中至 少:個後’藉由實施例1之AR應用程式,對該物件埋入裁 刀心。己°在®17之例中’對風車埋人裁切標記,並列印經 之圖像(風車)。又,實施例1中,裁切標記只要對圖 η⑷所示之複數個風車(物件)中在認識侧進行裁a之對象 之特定風車(裁切物件)進行即可。 _處圖17(b)係顯示非視覺化裁切標記之構造。裁切 標記包含高頻部與低頻部,裁切標記之中心包含操取之對 象物件15〇(裁切物件)。藉由存在於物件150之周圍之位置 檢測圖案152而取得裁切標記之位置。再者物件&quot;^之方 (*方向)例如可藉由對裁切標記預先埋人上述旋轉檢測圖 案等而取得。χ ’上述處理係以行動終端器進行。因此, 仃動終端器具有作為上述第1圖像資料取得裝置12及埋入 裝置13之功能,藉由上述埋入程式之人尺應用程式實現該 專功能。 又,圖17(c)係顯示利用裁切標記擷取物件之例的圖。使 用者例如藉由智慧手機等行動終端器上所設之相機,拍攝 包含裁切標記之經列印之圖像。行動終端器(認識裝置16) 如上述自裁切標記之埋入位置以高頻部154、低頻部⑼之 160697.doc -37- 201237802 進又行透明化處理’以與背景分離之狀態取得裁切物件 起產^㈣終端器將取得之裁切物件15G與背景影像-士虛擬合成之圖像,並顯示於畫面。 “ *十對藉由取得之裁切標記所擷取之物件(風車) 使用圖式進行具趙說明。一 二:進行之風車之動作例的圖。具體而言,動作 圖18(a)〜圖18(d)之順序轉變。 :如’某使用者例如使用智慧手機等之 風車影像之相片(圖18⑷)後’藉由上端15拍攝 白祛旦… 錯由上述本實施形態之手法 者景影像分離出目標影像即 lZ(h,s甘从 1豕P風皁影像(裁切物件)(圖 )Z、後,於行動終端器之顯示書面上顯干 景影像(圖18(c))e 顯不風車及背 此處’作為上述二次指示資訊,使用者向智慧手 之麥克等之音訊輸入裝置吹氣 ’、 &quot;又 乳q呼^氧、說話,智禁丰祕Μ 此掌握其音量(volume)或聲音之 ’、 立使風車旋轉之影像(圓18(d))。又,對於Z變化量而建 示資訊進行何種動作,可預先 、冑種二次指 訊中掏取。 了預先叹疋,亦可自裁切標記之資 作係以行動終端器進行。因此,行動終端器 ^由述第2圖像資訊取得裝置15及認識裝置16之功 r藉由上述認識程式之AR應用程式實現該等功能。 時又若中’使用上述裁切標記指示動作之情形 車:之二複數個裁切標記,亦可藉由與各物件(風 )對應之裁切標記中所含之埋入資訊之指示内容使每個 160697.doc •38· 201237802 物件進行動作,亦可藉由丨個裁切標記中所含之指示資 訊,對經擷取之複數個物件指示動作内容。藉此,例如複 數個物件進行相同之動作時,無需對與各物件對應之裁切 標記設定包含動作指示資訊等之埋入資訊,可藉由1個埋 入資訊使複數個物件動作’故可有效率且迅速實現物件動 作。 此處’圖19係為說明圖像中所含之非視覺化資訊與動作 指示資訊之關係的圖。又,圖19(a)係顯示埋入圖像之一 例,圖19(b)係顯示已埋入非視覺化資訊及動作指示資訊 (埋入資訊)之物件的放大圖(帶有動作指示資訊之導引 線),圖19(c)係顯示與圖19(13)對應之無導引線的圖。 接著’圖19⑷所示之埋入圖像91,於圖像中的3個風車 物件之其中2個物件内埋入有非視覺化資訊^丨及“^。因 此’上述實施W中’於認識裝置16中認識為物件且操取 之物件成為由非視覺化資訊92_丨及92_2包圍之2個風車。 又’為便於說明,圖19⑷〜圖19⑷之例將非視覺化資訊 92-1、92-2之外框視覺化。 接著,® 19(b)之例係於非視覺化資訊92_2之4個角落埋 入有動作指示資訊93·!〜93-4(埋人f訊卜又,圖i9(b)之例 係於動作指示資訊93士93_4之埋人區域顯示有導引線(實 線)’但實際之圖像為如圖19(勹所示者。 此處,對於在非視覺化資訊中埋入動作指示資訊之位 置’本實施形態並非限定於上述,例如可為於非視覺化資 訊之至少-個角落’亦可於非視覺化資訊之外框上。又, 160697.doc -39· 201237802 亦Frequency refers to the spatial frequency. The spatial frequency is defined as "the reciprocal of the period of the pixel value per unit length", and the frequency of the present embodiment is not particularly limited. For example, the high frequency portion is 〇.2 to 2 [period/pixel], and the low frequency portion is It is sufficient to set the dipole [period/pixel] within the range, and specifically, the frequency of the high-frequency portion is higher than the frequency of the low-frequency portion. Further, the grid including the specific pixel region (for example, 4 χ 4 χ (pixel)) formed by the high-frequency portion may be periodically repeated for the bright portion and the dark portion, and examples thereof include straight stripes, horizontal stripes, and lattices. For example, as shown in FIG. 10A, the column of the bright part 200 shown by the white square and the 160697.doc -32-201237802 column of the dark part shown by the square of the negative line may be alternately arranged, or as shown in FIG. 10B. As shown, the bright portion 200 and the dark portion 300 may be alternately arranged in the column direction (lateral direction) and the row direction (longitudinal direction). Further, the difference in brightness between the bright portion and the dark portion at this time may be 10 or more, preferably 50 or more, and more preferably 1 or more. Further, in the above-described embodiment, the difference in brightness is as described above. First, the brightness difference between the bright portion and the dark portion is generated by using the brightness of the image to be displayed as a reference, but the present embodiment is not limited thereto. this. For example, the brightness of a normal image and the brightness of the low-frequency portion or the high-frequency portion may be used as a difference in brightness from the brightness. In this case, for example, if the difference in brightness between the high frequency portions is approximately 15 or more based on the adjacent elements in the gradation, it is regarded as a high frequency portion. Specifically, the difference in brightness of about 15 to 3 5 is a region that can be mainly used as a high-frequency portion. In addition, the term "τ" means a pixel composed of pixels above or below the aspect ratio lpx. In the present embodiment, for example, one element is set to 2 χ 2 ρ χ. Here, for example, when the minimum value of the brightness difference read by an image pickup device such as a camera is 15, if the brightness difference exceeds 35, it is easy for human eyes to recognize. On the other hand, when the portion of the image to be buried is extremely dark or extremely bright, in order to improve the reading accuracy of the camera or the like, it is necessary to impart a difference in brightness of, for example, 35 or more. Therefore, in the present embodiment, the non-visual information is generated by arbitrarily changing the brightness difference depending on the brightness or degree of the image (background) of the embedded position or the function of the camera to be photographed. Further, the pixel size that can be used for the additional information is not particularly limited as it varies depending on, for example, the distance between the image and the person viewing the image. However, if it is, for example, a distance of about 1 m, it is preferably about 0.05 to 2 Approximately mm or so, if it is a distance of about 10 m, it is preferably about 5 _~2 〇_. If it is farther away from 160697.doc -33- 201237802, it is better to maintain the ratio of the same pixel size to distance. That is, in the present embodiment, non-visualization coding using a high-frequency block composed of a small square of a checkerboard pattern (a checkered pattern) and a bright square and a gray square is used as the non-visualization information of the present embodiment. In this way, it can be buried in any location and can be viewed from any angle. Here, Fig. 11 is a view for explaining non-visualization information having a high frequency portion and a low frequency portion. In the configuration example of the non-visualized image shown in Fig. 11 (a), the position detecting pattern 81 has a rotation detecting pattern 82. Thereby, the position and coding orientation of the code can be correctly recognized. Further, the position detecting pattern 8 1 has a high-frequency block 83 as a high-frequency portion in which an image is embedded in the image, and has an edge 84 for accommodating the object at the center. As shown in Fig. 11(a), in the present embodiment, the position and direction of the image are detected based on the detection pattern of the surrounding position and direction, and the patterns are also composed of the high frequency block 83. Further, in the example of Fig. 11(a), the position detecting pattern 81 is surrounded by the guide line of the broken line, and the outer frame 'the guide line of the rotation detecting pattern 82 is surrounded by the guide line of the solid line', but actually does not have to have Frame line. In order to make it difficult for the human eye to recognize and narrow the field of view, each block shown in Fig. 1(b) is arranged by a checker board in which bright elements and dark elements are alternately arranged. The bright elements (elements) and dark elements are generated by adjusting the brightness by raising or lowering the brightness based on the brightness (brightness) of the background as described above. Further, the brightness is adjusted by the embedding intensity, which is a change in brightness ± α. When the embedding strength is constant, the image will be extremely bright, and if it is extremely dark, the reading accuracy will be poor. On the other hand, as shown in Fig. 11 (c), the reading recognition accuracy is improved by changing the embedding strength of the background color. 160697.doc -34- 201237802 For example, if the RGB values of the background are changed independently for changing the brightness of the elements (elements), the hue will change depending on the field of view of the image. Therefore, in the present embodiment, in order to reduce the field of view and improve the accuracy of the reading recognition, the original color image is converted into a gray scale, and the embedding intensity is set based on the gradation of the background. Here, Fig. 12 is a view showing the relationship between the embedding intensity and the gray value of the background. As shown in Fig. 12, the human eye is very sensitive to changes in the brightness of white, and there is no feeling for black variations. Because &, which is related to the gray value and is a quadratic function, according to the equation with the minimum intensity of 20, the variation of the embedding strength is given by the following formula, the gray value α=1/2167χ(αΓ-255)2+20 <Experimental Results> Here, the experimental results of applying the present invention will be described using a schema. The reading accuracy of this embodiment is evaluated by what kind of coding (coding pattern) is embedded. Fig. 13 is a view showing an example of non-visual coding for monochrome images. Figure 13 does not encode a square of about 3 cm and maintains 1 unit of information. Further, each side of the high frequency block has a length of four elements (elements). Also, the 70-element size is about 2 x 2 pixels. Further, as shown in Fig. 13, the high frequency portion block has a checkerboard block (checker pattern) with respect to a monochrome image. Fig. 14 is a view showing the relationship between the reading recognition accuracy and the non-visualization of the brightness which changes the gray image within 〇 to 255. In the present embodiment, the results of the evaluation of the difference between the coded image and the uncoded image are observed from a distance of 50 cm. The result is that if the range of brightness of the background image is gray scale 40 to 180, the reading recognition accuracy is 9〇% or more, and when it is judged to be visualized 160697.doc -35-201237802, it is less than 10%. The validity of the non-visualization coding of the present embodiment was confirmed, and the accuracy of the read 5 tolerance of the non-visual coding embedded in the image of the three types of the example was evaluated. Fig. 15 is a view for explaining the evaluation of the read recognition accuracy of the non-visualization information for each material of the present embodiment. The non-visualization code shown in Fig. i5(a) is used as the test image shown in Figs. 15(b) to (4) and inserted into black and white (monochrome) image, female image and animal (狒狒) image, and various images are evaluated. The accuracy of the recognition in the code. Here, Fig. 16A and Fig. 16B are diagrams showing the evaluation results corresponding to Fig. 15. Further, Fig. 16A shows a map of the recognition accuracy with respect to the distance from the camera to the encoding, and Fig. 16B is a graph showing the recognition accuracy with respect to the photographing angle of the camera. As shown in Fig. 16A, for the correctness of the distance from the camera to the encoding, the distance is about 7 to about Ucm, which is 100% for all images. Further, as shown in Fig. 16B, the recognition accuracy for all the images at an angle of about 〇 4 相对 with respect to the angle of the camera is 1 〇〇 %. &lt;Application of Application Example of the Present Embodiment&gt; Next, an application example in which the above-described embodiment of the present embodiment is applied will be described using the drawings. The following examples are for the provision of the AR as the non-visualization information. Application system for explanation. &lt;Example 1&gt; Fig. π shows the Hw of the embodiment of the application system. 160697.doc 36 - 201237802 = Bedding uses the cut mark as an example of an image of non-visualized information. The cut has been buried around the image of the windmill shown in Figure 17 (a). Further, in the first embodiment, 'the whole may be the surrounding area, or may be a part of the surrounding area. For example, the user takes a photo of the image of the image by using a camera provided on a mobile terminal such as a smart phone to obtain an image of the windmill. Further, the user uses the mobile terminal and selects at least one of the objects contained in the captured image. The object is embedded in the cutting tool by the AR application of the first embodiment. In the case of ®17, the windmill is cut and marked, and the image (windmill) is printed. Further, in the first embodiment, the cutting marks may be formed on a specific windmill (cut object) in which a plurality of windmills (objects) shown in Fig. (4) are cut on the recognition side. Figure 17(b) shows the construction of the non-visualized crop marks. The crop mark includes a high frequency portion and a low frequency portion, and the center of the crop mark includes the manipulated object 15 裁 (cut object). The position of the cut mark is obtained by the position detecting pattern 152 existing around the object 150. Further, the object (&quot;^) can be obtained, for example, by embedding the above-described rotation detecting pattern or the like on the cutting mark. χ 'The above processing is performed by a mobile terminal. Therefore, the swaying terminal has the functions as the first image data acquiring device 12 and the burying device 13, and the dedicated function is realized by the immersive program. Further, Fig. 17 (c) is a view showing an example in which an object is captured by a cutting mark. The user photographs the printed image including the cut mark by, for example, a camera provided on a mobile terminal such as a smart phone. The mobile terminal (the accommodating device 16) is subjected to the transparent processing of the high-frequency portion 154 and the low-frequency portion (9) of 160697.doc -37-201237802 as described above, and the cutting is performed in a state separated from the background. The object is produced ^ (4) The image of the cropped object 15G and the background image - the virtual composite image will be obtained and displayed on the screen. " * Ten pairs of objects (windmills) taken by the cutting marks obtained by the use of the schema to explain Zhao. One or two: Figure of the operation of the windmill. Specifically, the action figure 18 (a) ~ The sequence of Fig. 18(d) is changed. For example, if a user uses a photo of a windmill image such as a smart phone (Fig. 18(4)), the photo is taken by the upper end 15... The image is separated from the target image, that is, lZ (h, s Gan from the 1豕P wind soap image (cut object) (Fig.) Z, and then displayed on the action terminal display written image (Fig. 18(c)) e shows the windmill and the back here. 'As the above-mentioned second instruction information, the user blows the audio input device to the microphone of the smart hand, ', and the other is the milk, the voice, the voice, the ban, the secret. The volume (volume or sound), the image that makes the windmill rotate (circle 18 (d)). In addition, what kind of action is displayed for the Z-variable amount, and the second-information can be pre-arranged. Take the pre-sigh, or the self-cutting mark is carried out by the mobile terminal. Therefore, the action The function of the second image information obtaining device 15 and the acknowledgment device 16 is realized by the AR application program of the acquaintance program. When the vehicle is used to indicate the action using the above-mentioned cutting mark: The second plurality of cutting marks may also be used to act on each of the objects of the embedded information contained in the cutting marks corresponding to the respective objects (wind), or may be borrowed. The action content is indicated for the plurality of objects retrieved by the cutting marks, so that, for example, when a plurality of objects perform the same action, it is not necessary to set the cropping flag corresponding to each object. The embedded information such as the motion indication information can be used to make a plurality of objects operate by one embedded information. Therefore, the object motion can be efficiently and quickly realized. Here, FIG. 19 is a description of the non-visualization included in the image. Figure 19(a) shows an example of a buried image, and Figure 19(b) shows the embedded non-visual information and motion indication information (buried information). Magnified view of the object ( Fig. 19(c) shows a map without a guide line corresponding to Fig. 19(13). Next, the buried image 91 shown in Fig. 19(4) is in the image. Two of the three windmill objects are embedded with non-visual information and "^. Therefore, 'the above-mentioned implementation W' is recognized as an object in the cognitive device 16 and the manipulated object becomes non-visualized information 92. Two windmills surrounded by _丨 and 92_2. For the sake of convenience, the examples of the non-visualization information 92-1 and 92-2 are visualized in the examples of Figs. 19(4) to 19(4). Next, the example of ® 19(b) Attached to the four corners of the non-visual information 92_2, there is an action instruction information 93·!~93-4 (buried person f news, and i9 (b) is an example of the action instruction information 93 93_4 buried The area shows a guide line (solid line)' but the actual image is as shown in Figure 19. Here, the position where the motion instruction information is embedded in the non-visualization information is not limited to the above, and may be, for example, at least one corner of the non-visualized information, or may be outside the non-visual information frame. on. Also, 160697.doc -39· 201237802

可利用動作指示資 訊93-1〜93-4作為上述旋轉 檢測圖 所示之動作指示資訊⑹〜心係例如於各 角洛埋入有各4位元之資訊…,動作指示資訊ΜΙ 〜 93-4可以特 定順序連結該4個角落之資訊成為共 ^ 位元 而產生!個資訊。又’藉由使用16位元可取得如之編 碼圖案,將自編碼所獲得之資訊作為m或動作㈣器使 用’藉此可對物件設定多種動作内容或顯示手法。 進而,在實施例!中,可以特定順序及時序執行與上述* 個角洛之埋入資訊對應之動作’亦可對擷取之全部物件執 行1個埋入資訊之内容。 又’藉由在較之範圍内預先埋人上述埋人資訊,由上 述㈣見覺化資訊解析機構46進行解析時,藉由參照該特定 之範圍即可立即取得埋入資訊。 &lt;實施例2&gt; 圖2〇係顯示實施例2横應用系統之一例的圖。又,圖 2。〇之AR應用系統亦可與上述實施⑷同樣地使用行動終端 器執灯。2 ’此處所使用之行動終端器具有作為例如上述 第2圖像資訊取得裝置丨5及認識裝置16之功能。 貫施例2係如圖20⑷所#,描繪有例如於畫本等紙媒體 00中進行特疋動作之目標即裁切物件丨〇 1 (圖之例為紅 蜻蜓)。再者,於裁切物件101之周圍埋入有上述裁切標記 M2»又,為便於說明,圖2〇中以虛線顯示裁切標記1〇2之 外框,但實際上紙媒體100中並不存在如該等之虛線。 160697.doc •40· 201237802 又如圖20(b)所示’於該裁切標記1〇2内,於裁切物件 101之外开^之周圍埋入有低頻部,進而於低頻部1〇3之 外側埋入有咼頻部1〇4。又,於裁切標記1〇2内,埋入有用 乂對裁切物件1〇丨進行特定之動作之動作指示資訊 • 1〜105-4 。 . 貫&lt;!2中以行動終端器等上所設之相機等(第2圖像 資訊取知裝置15)拍攝該紙媒體1〇〇。又,行動終端器(認識 裝置)自所拍攝之圖像取得裁切標記⑽。又,於裁切標 a己之4個角落埋入有動作指示資訊ι〇5卜⑺口,藉由該動 作指示資訊105·1,5 —之值使物件ΗΠ之動作產生變化。 此處,動作指示資訊…]]。“例如分別埋入有各一 位元之資訊。因此,為自動作指示資訊105-1〜105-4取得 動作,示資訊,例如以特定順序等(例如順時針方向)讀取4 個角落之資訊,且例如對應於讀取之結果,對與背景影像 分離而虛擬合成之裁切物件1〇1執行預設之動作。具體而 言,例如動作指示資訊(4位元)為「11〇〇」_,使裁切物件 101向右移動。又’進行動作指示資訊為「_」時使裁 切物件101向左移動,為「⑴1」時使裁切物件HH原地旋 轉等之類的動作。實施例2之動作内容並非限定於此,例 如可使裁切物件101即紅蜻挺飛離之方向變化。又,亦可 根據動作指示資訊而取得裁切物件之姿勢(例如位置(座標) 及方位)或移動速度等。 又’上述動作之執行時序可為例如由使用者於畫面上觸 碰(廣泛包含輕敲、觸摸及滑動等意)行動終端器之畫面上 160697.doc 201237802 顯示之裁切物件1〇1而進行特定動作,但本實施形態並非 限定於此。例如亦可使用行動終端器之内建相機(使用者 側相機)即時認識明度之變化,當低於例如預設之平均明 度超過一定程度時,判定為要觸碰裁切物件101,而使裁 切物件101即紅蜻蜓進行飛離之動作執行。 又,與動作指示資訊對應而設定之動作内容可預先儲存 於儲存機構等,亦可經由通訊網路等連接預設之外部裝置 4而自所連接之外部裝置取得對象之動作内容。藉此, 例如同樣的動作指示内容可透過儲存機構等所儲存之動作 内容與自外部裝置所取得之動作内容而進行相異之動作。 又,藉由自外部裝置等取得動作内容,可利用外部裝置之 管理而自由進行動作之變更等。 &lt;實施例3&gt; 圖21係顯示實施例3之八尺應用系統之一例的圖。又,圆 21之AR應用系統亦可與實施例2同樣地使用行動終端器執 行。 實施例3中描繪有例如於晝本等之紙媒體11 〇中進行特定 動作之目標即複數個裁切物件111-1〜111-4(圖21之例中為 鯉魚)。又,於各裁切物件“^〜丨丨丨·々之周圍,與各裁切 物件對應埋入有上述裁切標記112-1〜112-4。 於該裁切標記⑴冬丨以内,與圖2()同樣地於各裁切 物件111-1〜11卜4之外形之周圍埋入有低頻部,進而於低 頻部之外側埋入有高頻部。又,於裁切標記丨12_1〜 内埋入有用以對各裁切物件lu•卜111-4進行特定之動作 160697.doc •42· 201237802 之動作指示資訊。 實施例3中,利用行動終端器等上所設之相機等(第2圖 像資訊取得裝置15)拍攝該紙媒體110。再者,行動終端器 (認識裝置16)自所拍攝之圖像取得裁切標記112·1〜112-4β 又行動終端器基於取得之裁切標記112_1〜112·4,將經 裁切之裁切物件⑴士⑴^即裡魚與背景圖像分離’產 生模擬合成之圖像並將其顯示於畫面。 如上述於裁切112-1〜112-4内已埋入動作指示資 訊,藉由該動作指示資訊之值,與上述實施例同樣地使各 裁切物件11 1-lq u_4之動作變化。且,該動作指示資訊 疋要在4個角落,亦可埋入於例如裁切標記u 2_ 1〜112-4内之周緣部、或預設之裁切標記内之特定之位 置。 1 士實施例3中,當使用者之手113觸碰顯示有所拍攝 之圖像之畫面時,會進行波紋擴散、各裁切物件Hill 11-4 即 鯉魚成 放射線 狀游走 之動作 。具 體而言 ,例如 當使用者之手113之指尖觸碰畫面時,行動終端器(認識裝 置16)取得觸碰之座標與各裁切物件m•卜^-々即鯉魚之 座標,使用取得之各座標而產生各裁切物件丨丨卜丨〜丨丨卜斗 之刚進方向向量。 又,於此情形時,帛先於動作㈣資訊中埋入有各裁切 物件⑴-WU-4之朝向方位之角度資訊。又,動作指示 資訊不一定要在例如裁切標個角落,亦 可於各標記之周緣部埋入8位元之資訊,使其能夠表現出 160697.doc -43- 201237802 0〜360度之動作。 進而,行動終端器藉由將該方向向量標準化,而調節各 裁切物件111-1〜111-4之前進速度。藉由上述方式產生向 量即使觸碰行動終端器上之畫面上之任意位置,皆可使 各裁切物件111-1〜111-4進行成放射線狀游走之動作。 &lt;實施例4&gt; 圖22係顯示實施例4iAR應用系統之一例的圖。又圖 22之AR應用系統亦可與上述實施例2同樣地使用行動終端 器執行。 實施例4中,描繪有例如於時尚雜誌等之紙媒體12〇中進 行特定動作之目標即複數個裁切目標12M、i2i_2(圖22之 例中為服裝)。再者’於裁切物件121_丨、121_2之周圍,與 各裁切物件121-1、121·2對應而埋入有上述裁切標記122_ 1 、 122-2 。 實施例4中,利用行動終端器等上所設之相機等(第2圖 像資訊取得裝置15)拍攝該紙媒體120。再者,行動終端器 (認識裝置16)自所拍攝之圖像取得裁切標記mi、ΚΙ 2又,行動終端器基於取得之裁切標記122-1、122-2 ,將 經裁切之裁切物件121]、121_2即服裝與背景圖像分離, 產生虛擬合成之圖像並顯示於4面13〇。&amp;,經裁切之圖 像可於畫面上進行放大、縮小、旋轉及移動等,每當使用 者拍攝商品時便會於畫面上即追加服裝等之圓像,故可依 自身喜好享受搭配的樂趣。 因此,實施例4中,藉由進行上述功能,例如可將應用 160697.doc 201237802 程式提供作為時裝搭配應用程式。具體而言,斜 町呀尚雜言志 等所刊載之上衣物件131-1或裙子物件ι31_2、奴ν 鞋子物件 131-3等之商品埋入裁切標記,當使用者使用行動終端器 拍攝喜愛之商品時,各個商品便可篩選到行動終端器之書 面130上並於畫面上顯示。使用者以手132自行動終端器— 畫面130上顯示之各商品令選擇喜愛之項目並加以移=之 藉此可於晝面130上進行虛擬搭配。 此外,實施例4中並非限定於服裝,亦可包含例如帽子 或眼鏡、飾品及鞋子等,使用♦亦可匯入臉部圖像等而 配合自己的臉型進行搭配。 又,如為上述時裝搭配應用程式的情形,可自例如埋入 於裁切標記内之動作指示資m ’取得商品之背側圖像(例 如上衣或裙子之背側之設計等)或價格、顏色資訊等。 又例如因身側之設計圖像等使得資料容量變大之情形 時’亦可藉由Web等之劉覽器顯示來劇覽圖像。即,^如 當埋入資訊小之情形時(例如為數位元左右),動作指示資 訊可於標記之4個角落或周缘部埋入各項資訊;若埋入資 訊大,可僅預先埋入資訊之1D(識別資訊),於進行裁切之 同時向特定之外部裝置(伺服器)輸送ID ’而自Web等取得 對應之資訊。 、=處’在上述之各實施例中’可於認識裁切標記後即刻 『^ 動作扣示資訊之動作,亦可於認識後經過特定時 / 行進而亦可於認識後由使用者對行動終端器(認 識裝置)進行衫之動作《作時進行。 160697.doc -45· 201237802 又’對複數個㈣進行動作時 動作指示資訊,亦可根據各· 冑各個物件預先設定 進行複合處理。又,所謂 作指示資訊對複數個物件 物件時,進行2個人搭肩、握°作係指例如有2個人物之 實施形態並非限定於此。 、肩杠之類的動作,但本The operation instruction information 93-1 to 93-4 can be used as the operation instruction information (6) shown in the above-described rotation detection map. For example, the information is embedded in each of the four corners, and the action instruction information ΜΙ ~ 93- 4 The information of the four corners can be linked in a specific order to become a total of bits! Information. Further, by using a 16-bit code to obtain a code pattern, the information obtained by self-encoding can be used as an m or an action (4) device, whereby a plurality of action contents or display methods can be set for the object. Furthermore, in the embodiment! In the above, the action corresponding to the embedding information of the above * can be performed in a specific order and timing, and the content of one embedding information can be executed for all the objects captured. Further, when the above-mentioned information is analyzed by the above-mentioned (4) awareness information analysis unit 46, the embedded information can be immediately obtained by referring to the specific range. &lt;Embodiment 2&gt; Fig. 2 is a view showing an example of a horizontal application system of the second embodiment. Also, Figure 2. The AR application system can also use the mobile terminal to operate the lamp in the same manner as in the above embodiment (4). The mobile terminal device used here has functions as, for example, the second image information acquiring device 5 and the recognition device 16 described above. In the second embodiment, as shown in Fig. 20 (4), a crop object 丨〇 1 (for example, a red dot) which is the target of the special operation in the paper medium 00 such as a picture book is depicted. Furthermore, the above-mentioned cutting mark M2» is embedded around the cut object 101. For convenience of explanation, the outer frame of the cutting mark 1〇2 is shown by a broken line in FIG. 2〇, but actually in the paper medium 100 There are no dashed lines like these. 160697.doc •40· 201237802 As shown in Fig. 20(b), in the cutting mark 1〇2, a low-frequency portion is embedded around the cut object 101, and further in the low-frequency portion. 3 The outer side is buried with a frequency unit 1〇4. Further, in the cutting mark 1〇2, the action instruction information for performing a specific action on the cut object 1〇丨 is buried. • 1 to 105-4. In the &lt;!2, the paper medium 1 is photographed by a camera or the like provided by a mobile terminal or the like (the second image information fetching device 15). Further, the mobile terminal (knowledge device) acquires a crop mark (10) from the captured image. Further, in the four corners of the cutting target a, the action instruction information ι〇5b (7) is buried, and the value of the action instruction information 105·1, 5− causes the action of the object to change. Here, the action indicates information...]]. "For example, information about each bit is embedded separately. Therefore, in order to automatically perform instruction information 105-1 to 105-4, an action is taken, and information is displayed, for example, four corners are read in a specific order (for example, clockwise direction). For example, the preset action is performed on the cropped object 1〇1 that is virtually combined with the background image, for example, corresponding to the result of the reading. Specifically, for example, the action indication information (4-bit) is “11”. _, moving the cropped object 101 to the right. Further, when the operation instruction information is "_", the cropped object 101 is moved to the left, and when the "(1) 1" is made, the cropped object HH is rotated in the same manner as the original object. The operation of the second embodiment is not limited thereto, and for example, the cut object 101, that is, the red dragonfly, can be changed in a direction away from the direction. Further, the posture (e.g., position (coordinate) and orientation) or moving speed of the cropped object can be obtained based on the motion instruction information. Further, the execution timing of the above-described operation may be performed, for example, by the user touching the screen on the screen (including tapping, touching, and sliding, which is widely included) on the screen of the mobile terminal device 160697.doc 201237802 to display the cropped object 1〇1. The specific operation is not limited to this embodiment. For example, the built-in camera (user side camera) of the mobile terminal device can also be used to instantly recognize the change in brightness. When the average brightness below the preset brightness exceeds a certain level, it is determined that the cut object 101 is to be touched. The cut object 101, that is, the red dragonfly, performs the action of flying away. Further, the content of the action set in accordance with the operation instruction information may be stored in advance in a storage unit or the like, and the predetermined external device 4 may be connected via a communication network or the like to acquire the action content of the object from the connected external device. Thereby, for example, the same operation instruction content can be operated differently from the action content stored in the storage means or the like and the action content acquired from the external device. Further, by acquiring the content of the operation from an external device or the like, it is possible to freely change the operation by management of the external device. &lt;Embodiment 3&gt; Fig. 21 is a view showing an example of an eight-foot application system of the third embodiment. Further, the AR application system of the circle 21 can be executed using the mobile terminal in the same manner as in the second embodiment. In the third embodiment, for example, a plurality of cropped objects 111-1 to 111-4 (in the example of Fig. 21, squid) which are the targets of performing a specific operation in the paper medium 11 of the transcript or the like are depicted. Further, around the respective cut objects "^~丨丨丨·々, the cutting marks 112-1 to 112-4 are embedded in the respective cut objects. The cutting marks (1) are within the winter, and 2(), in the same manner, a low frequency portion is embedded around the outer shape of each of the cut objects 111-1 to 11b, and a high frequency portion is embedded in the outer side of the low frequency portion. Further, the cutting mark 丨12_1~ The operation instruction information for performing the specific operation for each of the cropped objects lu 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 The image information acquisition device 15) captures the paper medium 110. Further, the mobile terminal (the recognition device 16) acquires the crop marks 112·1 to 112-4β from the captured image, and the mobile terminal device is based on the acquisition. Cut the marks 112_1~112·4, and cut the cut object (1) (1)^, which is the fish and the background image, to generate an image of the simulated composite and display it on the screen. As described above, crop 112-1 The operation instruction information is embedded in ~112-4, and the value of the operation instruction information is similar to that of the above embodiment. The action of the object 11 - lq u_4 changes, and the action indication information is in four corners, and may be embedded in, for example, the peripheral portion of the cutting marks u 2_ 1 to 112-4, or a predetermined cut. The specific position in the mark is cut. In the third embodiment, when the user's hand 113 touches the screen displaying the captured image, the ripple is diffused, and each of the cut objects Hill 11-4 is squid. Specifically, for example, when the fingertip of the user's hand 113 touches the screen, the mobile terminal (the recognition device 16) obtains the coordinates of the touch and each of the cropped objects m•卜^-々 That is, the coordinates of the squid, using the obtained coordinates to produce the cutting direction object 丨丨 丨 丨丨 丨丨 丨丨 斗 之 之 之 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 。 又 又 又 又 又 又 又 又 又 又 又 又 又 又The angle information of the object (1)-WU-4 is oriented. In addition, the motion indication information does not have to be in the corner of the cutting target, for example, and the information of the 8-bit element can be embedded in the periphery of each mark to enable it to be expressed. Out 160697.doc -43- 201237802 0~360 degree action. Further, action The end device adjusts the advance speed of each of the cropped objects 111-1 to 111-4 by normalizing the direction vector. The vector generated by the above method can be made even if it touches any position on the screen on the mobile terminal device. Each of the cropped objects 111-1 to 111-4 performs a radial traveling operation. <Embodiment 4> Fig. 22 is a view showing an example of the iAAR application system of the embodiment 4. The AR application system of Fig. 22 can also be used. The second embodiment is similarly executed by the mobile terminal device. In the fourth embodiment, a plurality of cutting targets 12M and i2i_2, which are targets for performing a specific operation in a paper medium 12 such as a fashion magazine, are depicted (in the example of Fig. 22). For clothing). Further, the above-described cutting marks 122_ 1 and 122-2 are embedded in the periphery of the cut objects 121_丨 and 121_2 in correspondence with the respective cut objects 121-1 and 121·2. In the fourth embodiment, the paper medium 120 is imaged by a camera or the like (second image information acquiring device 15) provided on a mobile terminal or the like. Furthermore, the mobile terminal (the acknowledgment device 16) acquires the crop marks mi, ΚΙ 2 from the captured image, and the action terminal device cuts the cut based on the obtained crop marks 122-1, 122-2. The object 121], 121_2, that is, the garment is separated from the background image, and a virtual composite image is produced and displayed on the four sides. &amp; The cropped image can be enlarged, reduced, rotated, and moved on the screen. Whenever the user shoots a product, a round image such as clothing is added to the screen, so you can enjoy the match according to your own preferences. fun of. Therefore, in the fourth embodiment, by performing the above functions, for example, the application 160697.doc 201237802 can be provided as a fashion matching application. Specifically, the product of the upper clothing item 131-1 or the skirt item ι31_2, the slave s shoes item 131-3, etc., is embedded in the cutting mark, and the user uses the mobile terminal to shoot the favorite. For the goods, each item can be screened into the written program 130 of the mobile terminal and displayed on the screen. The user selects the favorite item by the hand 132 from the mobile terminal - the item displayed on the screen 130, and moves it to the virtual side of the face 130. Further, the fourth embodiment is not limited to the clothing, and may include, for example, a hat or glasses, an ornament, a shoe, etc., and may be used in conjunction with a face image or the like to match the face type. Moreover, in the case of the above-described fashion matching application, the image of the back side of the product (for example, the design of the back side of the top or the skirt) or the price may be obtained from, for example, the operation instruction embedded in the cutting mark. Color information, etc. Further, for example, when the data capacity is increased due to a design image on the side of the body, the image can be displayed by a browser such as the Web. That is, if the information is small (for example, in the case of a few bits), the action instruction information can be embedded in the four corners or the peripheral part of the mark; if the information is large, it can be embedded only in advance. The 1D (identification information) of the information is sent to a specific external device (server) while the cutting is being performed, and the corresponding information is obtained from the Web or the like. In the above-mentioned embodiments, the action of the action information can be immediately after the recognition of the cutting mark, and the user can act after the specific time/line after the recognition. The terminal device (recognition device) performs the action of the shirt. 160697.doc -45· 201237802 In addition, when a plurality of (four) operations are performed, the operation instruction information may be set in advance according to each of the objects. Further, in the case where the instruction information is for a plurality of object objects, the embodiment in which two persons are shoulder-to-hand, and the grip is used, for example, there are two persons, and the embodiment is not limited thereto. Actions such as shoulder bars, but this

進而,本實施形態中,作A 他物件之動作。藉此二動作指示資訊亦可包含對其 又,… 使物件全體進行統-之動作。 又 所明統一之動作# it 1 ^ 複數個人物物件之全員進杆 物時,亦可埋入與該人物=資訊,例如物件為人 ^ X , Μ Μ 目關之資訊(姓名、年齡、性別 專)或網站、部落格等位址等之 加資訊,亦可附加用以於畫 a π °進而’作為附 他物件之資訊。以面上顯示與裁切物件不同之其Further, in the present embodiment, the action of the A object is made. In this way, the second action instruction information can also include the action of the whole object. The movement of the unified unit # it 1 ^ When all the members of the personal object enter the pole, they can also be buried with the person = information, for example, the object is a person ^ X, Μ Μ information (name, age, gender) Specialized) or information such as websites, blogs, etc., may also be used to attach a π ° and then 'as an attached object'. The surface is different from the cropped object

:I:為其他實施例,例如預先對商品或食品之 相子、容器或說明書等埋入卜.+.拉丄 农A 攝並認識圖像,藉此可將商„標記,以相機加以拍 法等顯說明書或使用方法、調理 〔Μ顯不於晝面。例如,裁切物件例如有化妝品(例如口 影睫毛膏等)之情形,藉由裁切標記取得該化妝 2之顏色資訊’並使用顏色資訊對自己或他人之臉部圖 像上妝。 上述實施例2〜4中所示之紙媒體j 〇〇、j j i 2〇亦可 為畫面上顯示之圖像。又’圖像亦包含映像。 如上述’根據本發明,可迅速進行非視覺化資訊之埋入 或摘取’並可提供附加價值性優良之高精度之内容。 160697.doc 46- 201237802 以上已對本發明之較佳實施形態等進行詳述,但本發明 並非限定於特定之實施形態,得於申請專利範圍所揭示之 本發明之主旨範圍内進行種種變形或變更。 本申請案係基於且主張2010年12月7曰提出申請的先前 曰本專利申請案第2010-272994號及2011年8月30日提出申 請的先前曰本專利申請案第2〇11_187597號之優先權的權 利;該案之全部内容以參照之方式併入本文中。 【圖式簡單說明】 圖1係顯示本發明之實施形態之内容提供系統之概略構 成之一例的圖。 圖2係顯示本發明之實施形態之埋入裝置之功能構成之 一例的圖。 圖3係顯示本發明之進行形體之認識裝置之功能構成之 一例的圖。 圖4係顯示本發明之實施形態之可實現非視覺化資訊之:I: For other embodiments, for example, pre-empting the phase or container of the product or food, the container or the manual, etc., and taking the image and recognizing the image, thereby marking the business and taking the camera The method or method of use, conditioning, etc., for example, when cutting an object such as a cosmetic (such as a mascara, etc.), the color information of the makeup 2 is obtained by cutting the mark' The color information is used to apply makeup to the face image of oneself or others. The paper media j 〇〇 and jji 2 所示 shown in the above embodiments 2 to 4 can also be images displayed on the screen. The above-mentioned 'according to the present invention, the non-visualization of information can be quickly embedded or extracted' and can provide high-precision content with excellent added value. 160697.doc 46- 201237802 The preferred embodiment of the present invention has been made above. The present invention is not limited to the specific embodiments, and various modifications and changes can be made within the spirit and scope of the invention as disclosed in the appended claims. The present application is based on and claims. put forward The priority of the prior patent application No. 2010-272994 filed on Jan. 30, 2011, the priority of which is hereby incorporated by reference. BRIEF DESCRIPTION OF THE DRAWINGS Fig. 1 is a view showing an example of a schematic configuration of a content providing system according to an embodiment of the present invention. Fig. 2 is a view showing an example of a functional configuration of an embedding device according to an embodiment of the present invention. Fig. 3 is a view showing an example of the functional configuration of the device for understanding the shape of the present invention. Fig. 4 is a view showing the non-visual information of the embodiment of the present invention.

一例的圖。 訊之埋入處 訊之認識處 圖5係顯示本發明之實施形態之非視覺化資 理程式之一例的流程圖。 圖6係顯示本發明之實施形態之非視覺化資 理程式之一例的流程圖。 之非視覺化資訊之 圖7(a)〜(d)係為對本發明之實施形態之非 埋入例進行說明的圖。 形態之非視覺化資訊之 圖8(a)〜(d)係為對本發明之實施形態之非 擷取及物件之認識例進行說明的圖。 160697.doc •47· 201237802 圖9(a) (c)係為說明本發明之實施形態之物件動作例的 圖。 圖l〇A係為說明包含本發明之實施形態之高頻部所形成 之特定像素區域之柵之例的圖。 圖10B係為說明包含本發明之實施形態之高頻部形成之 特定像素區域之栅之其他例的圖。 圖1 (a) (c)係為說明包含本發明之實施形態之高頻部及 低頻部之非視覺化資訊的圖。 圖12係顯示本發明之實施形態之埋入強度與背景之灰度 值之關係的圖。 圖13係顯示本發明之實施形態之單色影像用非視覺化編 碼之例的圖》 圖14係顯示本發明之實施形態之相對使灰色於〇〜255變 化之亮度之讀取認識精度與非視覺化之關係的圖。 圖1 5(a)〜(d)係為對本發明之實施形態之每一影像之非視 覺化資訊之讀取認識精度之評估進行說明的圖。 圖16A係顯示與本發明之實施形態圖15對應之評估結果 的岡。 圖16B係顯示與本發明之實施形態圖15對應之評價結果 的其他圖》 圖1 7(a)〜(c)係顯示本發明之實施形態之實施例1之ar應 用系統之一例的圖。 圖18(a)〜(d)係顯示與本發明之實施形態之取得之修正指 示相關之風車之動作例的圖。 I60697.doc • 48 · 201237802 圖19⑷(e)係為說明本發明之實施形態之畫像中所包含 之非視覺化資訊虫私&amp; &amp; 讯與動作指示資訊之關係的圖。 ()及(b)係顯示本發明之實施形態之實施例2之AR 應用系統之一例的圖》 .'、顯卞本發明之實施形態之實施例3之AR應用系統 之一例的圖。 圖(a)及(b)係顯示本發明之實施形態之實施例4之AR 應用系統之一例的圖。 【主要元件符號說明】 10 内容提供系統 11 第1印刷媒體 12 第1圖像資訊取得裝置 13 埋入裝置 14 第2印刷媒體 15 第2圖像資訊取得裝置 16 5忍識裝置 21 輸入機構 22 輸出機構 23 儲存機構 24 圆像取得機構 25 圖像解析機構 26 埋入對象物件設定機構 27 埋入資訊設定機構 28 埋入資訊產生機構 160697.doc •49· 201237802 29 非視覺化資訊埋入機構 30 收發機構 31 控制機構 41 輸入機構 42 輸出機構 43 儲存機構 44 埋入圖像取得機構 45 物件擷取機構 46 非視覺化資訊解析機構 47 顯示資訊產生機構 48 收發機構 49 控制機構 51 輸入裝置 52 輸出裝置 53 驅動裝置 54 輔助記憶裝置 55 記憶裝置 56 CPU 57 網路連接裝置 58 記錄媒體 61 圖像資料(攝影圖像) 62 物件 63 外框 64 輪廓 160697.doc -50- 201237802 65 編碼内框 66 高頻部 67 低頻部 71 裁切圖像 81 位置檢測圖案 82 旋轉檢測圖案 84 邊緣 91 埋入圖像 92 非視覺化資訊 93 動作指示資訊(埋入資訊) 100 紙媒體 101 裁切物件 102 裁切標記 103 低頻部 104 高頻部 105 動作指示資訊 110 紙媒體 111 裁切物件 112 裁切標記 113 手 120 紙媒體 121 裁切物件 122 裁切標記 130 畫面 -51 - 160697.doc 201237802 131 物件 132 手 150 物件 152 位置檢測圖案 154 高頻部 156 低頻部 200 明部 300 暗部 160697.doc -52-An example of a picture. The information is embedded in the information. Fig. 5 is a flow chart showing an example of a non-visualization management program according to an embodiment of the present invention. Fig. 6 is a flow chart showing an example of a non-visualization management program according to an embodiment of the present invention. Non-visualized information Fig. 7 (a) to (d) are diagrams for explaining a non-buried example of an embodiment of the present invention. Fig. 8 (a) to (d) are diagrams for explaining an example of the non-extraction and the object of the embodiment of the present invention. 160697.doc •47· 201237802 Fig. 9(a) and Fig. 9(c) are diagrams for explaining an example of the operation of the object according to the embodiment of the present invention. Fig. 3A is a view for explaining an example of a gate including a specific pixel region formed by a high-frequency portion of an embodiment of the present invention. Fig. 10B is a view for explaining another example of a gate including a specific pixel region in which a high-frequency portion is formed in the embodiment of the present invention. Fig. 1 (a) and (c) are diagrams for explaining non-visualization information including a high frequency portion and a low frequency portion according to an embodiment of the present invention. Fig. 12 is a view showing the relationship between the embedding strength and the gradation value of the background in the embodiment of the present invention. Fig. 13 is a view showing an example of non-visual coding for monochrome video according to an embodiment of the present invention. Fig. 14 is a view showing the reading accuracy of the brightness of the gray of the embodiment of the present invention. A diagram of the relationship of visualization. Fig. 15 (a) to (d) are diagrams for explaining the evaluation of the read recognition accuracy of the non-visualization information for each image of the embodiment of the present invention. Fig. 16A shows the results of the evaluation corresponding to Fig. 15 of the embodiment of the present invention. Fig. 16B is a view showing another example of the evaluation result corresponding to Fig. 15 of the embodiment of the present invention. Fig. 17 (a) to (c) are views showing an example of the ar application system of the first embodiment of the embodiment of the present invention. Figs. 18(a) to 18(d) are diagrams showing an operation example of a windmill relating to the correction instruction obtained in the embodiment of the present invention. I60697.doc • 48 · 201237802 Fig. 19 (4) and (e) are diagrams for explaining the relationship between the non-visual information insects &amp;&amp;&amp; information and the operation instruction information included in the portrait of the embodiment of the present invention. (a) and (b) are diagrams showing an example of an AR application system according to a second embodiment of the present invention, and an example of an AR application system according to a third embodiment of the embodiment of the present invention. Figures (a) and (b) are diagrams showing an example of an AR application system according to a fourth embodiment of the embodiment of the present invention. [Description of main component symbols] 10 Content providing system 11 First printing medium 12 First image information acquiring device 13 Embedding device 14 Second printing medium 15 Second image information acquiring device 16 5 Enduring device 21 Input mechanism 22 Output Mechanism 23 Storage mechanism 24 Circular image acquisition unit 25 Image analysis unit 26 Embedded object setting unit 27 Buried information setting unit 28 Buried information generation unit 160697.doc • 49· 201237802 29 Non-visual information embedding mechanism 30 Transceiver Mechanism 31 Control mechanism 41 Input mechanism 42 Output mechanism 43 Storage mechanism 44 Buried image acquisition mechanism 45 Object capture mechanism 46 Non-visualization information analysis mechanism 47 Display information generation mechanism 48 Transceiver 49 Control mechanism 51 Input device 52 Output device 53 Drive device 54 Auxiliary memory device 55 Memory device 56 CPU 57 Network connection device 58 Recording media 61 Image data (photographic image) 62 Object 63 Frame 64 Outline 160697.doc -50- 201237802 65 Encoding inner frame 66 High frequency part 67 Low frequency section 71 Crop image 81 Position detection pattern 82 Rotation detection pattern 84 Edge 91 Buried image 92 Non-visual information 93 Motion indication information (buried information) 100 Paper media 101 Cropped object 102 Crop mark 103 Low frequency part 104 High frequency part 105 Action indication information 110 Paper medium 111 Cropped object 112 Crop mark 113 Hand 120 Paper media 121 Cropped object 122 Crop mark 130 Screen -51 - 160697.doc 201237802 131 Object 132 Hand 150 Object 152 Position detection pattern 154 High frequency part 156 Low frequency part 200 Part 300 dark part 160697.doc -52-

Claims (1)

201237802 七、申請專利範圍: 1. 一種使用非視覺化資訊之内容提供系統,其包含: 非視覺化資訊之埋入裝置,其於取得之圖像之特定位 置埋入非視覺化資訊;及認識裝置,其認識藉由該埋入 • 裝置所獲得之圖像中所含之物件及非視覺化資訊;其特 徵在於, 上述埋入裝置包含: 埋入對象物件設定機構,其自上述取得之圖像中 所含之物件而設定所要埋入非視覺化資訊之物件;及 非視覺化資訊埋入機構,其於藉由上述埋入對象 物件設定機構所獲得之物件之周圍’埋入與上述物 件對應之上述非視覺化資訊; 上述認識裝置包含: 物件榻取機構,纟自上述圖像中所含之非視覺化 資afl之埋入區域願取物件; 非視覺化資訊解析機構,其當藉由上述物件掏取 機構操取出上述物件之情形時,自上述非視覺化資 訊解析對上述物件之處理内容;及 顯示資訊產生機構,其與藉由上述非視覺化資訊 解析機構所獲得之處理内容對應而產生顯示於畫面 之物件。 、 2· 一種埋入裝置,其係於取得之圖像之特定位置埋入非視 :化資訊之非視覺化資訊之埋入裝置者,其特徵I: 160697.doc 201237802 圖像解析機構,其取得上述圖像中所含之物件及位 資訊; 埋入對象物件設定機構,其自藉由上述圖像解析機構 所獲得之物件而設定上述圖像為埋入對象之物件;及 ;非視覺化資訊埋入機構’其於藉由上述埋入對象物件 »又疋機構所獲彳于之物件之周圍,埋入與上述物件對應之 上述非視覺化資訊。 3. 如凊求項2之埋入裝置,其中包含: 埋入資訊設定機構,其設定藉由上述非視覺化資訊埋 入機構所埋入之上述非視覺化資訊之埋入内容;及 埋入資訊產生機構,其自藉由上述埋入資訊設定機構 所设定之埋入資訊而產生經非視覺化之埋入資訊。 4. 如請求項3之埋入裝置,其中 上述埋入資訊設定機構將上述非視覺化資訊之形態設 為2維碼,且由相對於原始圖像之明度之低頻部及/或高 頻部而構成上述2維碼之編碼部。 5. 如請求項2之埋入裝置,其中 上述非視覺化資訊埋入機構基於藉由上述圖像解析機 構所取得之物件之位置資訊,而合成與上述物件對應之 非視覺化資訊。 6. 一種認識裝置,其係認識取得之圖像中所含之物件及非 視覺化資訊者’其特徵在於包含: 物件棟取機構’其自上述圖像中所含之非視覺化資訊 之埋入區域擷取物件; 160697.doc • 2 · 201237802 非視覺化資訊解析機構,立者 八曰猎由上述物件掘取機播 擷取出上述物件之情形時, 自上述非視覺化貧訊解析 上述物件之處理内容;及 顯示資訊產生機構,苴盘益丄, ,、與藉由上述非視覺化資訊解析 機構所獲得之處理内容對雍 于應而產生顯示於晝面之物件。 如請求項6之認識裝置,其中 上述物件掏取機構使用特定頻率對上述圖像進行過濾 處而自與取仔之頻率對應之區域擷取非視覺化資 訊。 8. -種埋入方法’其係於取得之圖像之特定位置埋入非視 覺化資訊之非視覺化資m 見。貝Λ之埋入方法,其特徵在於包 含: 圖像解析步驟,係取得上述圖像中所含之物件及位置 資訊; 埋入對象物件設定步驟,係自藉由上述圖像解析步驟 所獲得之物件而設定上述圖像為埋入對象之物件;及 非視覺化資訊埋入步驟,其於藉由上述埋入對象物件 設定步驟㈣得之物件之周圍,埋人與上述物件對應之 上述非視覺化資訊。 9.如請求項8之埋入方法,其中包含: 埋入資訊設定步驟’係設定藉由上述非視覺化資訊埋 入步驟所要埋人之上述非視覺化資訊之埋人内容;及 非視覺化資訊產生步驟,係自藉由上述埋入資訊設定 步驟所設定之埋入資訊而產生經非視覺化之埋入資訊。 160697.doc 201237802 1 〇.如請求項9之埋入方法,其中 上述埋入資訊設定步驟將上述非視覺化資訊之形態設 為2維碼,且由相對於原始圖像之明度之低頻部及/或高 頻部而構成上述2維碼之編碼部。 11. 如請求項8之埋入方法,其中 上述非視覺化資tfl埋入步驟基於藉由上述圖像解析步 驟斤取得之物件之位置資訊,而合成與上述物件對應之 非視覺化資訊。 12. -種認識方法,其係認識取得之圖像中所含之物件及非 視覺化資訊者,其特徵在於包含: 物件梅取步驟,係自上述圖像中所含之非視覺化資訊 之埋入區域擁取物件; 非視覺化資訊解析步驟,係當藉由上述物件操取步驟 ㈣出上述物件之情形時’自上述非視覺化資訊解析與 上述物件對應之處理内容;及 顯示資訊產生步驟,係與藉 兴精由上述非視覺化資訊解析 步驟所獲得之處理内容餅庙而^日一 對應而產生顯示於晝面之物件。 13. 如請求項12之認識方法,其中 上述物件擷取步驟佶用 使用特疋頻率對上述圖像進行過濾 處理,而自與取得之頻率 ’ ^ 貝丰對應之區域擷取非視覺化資 14. 之冤腦作為如請求項2之埋入裝置许 包含之圖像解析機構、埋人對象物件設定機構及非視賀 化資訊埋入機構之各機構而發揮功能。 160697.doc 201237802 15. —種認識程式,其使電腦作為如請求項6之認識裝置所 包含之物件擷取機構、非視覺化資訊解析機構及顯示資 訊產生機構之各機構而發揮功能。 160697.doc201237802 VII. Patent application scope: 1. A content providing system using non-visualized information, comprising: a non-visualized information embedding device, embedding non-visualized information at a specific position of the acquired image; and understanding The device recognizes an object and non-visual information contained in an image obtained by the embedding device; and the embedding device includes: an embedding object setting mechanism, wherein the image is obtained from the above An object that is to be embedded with non-visual information, and a non-visual information embedding mechanism that embeds and objects with the object obtained by the object object setting mechanism Corresponding to the above non-visualization information; the above-mentioned recognition device comprises: an object reclining mechanism, which is obtained from the buried area of the non-visualization fund afl included in the image; the non-visualized information analysis mechanism When the object picking mechanism handles the object, the processing content of the object is analyzed from the non-visual information; and the display Information generating means, the processing by which the non-visual information obtained by the analyzing means corresponding to the content generated is displayed on the screen of the object. 2. An embedding device that embeds a non-visualized non-visualized information embedding device at a specific position of the acquired image, and features I: 160697.doc 201237802 image analysis mechanism, Obtaining an object and bit information included in the image; and embedding the object object setting means, setting the image as an object to be embedded by the object obtained by the image analysis means; and; non-visualizing The information embedding mechanism embeds the non-visual information corresponding to the object by surrounding the object obtained by the object and the object. 3. The embedding device of claim 2, comprising: an embedding information setting mechanism that sets a buried content of the non-visualized information embedded by the non-visual information embedding mechanism; and embedding The information generating mechanism generates non-visualized buried information by embedding the information set by the embedded information setting institution. 4. The embedding device of claim 3, wherein the embedding information setting means sets the form of the non-visualized information to a two-dimensional code, and the low frequency portion and/or the high frequency portion of the brightness relative to the original image The coding unit constituting the above two-dimensional code. 5. The embedding device of claim 2, wherein the non-visual information embedding means synthesizes non-visualized information corresponding to the object based on position information of the object obtained by the image analyzing means. 6. A cognitive device that recognizes objects and non-visualized information contained in an acquired image, and is characterized by: an object building mechanism that buryes non-visualized information contained in the image Incorporating objects into the area; 160697.doc • 2 · 201237802 Non-visualized information analysis organization, when the above-mentioned object picking machine broadcasts the above objects, the above-mentioned non-visualized information is analyzed The content of the processing; and the display information generating mechanism, and the processing content obtained by the non-visual information analyzing mechanism are generated to produce the object displayed on the surface. The device of claim 6, wherein the object capture mechanism filters the image using a specific frequency and extracts non-visualized information from an area corresponding to the frequency of the picking. 8. - A method of embedding a 'non-visualization information' that embeds non-visualized information at a specific location of the acquired image. The method for embedding Bessie includes the following steps: an image analysis step of acquiring an object and position information contained in the image; and an embedding object setting step obtained by the image analysis step And setting the image as an object to be embedded; and a non-visual information embedding step, wherein the non-visual corresponding to the object is buried around the object obtained by the step (4) of embedding the object object Information. 9. The method of embedding according to claim 8, comprising: the embedding information setting step of setting a buried content of the non-visual information to be buried by the non-visual information embedding step; and non-visualization The information generation step generates non-visualized buried information by using the buried information set by the embedded information setting step. The method of embedding claim 9, wherein the embedding information setting step sets the form of the non-visualized information to a two-dimensional code, and the low frequency portion of the brightness relative to the original image and / or a high frequency portion to constitute an encoding portion of the above two-dimensional code. 11. The embedding method of claim 8, wherein the non-visualization tfl embedding step synthesizes non-visualized information corresponding to the object based on position information of the object obtained by the image parsing step. 12. A method of recognizing, which is to recognize the objects and non-visual information contained in the acquired image, and is characterized in that: the object taking step is based on the non-visual information contained in the image. Embedding the area to capture the object; the non-visual information parsing step is to analyze the content corresponding to the object from the non-visual information when the object is operated by the object (step 4); and display information generated The step is to generate an object displayed on the surface of the cake by the processing of the content of the cake obtained by the non-visual information analysis step. 13. The method of claim 12, wherein the object capturing step uses the special frequency to filter the image, and extracts the non-visualized resource from the region corresponding to the frequency of the obtained ' ^ Befeng The camphor functions as an image analysis unit, a buried object object setting mechanism, and a non-visualization information embedding mechanism which are included in the embedding device of claim 2. 160697.doc 201237802 15. An acquaintance program for causing a computer to function as an object capture mechanism, a non-visual information analysis mechanism, and a display information generating mechanism included in the acknowledgment device of claim 6. 160697.doc
TW100145223A 2010-12-07 2011-12-07 Content-providing system using invisible information, invisible information embedding device, recognition device, embedding method, recognition method, embedding program, and recognition program TW201237802A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010272994 2010-12-07
JP2011187597A JP4972712B1 (en) 2010-12-07 2011-08-30 Content providing system using invisible information, invisible information embedding device, recognition device, embedding method, recognition method, embedding program, and recognition program

Publications (1)

Publication Number Publication Date
TW201237802A true TW201237802A (en) 2012-09-16

Family

ID=46207201

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100145223A TW201237802A (en) 2010-12-07 2011-12-07 Content-providing system using invisible information, invisible information embedding device, recognition device, embedding method, recognition method, embedding program, and recognition program

Country Status (3)

Country Link
JP (1) JP4972712B1 (en)
TW (1) TW201237802A (en)
WO (1) WO2012077715A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104980606A (en) * 2014-04-03 2015-10-14 扬升照明股份有限公司 Apparatus for wireless transmission system and program thereof
US9258485B2 (en) 2014-03-24 2016-02-09 Omnivision Technologies, Inc. Image sensor cropping images in response to cropping coordinate feedback
TWI708164B (en) * 2019-03-13 2020-10-21 麗寶大數據股份有限公司 Virtual make-up system and virtual make-up coloring method

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5779158B2 (en) * 2012-09-18 2015-09-16 株式会社ソニー・コンピュータエンタテインメント Information processing apparatus and information processing method
KR102146244B1 (en) 2013-02-22 2020-08-21 삼성전자주식회사 Methdo for controlling display of a plurality of objects according to input related to operation for mobile terminal and the mobile terminal therefor
JP2017183942A (en) * 2016-03-29 2017-10-05 株式会社リコー Information processing device
JP6891055B2 (en) * 2017-06-28 2021-06-18 キヤノン株式会社 Image processing equipment, image processing methods, and programs
US10885689B2 (en) * 2018-07-06 2021-01-05 General Electric Company System and method for augmented reality overlay
JP6637576B1 (en) * 2018-10-31 2020-01-29 ソノー電機工業株式会社 Information processing program, server / client system, and information processing method
JP7451159B2 (en) * 2019-12-09 2024-03-18 キヤノン株式会社 Image processing device, image processing method, and program

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3214907B2 (en) * 1992-08-06 2001-10-02 松下電器産業株式会社 Image processing device
JPH1141453A (en) * 1997-07-24 1999-02-12 Nippon Telegr & Teleph Corp <Ntt> Imbedded electronic watermark read processing method, storage medium for imbedded electronic watermark processing program and electronic watermark read processing program storage medium
JP2002118736A (en) * 2000-10-10 2002-04-19 Konica Corp Electronic watermark inserting device and electronic watermark extracting apparatus, and electronic watermark system
JP2004194233A (en) * 2002-12-13 2004-07-08 Mitsubishi Electric Corp Contents management apparatus and contents distribution apparatus
JP4676852B2 (en) * 2005-09-22 2011-04-27 日本放送協会 Content transmission device
JP5168124B2 (en) * 2008-12-18 2013-03-21 富士通株式会社 Image marker adding apparatus, method, and program

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9258485B2 (en) 2014-03-24 2016-02-09 Omnivision Technologies, Inc. Image sensor cropping images in response to cropping coordinate feedback
CN104980606A (en) * 2014-04-03 2015-10-14 扬升照明股份有限公司 Apparatus for wireless transmission system and program thereof
CN104980606B (en) * 2014-04-03 2018-03-27 扬升照明股份有限公司 Apparatus and method for wireless transmission system
TWI708164B (en) * 2019-03-13 2020-10-21 麗寶大數據股份有限公司 Virtual make-up system and virtual make-up coloring method

Also Published As

Publication number Publication date
WO2012077715A1 (en) 2012-06-14
JP2012138892A (en) 2012-07-19
JP4972712B1 (en) 2012-07-11

Similar Documents

Publication Publication Date Title
TW201237802A (en) Content-providing system using invisible information, invisible information embedding device, recognition device, embedding method, recognition method, embedding program, and recognition program
CN110662484B (en) System and method for whole body measurement extraction
US7376276B2 (en) Indexing, storage and retrieval of digital images
JP5021061B2 (en) Non-visualization information embedding device, non-visualization information recognition device, non-visualization information embedding method, non-visualization information recognition method, non-visualization information embedding program, and non-visualization information recognition program
CN110276366A (en) Carry out test object using Weakly supervised model
US10650264B2 (en) Image recognition apparatus, processing method thereof, and program
KR20100138863A (en) Providing method of augmented reality and personal contents corresponding to code in terminal with camera
KR101744123B1 (en) Apparatus and method for detecting medicinal products
US11915305B2 (en) Identification of physical products for augmented reality experiences in a messaging system
JP2015001875A (en) Image processing apparatus, image processing method, program, print medium, and print-media set
JP2011209887A (en) Method and program for creating avatar, and network service system
CN106056183B (en) The printed medium of printing press readable image and the system and method for scanning the image
CN103888695B (en) Image processing terminal and image processing system
KR20130006878A (en) Method for restoring an image of object using base marker in smart device, and smart device for restoring an image of object using the base marker
EP3574837A1 (en) Medical information virtual reality server system, medical information virtual reality program, medical information virtual reality system, method of creating medical information virtual reality data, and medical information virtual reality data
CN111625100A (en) Method and device for presenting picture content, computer equipment and storage medium
CN111652983A (en) Augmented reality AR special effect generation method, device and equipment
JP6806955B1 (en) Information processing equipment, information processing systems, information processing methods, and programs
CN113011544A (en) Face biological information identification method, system, terminal and medium based on two-dimensional code
US20180232781A1 (en) Advertisement system and advertisement method using 3d model
CN113345110A (en) Special effect display method and device, electronic equipment and storage medium
CN113767410A (en) Information generation device, information generation method, and computer program
KR20140094057A (en) Method and system for interpreting symbolic codes
JP7426544B1 (en) Image processing system, image processing method, and program
US20240078839A1 (en) Ethical human-centric image dataset