TW200947265A - Presentation system - Google Patents

Presentation system

Info

Publication number
TW200947265A
TW200947265A TW098109826A TW98109826A
Authority
TW
Taiwan
Prior art keywords
image
coordinate
color
background
indicator
Prior art date
Application number
TW098109826A
Other languages
Chinese (zh)
Inventor
Yuichiro Takai
Original Assignee
Nissha Printing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nissha Printing
Publication of TW200947265A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/0416Control or interface arrangements specially adapted for digitisers
    • G06F3/0418Control or interface arrangements specially adapted for digitisers for error correction or compensation, e.g. based on parallax, calibration or alignment

Abstract

A presentation system simplifies the discrimination between a predetermined first process request, such as a command operation, and a predetermined second process request, such as a drawing operation, and provides an operating method that matches the presenter's intuition. The presentation system comprises a display surface 2; display control means for displaying prepared images; a pointer 3; a specific background region 91 or 92; and cameras 4, 5 that photograph a three-dimensional coordinate space. From the images captured by the cameras, three-dimensional coordinate values of the pointer and three-dimensional coordinate values of the specific background region are calculated. According to the mutual positional relationship between the three-dimensional coordinate values of the pointer and those of the specific background region, either the predetermined first process or the predetermined second process is executed.

Description

200947265 VI. Description of the Invention: [Technical Field] The present invention relates to a presentation system that displays images on a display body such as a screen or a monitor and in which, while the presenter gives a presentation, a pointer is moved to perform command operations, such as changing the displayed image, and at the same time a drawing process that adds lines and the like to the displayed image.
[Prior Art] In a conventional hand-pointing device, an image representing a three-dimensional space is displayed on a display, a person to be recognized is photographed from a plurality of mutually different directions, and it is determined whether that person's hand has taken a specific shape; when the other conditions are also met, the displayed image is enlarged or reduced (see, for example, Patent Document 1).

Patent Document 1: Japanese Patent Application Laid-Open No. H11-134089

[Problems to Be Solved by the Invention] This conventional hand-pointing device determines the shape of a hand. The size of a presenter's hand, the length of the fingers, and so on differ from person to person. The recognition target therefore has to be normalized and its variations judged, and the computational processing of the device becomes excessively concentrated. There are also cases in which the captured image changes with, for example, the direction in which the target person's hand is held, making discrimination difficult.

In addition, during a presentation the presenter simultaneously performs various actions, such as explaining, selecting the displayed image, pointing at specific positions in the image, and tracking the audience's reactions, and is also under mental strain. The operation of a device used in such circumstances should be as simple and intuitive as possible.

An object of the present invention is to provide a presentation system that simplifies the discrimination between a predetermined first process request, such as a command operation, and a predetermined second process request, such as a drawing operation. A further object of the present invention is to provide a presentation system whose operation is easy to master and whose operating method matches the presenter's intuition. Other objects of the present invention will become apparent from the description of the present invention.
[Means for Solving the Problems] A presentation system according to one aspect of the present invention comprises:

a display surface extending in the X-axis and Y-axis directions;
a prepared-image memory unit that stores prepared images to be displayed on the display surface;
prepared-image selection means for selecting a prepared image stored in the prepared-image memory unit;
display control means for displaying the prepared image selected by the prepared-image selection means on the display surface;
two cameras that photograph a three-dimensional coordinate space containing the display surface;
a colored pointer that can move within the three-dimensional coordinate space;
a pointer-color/background-color memory unit that stores color data of the pointer and color data of a specific background region, which is a partial region within the camera fields of view photographed by the two cameras;
a pointer-background mutual position memory unit that stores a fixed positional relationship between the pointer and the specific background region;
a photographic-image memory unit that stores the images photographed simultaneously by the two cameras;
coordinate position detection means for reading out the photographic images stored in the photographic-image memory unit, detecting the position of a pointer image of the same color as the pointer color stored in the pointer-color/background-color memory unit and calculating three-dimensional coordinate values of the pointer, and detecting a specific background image of the same color as the background color stored in that memory unit and calculating three-dimensional coordinate values of the specific background region; and
pointer-background mutual position determination means for determining whether the mutual positional relationship between the three-dimensional coordinate values of the pointer and those of the specific background region satisfies the fixed positional relationship stored in the pointer-background mutual position memory unit;
wherein the predetermined first process is performed when the pointer-background mutual position determination means finds that the fixed positional relationship is satisfied, and the predetermined second process is performed when it finds that the relationship is not satisfied.

The present invention determines the background color around the pointer: the predetermined first process is performed when the background color is a specific color, and the predetermined second process is performed when it is not. Examples of the predetermined processes include drawing on the image shown on the display surface, paging forward and backward through prepared images, starting and stopping so-called animations in which the displayed image changes, and starting and stopping sound.

In a preferred embodiment of the presentation system of the present invention, the specific background region may be a partial region of the outline area of the display surface on which the prepared image is shown. This embodiment uses the color of the outline portion of the display surface, for example the color of a wall, as the background color. When the presenter requests the predetermined first process, the pointer is moved to the outline (outside) of the prepared-image display surface; when the predetermined second process is requested, the pointer is moved within the display surface. The system therefore matches the presenter's intuition all the more.

In another preferred embodiment of the presentation system of the present invention, the specific background region may be a partial region of the display surface area on which the prepared image is shown.
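The branch described above (the first process when the pointer stands in the fixed positional relationship with the specific background region, the second process otherwise) can be sketched as below. This is a minimal illustration, not the patent's implementation: the function names and the plain Euclidean distance test are assumptions.

```python
def within_fixed_relation(pointer_xyz, region_xyz, max_distance):
    """True when the pointer is within the stored fixed positional
    relationship (sketched here as a simple distance threshold)."""
    dist = sum((p - r) ** 2 for p, r in zip(pointer_xyz, region_xyz)) ** 0.5
    return dist <= max_distance

def dispatch(pointer_xyz, region_xyz, max_distance, first_process, second_process):
    """Run the predetermined first process (e.g. a command operation)
    when the relation holds, otherwise the second process (e.g. drawing)."""
    if within_fixed_relation(pointer_xyz, region_xyz, max_distance):
        return first_process()
    return second_process()
```

In a real system the two process callbacks would be the command-control and drawing handlers described in the embodiments.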
This preferred embodiment uses a specific color on the display surface as the background color. For example, a green portion is created in advance on the right side of each slide within the slide display surface; whichever slide is displayed, the predetermined first process is performed when the pointer is placed on the green portion. The system therefore matches the presenter's intuition all the more.

In another preferred embodiment of the presentation system of the present invention, the predetermined first process may be a process that instructs the prepared-image selection means or the display control means to make a change, and the predetermined second process may be a drawing process that adds a drawing image to the image shown on the display surface. This embodiment performs command processing and drawing processing.

Another preferred embodiment of the presentation system of the present invention may be characterized in that the predetermined second process is a drawing process, and the presentation system further comprises: a pointer coordinate value memory unit that stores the three-dimensional coordinate values of the pointer; and drawing means that refers to the three-dimensional coordinate values stored in the coordinate value memory unit and generates a drawing image in accordance with those values; the display control means displays the drawing image together with the selected prepared image on the display surface; the coordinate value calculation means calculates the three-dimensional coordinate values x1, y1, z1 of the pointer at time t1 and stores them in the coordinate value memory unit, and likewise calculates the three-dimensional coordinate values x2, y2, z2 of the pointer at time t2, after a predetermined time has elapsed from time t1, and stores them in the memory unit; and the drawing image generated by the drawing means has its image type decided according to the value z1 of the Z-axis coordinate at time t1, its drawing start point decided from the X-axis and Y-axis coordinate values x1, y1 at time t1, and its drawing end point decided from the X-axis and Y-axis coordinate values x2, y2 at time t2.

According to this preferred embodiment, the drawing process corrects the drawing image in a manner dependent on the depth position of the pointer and adds the corrected drawing to the display surface. Operations that match the presenter's intuition and habits become possible, such as drawing more conspicuously on the part the presenter wants to emphasize.

The present invention described above, its preferred embodiments, and the constituent elements they contain may be combined and implemented wherever possible.

[Effects of the Invention] According to the present invention, a predetermined first process request and a predetermined second process request are discriminated on the basis of the background color. The discrimination of process requests is therefore simple, and the system's computation becomes straightforward; improved responsiveness is achieved, and the system is inexpensive. Moreover, operating the pointer over the specific-color portion becomes the predetermined first process request, and operating it over the image becomes the predetermined second process request, so the presenter's operation is intuitive and simple.

[Embodiment]
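The drawing embodiment above can be sketched as follows: one stroke runs from (x1, y1) at time t1 to (x2, y2) at time t2, and a drawing attribute such as line thickness is chosen from z1. The thickness bands reuse the example values stated in the description (100 pixels for 0 ≤ z1 < 50, and so on); the function and field names, and the behavior outside the stated bands, are assumptions for illustration.

```python
def thickness_for_depth(z1):
    """Pick a line thickness (pixels) from the pointer's Z value at t1,
    using the example bands from the description."""
    if 0 <= z1 < 50:
        return 100
    if 50 <= z1 < 100:
        return 70
    if 100 <= z1 < 150:
        return 40
    return 0  # outside the stated range: no stroke (assumption)

def make_stroke(p_t1, p_t2):
    """Build one drawing segment from the pointer positions at t1 and t2."""
    x1, y1, z1 = p_t1
    x2, y2, _ = p_t2
    return {"start": (x1, y1), "end": (x2, y2),
            "thickness": thickness_for_depth(z1)}
```

The same band-lookup pattern would apply to the hue example (red, orange, yellow) mentioned for the drawing correction memory unit.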
The presentation system 1 according to an embodiment of the present invention is described in further detail below with reference to the drawings. Unless otherwise specified, the dimensions, materials, shapes, relative positions, and the like of the members and parts described in the embodiments of the present invention are merely illustrative examples, and they are not intended to limit the scope of the present invention.

Fig. 1 shows the configuration of the presentation system 1. The presentation system 1 comprises a display surface 2, which is a screen, a pointer 3, a first camera 4, a second camera 5, and a system controller 8.

In this embodiment, images are displayed using a projector 6 and the screen. In the present invention, however, the display surface 2 is not limited to a screen; it may also be, for example, the display surface of a liquid crystal display.

The display surface 2 is a plane extending in the X-axis and Y-axis directions. In the present invention, "plane" includes both a geometrically defined plane and a curved surface corresponding to the characteristics of the projector or display. Also, in the present invention, the X axis and Y axis express their relative relationship to the Z axis, which represents the distance from the display surface.

The first camera 4 and the second camera 5 are cameras that photograph the three-dimensional space containing the display surface 2; they may be video cameras or still cameras. However, to allow smooth drawing and the like, images are preferably obtained at a rate of, for example, about 60 frames per second or more, so the first camera 4 and the second camera 5 are preferably video cameras, for example CCD cameras.

The first camera 4 and the second camera 5 preferably take the entire display surface 2 as their field of view; as shown by the field of view 7 in Fig. 1, the field of view preferably includes the outline area of the display surface 2. The number of cameras in a single presentation system is not limited to two; three or more may be used.

The pointer 3 is, for example, a red or green object whose shape is, for example, a sphere or a cube. To make it easier to operate the presentation system 1 by moving the pointer 3, the pointer 3 is preferably attached to the tip of a stick. Alternatively, the presenter may wear, for example, a red or green glove on one hand and use the glove as the pointer 3.

The pointer 3 is the means of operating the presentation system 1. The pointer 3 moves or stops within the fields of view of the first camera 4 and the second camera 5, and it also moves to or stops at positions outside those fields of view. The pointer within the field of view is detected, and the processing of the presentation system 1 is performed.

The pointer 3 may be positioned in front of a specific background region (91 or 92). The specific background region (1) 91 is an example in which the region is provided within the display surface. The specific background region can be provided by coloring a part of the slide, which is the prepared image, in a specific color. In this case, when there are multiple slides, the specific background region of the same color is preferably provided in a part common to all the slides (for example, the upper-left side). The specific background region (2) 92 is an example in which the region is provided outside the display surface; for example, a wall of the presentation venue can be used as the specific background region.

The system controller 8 is, for example, a computer. The system controller 8 comprises the following memory units: a prepared-image memory unit 11; a pointer-color/background-color LUT 12 (lookup table), which serves as the pointer-color/background-color memory unit; a three-dimensional coordinate memory unit 13; a lens correction coefficient memory unit 14; an image-space conversion coefficient memory unit 15; a moving-distance value memory unit 16; a drawing-image type memory unit 17; a drawing correction memory unit 18; a depth value memory unit 19; a pointer-background mutual position memory unit 20; a photographic-image memory unit 31; a lens-corrected image memory unit 32; a pointer coordinate value memory unit 33; and a background coordinate value memory unit 34. It further comprises prepared-image selection means 21, display control means 22, coordinate value calculation means 23, drawing means 24, first operation determination means 25, command control means 26, target color determination means 27, depth determination means 30, pointer-background mutual position determination means 51, a main control unit 28, and a peripheral device control unit 29.

The above means can be realized by a program loaded into the computer together with a CPU, RAM, ROM, and the like. Each of the above memory units can be realized, for example, by allocating a fixed portion of the computer's hard disk. As the prepared-image selection means 21, for example, a commercially available presentation program can be used.

The main control unit 28 performs overall operation control of the system controller 8 and the like. The peripheral device control unit 29 controls the operation of the first camera 4 and the second camera 5, image processing, the projector 6, and so on. The system controller 8 also has a system display device 41 and an input device 42. The system display device 41 is a liquid crystal display attached to the computer, and the input device 42 comprises a keyboard and a pointing device, for example a mouse.

The pointer-color/background-color LUT used in this embodiment, and the method of discriminating the pointer color or background color in the camera images, are described in the following document.
〇田田和和: "Color target detection using a nearest-neighbour identifier," Journal of the Information Processing Society of Japan: Computer Vision and Image Media, Vol. 44, No. SIG17-014, 2002.

(Initial setting) The initial setting operation of the presentation system 1 is described. The created images to be displayed on the display surface 2 are created and stored in the created-image memory unit 11. The created images are text, graphics, photographs, illustrations, moving images, and mixtures of these. A created image may also be a blank white image. A specific example of a created image is a slide original for a presentation.

The drawing image types are stored in the drawing image type memory unit 17. The types of drawing image are, for example, a solid line, a broken line, a one-dot chain line, a circular mark, a triangular mark, a quadrangular mark, an arrow, and the like. The number of stored drawing image types may also be one. The corrections of the drawing image are stored in the drawing correction memory unit 18. The correction items are, for example, the thickness of the line, the shade of the color, the hue, and the like, and the relationship between z1 and each correction value is described there. For example, if the thickness of the line is corrected, it is described that the thickness of the line is 100 pixels in the case of 0 ≤ z1 < 50, 70 pixels in the case of 50 ≤ z1 < 100, and 40 pixels in the case of 100 ≤ z1 < 150. Likewise, if the hue is corrected, it is described that the hue is red in the case of 0 ≤ z1 < 100, orange in the case of 100 ≤ z1 < 200, and yellow in the case of 200 ≤ z1 < 300.

In the indicator-background mutual position memory unit 20, the relationship between the detection coordinates of the indicator and the detection coordinates of the specific background area is described.
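As a sketch, the drawing-correction table described above can be read as a simple lookup keyed by z1. The range boundaries and return values follow the example in the text; the function names and the out-of-range behaviour (returning None) are illustrative assumptions.

```python
def line_thickness(z1):
    """Return line thickness in pixels for a given z1 depth value."""
    if 0 <= z1 < 50:
        return 100
    if 50 <= z1 < 100:
        return 70
    if 100 <= z1 < 150:
        return 40
    return None  # outside the ranges described in the text

def line_hue(z1):
    """Return the hue name for a given z1 depth value."""
    if 0 <= z1 < 100:
        return "red"
    if 100 <= z1 < 200:
        return "orange"
    if 200 <= z1 < 300:
        return "yellow"
    return None

print(line_thickness(75), line_hue(150))  # 70 orange
```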
When the operation of positioning the indicator 3 at a specific background area portion is performed, the Z-axis values of the two are close to each other, and the X-axis values and Y-axis values of the two are also close to each other. A positional relationship of this kind is described in the memory unit. For example, if the display surface is taken as a reference plane of Z = 0, and the specific background area lies in a specific background area plane of Z = n that is parallel to the reference plane, the indicator-background mutual position memory unit 20 is made to store the positional relationship that is satisfied when the X-axis and Y-axis values of the detection coordinates of the indicator are located within the specific background area, and the Z-axis value of the detection coordinates of the indicator and the Z-axis value of the detection coordinates of the specific background area are within a predetermined fixed range of each other. Alternatively, the indicator-background mutual position memory unit 20 may store in advance only a fixed value for the distance between the detection coordinates of the indicator and the detection coordinates of the specific background area. In this case, although the judgment of whether the indicator is located at the specific background area is not strict, the operation is not hindered if the value is set so that being sufficiently close is judged as satisfying the positional relationship.

(Environment setting) Next, the environment is set. The environment setting consists of installing the presentation system 1 at the actual place of use and entering into the presentation system 1 the data required to match the illumination conditions, the range of indicator movement, and the like. The presentation system 1 is installed at the presentation venue, and the display surface 2 and the projector 6 are arranged. The first camera 4 and the second camera 5 are arranged, and the camera field of view 7 is adjusted.
The camera field of view fixed here remains fixed throughout the subsequent environment settings and the stable operation. Next, the presentation system 1 is taught the recognition target colors. Fig. 2 is a diagram explaining the contents of the indicator-color/background-color LUT 12, and Fig. 3 is a flowchart of the recognition target color teaching process. The indicator-color/background-color LUT 12 has a plurality of color channels. For the first camera image it has a color channel Ch1-1 used for indicator color discrimination and a color channel Ch1-2 used for background color discrimination, and for each channel the recognition target color and the non-recognition target colors are stored. Likewise, for the second camera image it has a color channel Ch2-1 used for indicator color discrimination and a color channel Ch2-2 used for background color discrimination, and for each channel the recognition target color and the non-recognition target colors are stored.

At S11, the indicator 3 is positioned in the camera field of view 7, the specific background area (1) 91 is displayed on the display surface 2, and an image including the indicator 3 is captured by the first camera 4 and the second camera 5. When the specific background area (2) 92 is used, the indicator 3 is positioned in the camera field of view 7 and the camera field of view 7 is captured. At S12, the captured image of the first camera 4 and the captured image of the second camera 5 are stored in the captured image memory unit 31. At S13, the captured image of the first camera 4 is called from the captured image memory unit 31 and displayed on the system display device 41. At S14, the indicator 3 area on the system display device is designated with the input device 42. By this operation, the first indicator color C1-1 is taught. At S15, the first indicator color C1-1 is stored in the indicator-color/background-color LUT 12.
At S16, a non-indicator area is designated in the same image displayed on the system display device 41. By this operation the first non-recognition target color NC1-1-1 is taught. At S17, the first non-recognition target color NC1-1-1 is stored in the indicator-color/background-color LUT 12. S16 and S17 are repeated, and the other non-recognition target colors NC1-1-p are taught. Here, if the same image displayed on the system display device 41 contains no further non-recognition target color to be taught, the procedure is repeated from the capturing step. That is, an image including the particular non-recognition target color is displayed on the display surface 2, or an object of the particular non-recognition target color is positioned in the camera field of view 7, the first camera captures it, and then S12, S13, S16, and S17 are performed. In this way the recognition target color C1-1 of the indicator and the non-recognition target colors NC1-1-1, NC1-1-2, ..., NC1-1-n of the indicator are taught for the first camera image and stored in the indicator-color/background-color LUT 12. In Fig. 2 they are shown in the color channel Ch1-1. Next, the process proceeds to S18, and the background of the first camera image is processed in the same manner as S14 to S17. By this processing, the recognition target color C1-2 of the background of the first camera image and its non-recognition target colors NC1-2-1, NC1-2-2, ..., NC1-2-n are taught and stored in the indicator-color/background-color LUT 12. In Fig. 2 they are shown in the color channel Ch1-2. At S19, the same processing as S13 to S18 is repeated for the second camera image.
The second indicator color C2-1 designated at S14 is substantially the same color as the first indicator color C1-1 described above, but each is color data reflecting the color characteristics of the second camera and the first camera, respectively. The background colors C2-2 and C1-2 have the same relationship. When the teaching of the recognition target colors described above ends, the color channels Ch2-1 and Ch2-2 each store a recognition target color and non-recognition target colors, as shown in Fig. 2. In a presentation system including three or more cameras, the indicator color and its non-recognition target colors, and the background color and its non-recognition target colors, are taught in the same way for the third camera image and so on.

Next, referring to Fig. 4, the environment setting relating to image-space conversion is described. At S21, an image of reference points for correction is displayed on the display surface 2. Alternatively, a printed sheet of the same content may be attached to the display surface instead of displaying the image. The image of the reference points for correction is, for example, a lattice image or a checkered image. The display is then captured by the first camera 4 and the second camera 5. At S22, the captured image of the first camera 4 and the captured image of the second camera 5 are stored in the captured image memory unit 31. At S23, the captured image of the first camera 4 is called from the captured image memory unit 31 and displayed on the system display device 41, and the image coordinate values of a correction reference point on the displayed image are detected. At S24, the three-dimensional coordinate values (world coordinates) of that correction reference point are associated with the image coordinate values detected at S23. At S25, the three-dimensional coordinate values and the image coordinate values are stored in temporary memory.
The operations of S23 to S25 are repeated for the other correction reference points. At S26, the three-dimensional coordinate group and the one-to-one corresponding image coordinate group are called from temporary memory, and from the two groups the lens correction coefficient of the first camera image and the image-space conversion coefficient of the first camera image are calculated. The lens correction coefficient is a coefficient value, and the image-space conversion coefficient is a matrix value. At S27, the lens correction coefficient of the first camera image is stored in the lens correction coefficient memory unit 14. At S28, the image-space conversion coefficient of the first camera image is stored in the image-space conversion coefficient memory unit 15. At S29, operations similar to S23 to S28 are performed on the second camera image. By these operations, the lens correction coefficient of the second camera image is stored in the lens correction coefficient memory unit 14, and the image-space conversion coefficient of the second camera image is stored in the image-space conversion coefficient memory unit 15.

(Stable operation 1) Next, the stable operation process is described with reference to the flowchart of stable operation shown in Fig. 5 and the flowcharts (1/2 and 2/2) of the three-dimensional coordinate determination process for the indicator 3 and the specific background area shown in Fig. 6 and Fig. 7. Stable operation begins at S40. During stable operation, the indicator 3 moves or stops according to the operation of the presenter. At S41, the created-image selecting means 21 selects one image from the created-image memory unit 11. At S42, the display control means 22 displays the image on the display surface 2.
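The text states only that the image-space conversion coefficient is a matrix value. One common realisation, assumed here rather than stated in the source, is a 3×4 projection matrix applied to homogeneous coordinates, with a perspective divide to obtain image coordinates.

```python
def project_point(P, xyz):
    """Project a 3-D point to image coordinates with a 3x4 matrix P."""
    x, y, z = xyz
    # Multiply P by the homogeneous point (x, y, z, 1).
    h = [sum(P[r][c] * v for c, v in enumerate((x, y, z, 1.0)))
         for r in range(3)]
    return (h[0] / h[2], h[1] / h[2])  # perspective divide

# Toy matrix for illustration: image coordinates equal X and Y directly.
P = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]
print(project_point(P, (3.0, 4.0, 5.0)))  # (3.0, 4.0)
```

In practice the matrix would be estimated from the reference-point correspondences collected in S23 to S25.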
At S440, the first camera 4 and the second camera 5 simultaneously capture their respective fields of view; the three-dimensional coordinate values of the indicator 3 are then calculated from the position of the indicator 3 in the captured images, and the three-dimensional coordinate values of the specific background area are calculated from the position of the specific background area in the captured images. The detailed processing of S440 is explained separately below. By this calculation, the three-dimensional coordinate values x1, y1, z1 of the indicator 3 at time t1 and the three-dimensional coordinate values of the specific background area are obtained, as are the coordinate values x2, y2, z2 at time t2. The three-dimensional coordinate values of the indicator 3 are stored in the indicator coordinate memory unit 33, and the three-dimensional coordinate values of the specific background area are stored in the background coordinate value memory unit 34.

At S441, the indicator-background mutual position determining means 51 calls the three-dimensional coordinate values of the indicator at time t1 from the indicator coordinate memory unit 33, and calls the three-dimensional coordinate values of the specific background area at time t1 from the background coordinate value memory unit 34. The indicator-background mutual position determining means 51 then compares the positional relationship between the two sets of three-dimensional coordinate values with the contents of the indicator-background mutual position memory unit 20.

The determination by the indicator-background mutual position determining means 51 can be made, for example, as follows. When the whole detected background coordinate group is stored as the background area, the positional relationship between the determined detection indicator coordinate point stored in the indicator coordinate memory unit 33 and each detection background coordinate point is calculated for all combinations of points, and it is determined whether any combination of coordinates satisfies the positional relationship stored in the indicator-background mutual position memory unit 20. This method can be used when simply judging the distance between the detection coordinates of the indicator and the detection coordinates of the specific background area. Alternatively, when the display surface is the reference plane and the specific background area plane containing the background area can be assumed parallel to the reference plane, the maximum and minimum of the X-axis values and of the Y-axis values of the detected background coordinate group are obtained in advance by calculation, and whether the positional relationship stored in the indicator-background mutual position memory unit 20 is satisfied is determined from whether the determined detection indicator coordinate point lies between the maximum and minimum of the X-axis values and of the Y-axis values of the background coordinate group, and from whether the distance between the Z-axis value of the determined detection indicator coordinate point and the Z-axis value of the specific background area plane satisfies the positional relationship. As a further alternative, the coordinate value calculation means detects the boundary points corresponding to the outermost contour of the background coordinate group, the plane formed by the background area boundary points is determined as the specific background area, and the specific background area plane in which the specific background area lies is calculated from the background area boundary points by a space-vector method; whether the positional relationship stored in the indicator-background mutual position memory unit 20 is satisfied is then determined from the distance between the specific background area plane and the determined detection indicator coordinate point, and from whether the determined detection indicator coordinate point lies inside the region bounded by the normals erected along the boundary of the specific background area on the specific background area plane. This method is effective even when the specific background area is not parallel to the reference plane.

When the relationship of the two sets of three-dimensional coordinate values at time t1 satisfies the stored positional relationship, the process proceeds to S442. When it does not, the process proceeds to S500. S442 is the start of the predetermined first processing. In the present embodiment, the predetermined first processing is command control processing. That is, at S443, the command control means 26 performs a predetermined process on the currently displayed created image or on the created-image selecting means. The predetermined process may be, for example, a process in which the display control means 22 displays the next created image on the display surface 2 in place of the current created image, or, for example, a process of enlarging or reducing the image being displayed. S500 is the start of the predetermined second processing. In the present embodiment, the predetermined second processing is drawing processing. At S51, the drawing means 24 selects one drawing image type from the drawing image type memory unit 17.
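The second determination method described above (the parallel-plane case) amounts to a bounding-box test in X and Y plus a tolerance test in Z. A minimal sketch, with illustrative function names and values:

```python
def satisfies_position(indicator, bg_points, z_plane, z_tol):
    """Check whether the indicator point lies within the X/Y extent of the
    detected background coordinate group and within z_tol of its plane."""
    xs = [p[0] for p in bg_points]
    ys = [p[1] for p in bg_points]
    x, y, z = indicator
    return (min(xs) <= x <= max(xs)
            and min(ys) <= y <= max(ys)
            and abs(z - z_plane) <= z_tol)

# Illustrative background coordinate group lying in the plane Z = 100.
bg = [(0, 0, 100), (200, 0, 100), (200, 150, 100), (0, 150, 100)]
print(satisfies_position((50, 60, 102), bg, 100, 5))  # True
print(satisfies_position((50, 60, 130), bg, 100, 5))  # False
```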
For example, during stable operation the drawing image type may be edited in advance so that the same drawing image type is always selected, or the program may be edited so that the drawing image type is selected according to the values of x1, y1, and z1. At S52, the drawing means 24 refers to the indicator coordinate memory unit 33 and corrects the drawing image according to the value of z1. When the drawing image is a line, for example, the image is corrected to a relatively thick line when the value of z1 is relatively close to the display surface, and to a relatively thin line when the value of z1 is relatively far from the display surface. At S53, the drawing means 24 refers to the values of x1 and y1 in the indicator coordinate memory unit 33 and determines the start point of the drawing image. At S54, the drawing means 24 refers to the values of x2 and y2 in the indicator coordinate memory unit 33 and determines the end point of the drawing image. At S55, the drawing means calls from the created-image memory unit 11 the created image that was selected at S41 and is now displayed on the display surface 2, and performs a process of superimposing the drawing image corrected in S51 to S54 on the created image. The drawing process may display the drawing image at the forefront, in a form in which the overlapped part of the created image is hidden, or the drawing image may, for example, be drawn over it at 50% transparency. The superimposed image is then transmitted to the display control means 22. At S56, the display control means 22 displays the superimposed image on the display surface 2. When S56 ends, the data x2, y2, z2 in the indicator coordinate memory unit 33 are redefined as the new x1, y1, z1.
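A minimal sketch of the 50% over-drawing mentioned above, as a per-pixel alpha blend. The helper name and the RGB tuple representation are illustrative; the 0.5 factor follows the example in the text.

```python
def blend_pixel(draw_rgb, base_rgb, alpha=0.5):
    """Blend a drawing-image pixel over a created-image pixel."""
    return tuple(round(alpha * d + (1 - alpha) * b)
                 for d, b in zip(draw_rgb, base_rgb))

# Pure red drawn at 50% over pure blue gives an even mix.
print(blend_pixel((255, 0, 0), (0, 0, 255)))  # (128, 0, 128)
```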
(Stable operation 2: three-dimensional coordinate determination processing) The three-dimensional coordinate determination processing of the indicator 3 and the specific background area in S440 is now described. At S61, the first camera 4 and the second camera 5 simultaneously capture their fixed fields of view. At S62, the captured image of the first camera 4 and the captured image of the second camera 5 are stored in the captured image memory unit 31. At S63, the captured image of the first camera 4 is called from the captured image memory unit 31. At S64, the lens correction coefficient of the first camera image stored in the lens correction coefficient memory unit 14 is used to lens-correct the first camera image. At S65, the lens-corrected image of the first camera is stored in the lens-corrected image memory unit 32. At S66, the second camera image is processed in the same manner as S62 to S65. The lens correction coefficient used in this processing is the lens correction coefficient of the second camera image; the lens-corrected image of the second camera is likewise stored in the lens-corrected image memory unit 32. At S67, the first point of the three-dimensional coordinate space stored in the three-dimensional coordinate memory unit 13 is extracted. The coordinates of the extracted point are called the "extracted three-dimensional coordinates". The three-dimensional coordinate space is a space that includes the display surface 2 (extending in the X-axis and Y-axis directions) and extends from the display surface 2 toward the front (the direction in which the presenter and the audience are located: the Z axis) and toward the rear. At the same time, the three-dimensional coordinate space is the space captured in the camera field of view 7.
In the three-dimensional coordinate memory unit 13, the coordinate points obtained by dividing the three-dimensional coordinate space into minute cubes or minute rectangular parallelepipeds are stored. The positive and negative of the Z axis of the three-dimensional coordinates are defined by the right-handed coordinate system, the sign of the Z axis being corrected as needed. At S68, the coordinate value calculation means 23 converts the extracted three-dimensional coordinates into image coordinates of the first camera image, using the image-space conversion coefficient of the first camera image stored in the image-space conversion coefficient memory unit 15. The converted image coordinates are called the "extracted image coordinates". At S69, the target color determining means calls the first camera lens-corrected image and, referring to the color channels Ch1-1 and Ch1-2 for the first camera image in the indicator-color/background-color LUT 12, determines whether the pixel at the extracted image coordinates matches a recognition target color. If it matches a recognition target color, the process moves to S70. If it does not, the process returns to S67, the next point of the three-dimensional coordinate space stored in the three-dimensional coordinate memory unit 13 is extracted, and the processing of S68 and S69 is performed. At S70, the lens-corrected image of the second camera is subjected to the same processing as S68 and S69. This time, the image-space conversion coefficient used for the conversion to the extracted image coordinates is the image-space conversion coefficient of the second camera image, and the color data in the indicator-color/background-color LUT 12 referred to by the target color determining means are the color channels Ch2-1 and Ch2-2 for the second camera image.
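The cited discrimination method is a nearest-neighbour classifier over taught colors. As a rough sketch of the per-pixel judgement at S69: a pixel is treated as the recognition target when the taught target color is nearer (here in squared RGB distance) than every taught non-target color. The channel contents and the RGB metric are illustrative assumptions, not values from the source.

```python
def is_target_color(pixel, target, non_targets):
    """Nearest-neighbour test: True when the taught target colour is closer
    to the pixel than every taught non-target colour."""
    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return all(d2(pixel, target) < d2(pixel, nc) for nc in non_targets)

ch1_1_target = (200, 30, 30)                  # illustrative taught colour C1-1
ch1_1_non = [(240, 240, 240), (20, 20, 20)]   # illustrative NC1-1-1, NC1-1-2
print(is_target_color((190, 40, 35), ch1_1_target, ch1_1_non))    # True
print(is_target_color((230, 230, 230), ch1_1_target, ch1_1_non))  # False
```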
At S71, the target color determining means 27 determines whether the recognition target color has been detected in both the first camera image and the second camera image. In other words, when the pixel at the extracted image coordinates of the second camera image is also determined to match the recognition target color, the extracted three-dimensional coordinates become a detection coordinate point of the indicator 3 or of the specific background area. In the case of the indicator color, the extracted three-dimensional coordinates are stored in the indicator coordinate memory unit 33 as a detection indicator coordinate point. In the case of the background color, the extracted three-dimensional coordinates are stored in the background coordinate value memory unit 34 as a detection background coordinate point. When the recognition target color is not detected in the second camera image, the process returns to S67, the next point of the three-dimensional coordinate space stored in the three-dimensional coordinate memory unit 13 is extracted, and processing similar to S68 and S69 is performed. At S72, the processing of S67 to S71 is repeated, and the recognition target color detection processing is performed at all points of the three-dimensional coordinate space. At S73, if the recognition target color was detected in the above processing, the process proceeds to S74; if it was not detected, the process moves to S76. At S74, the detection indicator coordinate group stored in the indicator coordinate memory unit 33 is determined as a single point by the coordinate value calculation means. The point can be determined, for example, by obtaining the center of gravity of the detection indicator coordinate group, or by selecting the coordinates of the point in the group closest to the display surface.
Next, the background area is determined from the detection background coordinate group stored in the background coordinate value memory unit 34. The coordinate value calculation means detects the boundary points corresponding to the outermost contour of the background coordinate group, and the plane formed by the background area boundary points is determined as the specific background area. The background area boundary points include the vertices of the background area and consist of three or more points. At S75, the determined detection indicator coordinate point is stored in the indicator coordinate memory unit 33 as x1, y1, z1 at time t1, and the determined background area is stored in the background coordinate value memory unit 34. At S76, the process returns to S61, and coordinate detection is performed on the camera images of the next frame (time t2). As a result, the coordinate values x2, y2, z2 of the indicator at time t2 are stored in the indicator coordinate memory unit 33, and the background area at time t2 is stored in the background coordinate value memory unit 34. In a presentation system with three or more cameras, after the processing of S68 and S69 on the lens-corrected image of the second camera, the same processing as S68 and S69 is performed on the lens-corrected image of the third camera, and so on. In a presentation system using a huge display surface 2, once the environment setting relating to image-space conversion has been performed as above, the range of extracted three-dimensional coordinate values used in the three-dimensional coordinate calculation of the indicator during stable operation may be limited to the range of indicator movement. In this way, an effect such as an improved calculation speed is obtained.
(First modified embodiment) A presentation system 1 of a preferred embodiment to which a depth position determination for the indicator is added (the first modified embodiment) is described. The added depth position determination is a process of judging whether to perform the predetermined second processing based on the depth position of the indicator in the three-dimensional space. According to this embodiment, the predetermined second processing is performed when the indicator is located within a fixed range of depth. For example, drawing is performed when the presenter approaches the display surface and moves the indicator; on the other hand, drawing is not performed when the presenter moves the indicator away from the display surface, for example near the podium. In general, when drawing on a created image such as a slide, a presenter habitually moves the indicator at a roughly fixed depth near the display surface. Therefore, in this embodiment, when the predetermined second processing is drawing processing, the invention becomes a presentation system that more closely matches the presenter's preferences. However, the processing performed after the depth position determination of the present invention is not limited to drawing processing. Fig. 8 is a flowchart illustrating the first modified embodiment. In the first modified embodiment, the processing of S81 to S82 is inserted between S500 (the start of the predetermined second processing) and S51 (the start of drawing) of the stable operation described above. A threshold value for the Z axis of the indicator is stored in advance in the depth value memory unit 19; alternatively, the two ends of a range on the Z axis are stored in advance. In the case of a threshold value, it is described that the process moves to the drawing processing when z1 is larger (or smaller) than the threshold value.
In the case of the two end values, it is described that the process moves to the drawing processing when z1 lies between the two ends (or when it lies outside them). The values and conditions stored in the depth value memory unit as above are collectively referred to as the "threshold conditions". At S81, the depth determining means 30 compares the value stored in the depth value memory unit 19 with the value of z1 stored in the indicator coordinate memory unit 33. At S82, when z1 satisfies the threshold condition, the process proceeds to S51 and the drawing processing is performed; when z1 does not satisfy the threshold condition, the process returns to S440.

(Second modified embodiment) Next, a presentation system 1 of another preferred embodiment to which a moving speed determination for the indicator is added (the second modified embodiment) is described. The added moving speed determination is a process of judging whether to perform the predetermined second processing based on the moving speed of the indicator in the three-dimensional space. According to this embodiment, the predetermined second processing is performed when the indicator moves at a specific speed. In general, a presenter habitually moves the indicator slowly when requesting drawing on a created image such as a slide; on the other hand, the presenter habitually moves the indicator quickly when requesting, for example, a slide change. Therefore, in this embodiment, when the predetermined second processing is drawing processing, it becomes a presentation system that more closely matches the presenter's preferences. However, the processing performed after the moving speed determination of the present invention is not limited to drawing processing. Fig. 9 is a flowchart illustrating the second modified embodiment.
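The threshold conditions of S81 to S82 can be sketched as a single function covering both stored forms: a single threshold value, or the two ends of a range (inside or outside). The function signature and the choice of a strict comparison are illustrative assumptions.

```python
def passes_depth(z1, threshold=None, band=None, inside=True):
    """Depth judgement of S81-S82: compare z1 with a single threshold,
    or test whether it lies inside (or outside) a stored band."""
    if band is not None:
        lo, hi = band
        return (lo <= z1 <= hi) == inside
    return z1 > threshold

print(passes_depth(120, threshold=100))  # True  -> proceed to drawing (S51)
print(passes_depth(120, band=(0, 100)))  # False -> return to S440
```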
In the second modified embodiment, the processing of S91 to S92 is inserted between S500 (the start of the predetermined second processing) and S51 (the start of drawing) of the stable operation described above. A moving distance value of the indicator is stored in advance in the moving distance value memory unit 16. At S91, the first operation determining means 25 calculates the distance (the calculated moving distance) between the position of the indicator at time t1 and the position of the indicator at time t2, which are stored in the indicator coordinate memory unit 33. At S92, the first operation determining means 25 compares the moving distance value stored in the moving distance value memory unit 16 with the calculated moving distance. When the calculated moving distance is smaller than the stored distance value, the process proceeds to S51 and the drawing processing is started. When the calculated moving distance is larger than the stored distance value, the process returns to S440. The storage in the moving distance value memory unit 16 and the determination by the first operation determining means 25 can be made, for example, as follows. (1) A single threshold value is stored, and the magnitude of the calculated moving distance is compared with it. (2) The two ends of a single range are stored, and it is judged whether the calculated moving distance lies inside or outside the range. (3) The two ends of two ranges are stored, and it is judged in which of the ranges the calculated moving distance lies.

[Brief description of the drawings] Fig. 1 is a block diagram of the presentation system 1. Fig. 2 is a diagram showing the contents of the indicator-color/background-color LUT 12. Fig. 3 is a flowchart of the recognition target color teaching process. Fig. 4 is a flow of the environment setting relating to image-space conversion. Fig. 5 is a flowchart of stable operation.

Fig. 6 is a flow chart (1/2) of the three-dimensional coordinate determination processing for the indicator 3 and the specific background region.
Fig. 7 is a flow chart (2/2) of the three-dimensional coordinate determination processing for the indicator 3 and the specific background region.
Fig. 8 is a flow chart of the additional depth position determination of the indicator 3.
Fig. 9 is a flow chart of the additional moving speed determination of the indicator 3.

[Description of main reference signs]

1 presentation system
2 display surface
3 indicator
4 first camera
5 second camera
7 camera field of view
8 system controller

12 indicator-color/background-color LUT of the indicator-color/background-color memory unit
91 specific background region (1)
92 specific background region (2)

Claims (1)

VII. Scope of patent application:

1. A presentation system comprising:
a display surface extending in the X-axis and Y-axis directions;
a created-image memory unit that stores created images to be displayed on the display surface;
created-image selection means that selects a created image stored in the created-image memory unit;
display control means that displays the created image selected by the created-image selection means on the display surface;
two cameras that photograph a three-dimensional coordinate space containing the display surface;
a colored indicator that can move within the three-dimensional coordinate space;
an indicator-color/background-color memory unit that stores color data of the indicator and that stores, for a specific background region that can lie within the camera fields of view photographed by the two cameras and that is a partial region of those fields of view, color data of the specific background region;
an indicator/background mutual position memory unit that stores a fixed positional relationship between the indicator and the specific background region;
photographed-image memory units that each store the photographed images captured simultaneously by the two cameras;
coordinate position detection means that reads out the photographed images stored in the photographed-image memory units, detects the position of an indicator image of the same color as the indicator color stored in the indicator-color/background-color memory unit and calculates three-dimensional coordinate values of the indicator, and detects a specific background image of the same color as the background color stored in the indicator-color/background-color memory unit and calculates three-dimensional coordinate values of the specific background region; and
indicator/background mutual position determination means that determines whether the mutual positional relationship between the three-dimensional coordinate values of the indicator and the three-dimensional coordinate values of the specific background region satisfies the fixed positional relationship stored in the indicator/background mutual position memory unit;
wherein a predetermined first processing is performed when the indicator/background mutual position determination means finds the fixed positional relationship satisfied, and a predetermined second processing is performed when the indicator/background mutual position determination means finds the fixed positional relationship not satisfied.

2. The presentation system of claim 1, wherein the specific background region is a partial region of a contour region of the display surface on which the created image is displayed.

3. The presentation system of claim 1, wherein the specific background region is a partial region of the display surface region on which the created image is displayed.

4. The presentation system of claim 1, wherein the predetermined first processing is processing that instructs a change to the created-image selection means or the display control means, and the predetermined second processing is drawing processing that adds a drawing image to the image displayed on the display surface.

5. The presentation system of claim 1, wherein the predetermined second processing is drawing processing, the presentation system further comprising:
an indicator coordinate value memory unit that stores the three-dimensional coordinate values of the indicator; and
drawing means that refers to the three-dimensional coordinate values stored in the coordinate value memory unit and generates a drawing image in accordance with the three-dimensional coordinate values;
wherein the display control means displays the drawing image and the selected created image simultaneously on the display surface;
the coordinate value calculation means calculates the three-dimensional coordinate values x1, y1, z1 of the indicator at time t1 and stores them in the coordinate value memory unit, and calculates the three-dimensional coordinate values x2, y2, z2 of the indicator at time t2, after a predetermined time has elapsed from time t1, and stores them in the coordinate value memory unit; and
the drawing image generated by the drawing means has its image type determined in accordance with the value z1 of the Z-axis coordinate at time t1, its drawing start point determined in accordance with the values x1, y1 of the X-axis and Y-axis coordinates at time t1, and its drawing end point determined in accordance with the values x2, y2 of the X-axis and Y-axis coordinates at time t2.
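The drawing step of claim 5 can be illustrated with a short sketch, offered purely as one possible reading and not as part of the claims: the image type is chosen from the depth z1, while the start and end points come from the X/Y coordinates at times t1 and t2. The `z_bands` mapping and all names here are hypothetical, since the claim only states that the type depends on z1.

```python
def make_drawing_image(p1, p2,
                       z_bands=((0.0, 0.5, "line"), (0.5, 1.0, "arrow"))):
    """Illustrative sketch of the drawing step described in claim 5.

    p1 = (x1, y1, z1) is the indicator position at time t1, and
    p2 = (x2, y2, z2) is the position at time t2 after a predetermined
    time.  The image type is selected from the depth value z1; the
    drawing start point is (x1, y1) and the end point is (x2, y2).
    The depth-to-type bands are an assumed example encoding.
    """
    x1, y1, z1 = p1
    x2, y2, _ = p2
    kind = next((name for lo, hi, name in z_bands if lo <= z1 < hi), None)
    return {"type": kind, "start": (x1, y1), "end": (x2, y2)}

img = make_drawing_image((0.1, 0.2, 0.3), (0.4, 0.5, 0.6))
```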
TW098109826A 2008-03-27 2009-03-26 Presentation system TW200947265A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008083911A JP5134409B2 (en) 2008-03-27 2008-03-27 Presentation system

Publications (1)

Publication Number Publication Date
TW200947265A true TW200947265A (en) 2009-11-16

Family

ID=41113240

Family Applications (1)

Application Number Title Priority Date Filing Date
TW098109826A TW200947265A (en) 2008-03-27 2009-03-26 Presentation system

Country Status (3)

Country Link
JP (1) JP5134409B2 (en)
TW (1) TW200947265A (en)
WO (1) WO2009119026A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102918828B (en) * 2010-05-31 2015-11-25 株式会社Pfu Overhead scanner device and image processing method
EP2395413B1 (en) * 2010-06-09 2018-10-03 The Boeing Company Gesture-based human machine interface
JP2012150636A (en) * 2011-01-19 2012-08-09 Seiko Epson Corp Projection type display device and information processing system
CN106454068B (en) * 2016-08-30 2019-08-16 广东小天才科技有限公司 A kind of method and apparatus of fast acquiring effective image
US10890653B2 (en) 2018-08-22 2021-01-12 Google Llc Radar-based gesture enhancement for voice interfaces
US10698603B2 (en) 2018-08-24 2020-06-30 Google Llc Smartphone-based radar system facilitating ease and accuracy of user interactions with displayed objects in an augmented-reality interface

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08242469A (en) * 1995-03-06 1996-09-17 Nippon Telegr & Teleph Corp <Ntt> Image pickup camera
JPH09311759A (en) * 1996-05-22 1997-12-02 Hitachi Ltd Method and device for gesture recognition
JP2002196873A (en) * 2000-12-27 2002-07-12 Ntt Docomo Inc Device and method for inputting handwritten data, personal certification device and its method
JP2005063225A (en) * 2003-08-15 2005-03-10 Nippon Telegr & Teleph Corp <Ntt> Interface method, system and program using self-image display
JP2007241833A (en) * 2006-03-10 2007-09-20 Kagoshima Univ Recognition device, recognition system, shape recognition method, program and computer readable recording medium

Also Published As

Publication number Publication date
JP2009237951A (en) 2009-10-15
JP5134409B2 (en) 2013-01-30
WO2009119026A1 (en) 2009-10-01
