TW200947261A - A presentation system - Google Patents

A presentation system

Info

Publication number
TW200947261A
TW200947261A (application TW098109824A)
Authority
TW
Taiwan
Prior art keywords
image
coordinate
camera
display surface
indicator
Prior art date
Application number
TW098109824A
Other languages
Chinese (zh)
Inventor
Yuichiro Takai
Original Assignee
Nissha Printing
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nissha Printing
Publication of TW200947261A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • G06F3/042Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means
    • G06F3/0425Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface, e.g. video camera imaging a display or a projection screen, a table or a wall surface, on which a computer generated image is displayed or projected

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

This invention provides a presentation system that simplifies the three-dimensional coordinate calculation and offers an operating method suited to the presenter's habits. The presentation system 1 comprises: a display surface 2 extending along the X axis and Y axis; display control means 22 for displaying a prepared image; a pointer 3; cameras 4, 5 for photographing a three-dimensional coordinate space; coordinate-value calculation means 23 for calculating the pointer's coordinate values from the images the cameras capture simultaneously; and drawing means 24. After the pointer's coordinate values x1, y1, z1 at time t1 and x2, y2, z2 at time t2 are calculated, the drawing image is corrected according to the value z1, the drawing start point is set according to the values x1, y1, and the drawing end point is set according to the values x2, y2.

Description

VI. Description of the Invention

[Technical Field] The present invention relates to a presentation system that displays an image on a display body such as a screen or a monitor and that, while a presenter gives a presentation, lets the presenter move a pointer and annotate the image with lines and the like.

[Prior Art] A conventional hand-pointing device displays an image representing a three-dimensional space on a monitor, photographs the person to be recognized from a plurality of mutually different directions, and determines the coordinates of the specific position at which that person is pointing (see, for example, Patent Document 1).
This hand-pointing device determines the three-dimensional coordinates of a feature point whose position changes as the recognized person bends and extends an arm, and of a reference point whose position does not change even when the person bends and extends the wrist, and from these judges the specific three-dimensional coordinates in the three-dimensional space that the person intends.

Patent Document 1: Japanese Laid-Open Patent Publication No. H11-134089

[Summary of the Invention]

[Problems to Be Solved by the Invention] The conventional hand-pointing device keeps computing the positions of the feature point and the reference point throughout its operation, so the device's computational load is excessively concentrated. Moreover, measures such as changing the reference point are needed depending on whether the person faces forward or backward.

In addition, during a presentation the presenter does many things at once: explaining, selecting the image to display, pointing at specific positions in the image, watching the audience's reaction, and so on, and is also under mental strain. A device used under such conditions should therefore be operable in a way that matches the presenter's natural habits as closely as possible.

An object of the present invention is to obtain a presentation system that simplifies the three-dimensional coordinate calculation. A further object is to obtain a presentation system that provides an operating method matching the presenter's habits. Other objects of the invention will become apparent from the description below.
[Means for Solving the Problem] A presentation system according to one aspect of the present invention comprises:

  • a display surface extending in the X-axis and Y-axis directions;
  • a prepared-image memory unit that stores prepared images to be shown on the display surface;
  • prepared-image selection means that selects a prepared image stored in the prepared-image memory unit;
  • display control means that shows the selected prepared image on the display surface;
  • two cameras that photograph the three-dimensional coordinate space containing the display surface;
  • a pointer that can move within that three-dimensional coordinate space;
  • a photographed-image memory unit that stores the images the two cameras capture simultaneously;
  • coordinate-value calculation means that, from the simultaneously captured images, calculates the pointer's three-dimensional coordinate values with one fixed point in the space as the reference point;
  • a pointer-coordinate memory unit that stores those three-dimensional coordinate values; and
  • drawing means that refers to the stored pointer coordinates and generates a drawing image corresponding to them.

The display control means shows the drawing image on the display surface together with the selected prepared image. The coordinate-value calculation means calculates the pointer's three-dimensional coordinates x1, y1, z1 at time t1 and stores them in the pointer-coordinate memory unit, then calculates the pointer's coordinates x2, y2, z2 at time t2, a predetermined interval after t1, and stores them as well.
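The two-sample scheme above can be sketched in a few lines of Python (a minimal illustration, not the patent's implementation; `make_stroke` and `width_for_depth` are names introduced here): the sample at t1 supplies the drawing start point and, through its depth z1, the image correction, while the sample at t2 supplies the end point.

```python
def make_stroke(p1, p2, width_for_depth):
    """Build one drawing segment from two timed pointer samples.

    p1 = (x1, y1, z1) at time t1, p2 = (x2, y2, z2) at time t2.
    width_for_depth maps the depth z1 to a line width -- the
    "image correction" applied according to z1.
    """
    x1, y1, z1 = p1
    x2, y2, _z2 = p2
    return {
        "start": (x1, y1),             # drawing start point, from t1
        "end": (x2, y2),               # drawing end point, from t2
        "width": width_for_depth(z1),  # correction chosen from depth at t1
    }

# Thicker lines when the pointer is near the display surface (small z).
stroke = make_stroke((10, 20, 30), (40, 60, 35),
                     lambda z: 100 if z < 50 else 40)
```

Note that z2 plays no role in the segment itself; it is carried forward to become the z1 of the next segment.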

The drawing image generated by the drawing means is corrected according to the value of the Z-axis coordinate z1 at time t1; the drawing start point is determined from the X-axis and Y-axis coordinates x1, y1 at time t1, and the drawing end point from the coordinates x2, y2 at time t2.

In the present invention, the pointer's three-dimensional coordinate values are calculated with one fixed point as the reference point. Further, the drawing image generated by the drawing means is determined according to the value z1 at time t1, that is, according to the pointer's position in the depth direction.

In a preferred embodiment, when the drawing image in the presentation system is a line, the image correction may make the line relatively thick when z1 is relatively close to the display surface and relatively thin when z1 is relatively far from it. With this embodiment, a thick line is drawn when the pointer is near the display surface and a thin line when it is far away. A presenter generally has the habit of bringing the pointer close to the display surface when pointing at a part to be emphasized; that part then receives a thick underline or a thick overmark, so this embodiment makes the presentation system match the presenter's habits all the more. The image correction is not limited to this embodiment, however; for example, the line may instead be made relatively thin when the pointer approaches the display surface.

In another embodiment, the presentation system may further have a depth memory unit that stores a predetermined depth value, and depth judgment means; the depth judgment means compares z1 with the stored depth value, and the drawing means generates the drawing image when z1 satisfies the depth value. Drawing is then performed only when the pointer lies within a fixed range of depth, for example when the presenter moves the pointer while close to the display surface; the system may likewise be built so that operating the pointer near the podium is defined as command input. Since presenters habitually step close to the display surface and trace on it with the pointer while explaining a slide or other prepared image, moving it along a fixed depth, this embodiment too matches their habits.

In another embodiment, the presentation system may further have a movement-distance memory unit that stores a predetermined movement distance value, and first operation judgment means; the first operation judgment means calculates the distance between the pointer's position at time t1 and its position at time t2, compares it with the stored movement distance value, and performs a first predetermined operation when the calculated distance satisfies it. When the pointer moves at a certain speed, the motion is thus judged to be the input of a predetermined operation (for example a command), and a predetermined operation such as changing the displayed image can be carried out. Presenters habitually move the pointer quickly when they want the next slide and slowly while explaining one, so adding command input keyed to changes in the pointer's movement speed again yields a presentation system matching the presenter's habits. Examples of such predetermined operations are paging a prepared image forward or backward, starting or stopping a so-called animation that changes the displayed image, and stopping sound output.

The invention described above, its preferred embodiments, and the elements they contain may be combined and practiced wherever possible.

[Effects of the Invention] According to the present invention, the pointer's three-dimensional coordinates are calculated with one fixed point as the reference point. That fixed point stays fixed while the presentation system continues stable operation, so three-dimensional coordinate calculation is performed only for the pointer. Calculation efficiency therefore improves, yielding a presentation system with higher processing speed and fewer malfunctions. Moreover, because the image is corrected according to the value of the Z-axis coordinate z1 at time t1, that is, according to the pointer's depth position, the system supports operation matching the presenter's habits, such as drawing more prominently on a part the presenter wants to emphasize; ease of use and operability improve as well.

[Embodiments] The presentation system 1 of an embodiment of the present invention is described below with reference to the drawings. The dimensions, materials, shapes, relative positions, and the like of the members described in the embodiments are, unless specifically stated otherwise, merely illustrative and are not intended to limit the scope of the invention.

Fig. 1 is a configuration diagram of the presentation system 1, which consists of a display surface 2 (a screen), a pointer 3, a first camera 4, a second camera 5, and a system controller 8. In this embodiment images are displayed with a projector 6 and the screen, but in the present invention the display surface 2 is not limited to a screen; it may be, for example, the face of a liquid-crystal display. The display surface 2 is a plane extending in the X-axis and Y-axis directions; in the present invention "plane" includes both a geometrically defined plane and a curved surface matching the characteristics of the projector or display, and the X and Y axes express the relation to the Z axis, which represents distance from the display surface.

The first camera 4 and second camera 5 are cameras that photograph the three-dimensional space containing the display surface 2, and may be video cameras or still-image cameras. For smooth drawing, however, images should be obtained at a rate of roughly 60 frames per second or more, so video cameras, for example CCD cameras, are preferable. Each camera preferably takes the entire display surface 2 as its field of view; as shown by the field of view 7 in Fig. 1, the view preferably includes the outline region of the display surface 2.
The number of cameras in a single presentation system is not limited to two; three or more may be used.

The pointer 3 is, for example, a red or green object shaped as a sphere or a cube. To make operating the presentation system 1 by moving the pointer easy, the pointer 3 is preferably attached to the tip of a rod; alternatively, the presenter may wear, say, a red or green glove on one hand and use the glove as the pointer 3. The pointer 3 is the means of operating the presentation system 1: it moves or stops within the fields of view of the first camera 4 and second camera 5, and may also move to or stop outside them. The system is operated by detecting the pointer inside the field of view.

The system controller 8 is, for example, a computer. It has the following memory units: a prepared-image memory unit 11, an object-color lookup table (object color LUT) 12, a three-dimensional-coordinate memory unit 13, a lens-correction-coefficient memory unit 14, an image-space conversion-coefficient memory unit 15, a movement-distance memory unit 16, a drawing-image-type memory unit 17, a drawing-correction memory unit 18, a depth memory unit 19, a photographed-image memory unit 31, a lens-corrected-image memory unit 32, and a pointer-coordinate memory unit 33; the following means: prepared-image selection means 21, display control means 22, coordinate-value calculation means 23, drawing means 24, first operation judgment means 25, command control means 26, object-color judgment means 27, and depth judgment means 30; and a main control unit 28 and a peripheral-device control unit 29. The means can be realized by programs loaded on the computer together with the CPU, RAM, ROM, and so on; the memory units can be realized, for example, by allocating fixed areas of the computer's hard disk. A commercial presentation program may serve as the prepared-image selection means 21. The main control unit 28 controls the overall operation of the system controller 8; the peripheral-device control unit 29 handles operation of the first camera 4 and second camera 5, image processing, control of the projector 6, and the like. The system controller 8 also has a system display device 41, a liquid-crystal display attached to the computer, and an input device 42, a keyboard and a pointing device such as a mouse.

(Initial settings) The initial setting operations of the presentation system 1 are as follows. Prepared images to be shown on the display surface 2 are created and stored in the prepared-image memory unit 11. A prepared image may be text, figures, photographs, illustrations, video, or a mixture of these, or even a blank white image; a typical example is a slide original for a presentation. Drawing image types are stored in the drawing-image-type memory unit 17, for example solid lines, dashed lines, dash-dot lines, circle marks, triangle marks, square marks, and arrows; a single type is also acceptable. Correction items for the drawing image are stored in the drawing-correction memory unit 18, for example line thickness, color density, and hue, together with their relational expressions in z1. For line thickness one might record: a thickness of 100 pixels for 0 ≤ z1 < 50, 70 pixels for 50 ≤ z1 < 100, and 40 pixels for 100 ≤ z1 < 150. For hue one might record: red for 0 ≤ z1 < 100, orange for 100 ≤ z1 < 200, and yellow for 200 ≤ z1 < 300.

(Environment setting) Next, the environment is set: the presentation system 1 is installed at the actual venue, and the data required for the lighting conditions, the pointer's movement range, and so on are entered into the system. The display surface 2 and projector 6 are placed, the first camera 4 and second camera 5 are arranged, and the camera field of view 7 is adjusted; the field of view is then held fixed during the stable operation that follows.

Next, the presentation system 1 is taught the recognition target color. Fig. 3 is a flowchart of the recognition-color teaching process, and Fig. 2 illustrates the contents of the object color LUT 12. In S11 the pointer 3 is positioned inside the camera field of view 7, and the first camera 4 and second camera 5 photograph images containing the pointer 3. In S12 the images from the two cameras are stored in the photographed-image memory unit 31. In S13 the first camera's image is recalled from the photographed-image memory unit 31 and shown on the system display device 41. In S14 the region of the pointer 3 on the system display is designated with the input device 42, thereby teaching the first recognition target color C1, which is stored in the object color LUT 12 in S15. In S16 a non-pointer region of the same displayed image is designated, teaching the first non-recognition color NC1-1, which is stored in the object color LUT 12 in S17. S16 and S17 are repeated to teach the remaining non-recognition colors.
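The relational expressions stored in the drawing-correction memory unit 18 amount to a banded lookup from depth to correction. A minimal sketch in Python, using exactly the example bands given above (function names are illustrative):

```python
def line_width(z1):
    """Line width in pixels as a function of pointer depth z1, per the
    example stored in the drawing-correction memory unit 18: the nearer
    the pointer is to the display surface, the thicker the line."""
    if 0 <= z1 < 50:
        return 100
    if 50 <= z1 < 100:
        return 70
    if 100 <= z1 < 150:
        return 40
    return None  # outside the recorded range: no correction defined


def line_hue(z1):
    """Hue correction for the same depth, per the example in the text."""
    if 0 <= z1 < 100:
        return "red"
    if 100 <= z1 < 200:
        return "orange"
    if 200 <= z1 < 300:
        return "yellow"
    return None
```

In practice these bands would be data read from the memory unit rather than hard-coded branches; the branches simply make the stored relation explicit.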
If, at S16, the same image shown on the system display device 41 contains no non-recognition color that still needs to be taught, the procedure is repeated from the photographing step: an image containing the color in question is shown on the display surface 2, or an object of that color is placed inside the camera field of view 7, the first camera photographs it, and S12, S13, S16, and S17 are carried out again. The first camera's images are thus taught the first recognition target color C1 and the non-recognition colors NC1-1, NC1-2, ..., NC1-n, all stored in the object color LUT 12.

The procedure then moves to S19 and repeats the operations S13 to S18 for the second camera's images. The second recognition target color C2 designated in S14 is substantially the same color as the first recognition target color C1 described above, but each is color data reflecting the color characteristics of its own camera. When the teaching of recognition colors is finished, the object color LUT 12 holds, as shown in Fig. 2, the recognition color C1 and non-recognition colors NC1-1, NC1-2, ..., NC1-n for the first camera's images, and the recognition color C2 and non-recognition colors NC2-1, NC2-2, ..., NC2-n for the second camera's images. In a presentation system containing three or more cameras, recognition and non-recognition colors are taught for the third camera's images and so on as well.

Next, referring to Fig. 4, the environment setting related to image-space conversion is described. In S21 an image on which correction reference points are drawn, for example a grid or checkerboard image, is shown on the display surface 2 (an equivalent printed sheet may be pasted on the display surface instead), and the first camera 4 and second camera 5 photograph the display. In S22 the two images are stored in the photographed-image memory unit 31. In S23 the first camera's image is recalled and shown on the system display device 41, and the image coordinate values of a correction reference point in the displayed image are detected. In S24 the reference point's three-dimensional (world) coordinates are associated with the image coordinates detected in S23, and in S25 the three-dimensional coordinate values and image coordinate values are stored in temporary memory. The operations S23 to S25 are repeated for the remaining correction reference points. In S26 the group of three-dimensional coordinate values and the group of image coordinate values in one-to-one correspondence are recalled from temporary memory, and from the two groups the lens correction coefficient and the image-space conversion coefficient of the first camera's images are calculated; the lens correction coefficient is a scalar coefficient and the image-space conversion coefficient is a matrix. In S27 the first camera's lens correction coefficient is stored in the lens-correction-coefficient memory unit 14, and in S28 the first camera's image-space conversion coefficient is stored in the image-space conversion-coefficient memory unit 15. In S29 the same operations as S23 to S28 are performed for the second camera's images, storing that camera's lens correction coefficient in the lens-correction-coefficient memory unit 14 and its image-space conversion coefficient in the image-space conversion-coefficient memory unit 15.

(Stable operation 1) Stable operation is described with reference to its flowchart in Fig. 5 and the flowchart of the pointer 3's three-dimensional coordinate determination in Fig. 6. In S40 stable operation starts; during stable operation the pointer 3 moves or stops according to the presenter's actions. In S41 the prepared-image selection means 21 selects one image from the prepared-image memory unit 11, and in S42 the display control means 22 shows it on the display surface 2. In S430 the first camera 4 and second camera 5 photograph their fields of view simultaneously, and the pointer 3's three-dimensional coordinates are calculated from the pointer's position in the photographed images (the details of S430 are described separately below). By this calculation, the pointer 3's coordinates x1, y1, z1 at time t1 and x2, y2, z2 at time t2 are stored in the pointer-coordinate memory unit 33.

In S51 the drawing means 24 selects one drawing image type from the drawing-image-type memory unit 17; the program may be written so that the same type is always chosen within one stable operation, or so that the type is chosen according to the values of x1, y1, z1. In S52 the drawing means 24 refers to the pointer-coordinate memory unit 33 and corrects the drawing image according to the value z1; when the drawing image is a line, for example, the line is made relatively thick when z1 is relatively close to the display surface and relatively thin when z1 is relatively far from it. In S53 the drawing means 24 determines the start point of the drawing image from x1, y1 in the pointer-coordinate memory unit 33, and in S54 the end point from x2, y2. In S55 the drawing means recalls from the prepared-image memory unit 11 the same prepared image selected in S41 and currently shown on the display surface 2, and superimposes the corrected drawing image on it; the drawing may place the drawing image fully in front so that the prepared image beneath it is hidden, or may draw it with, say, 50% transparency. The result is passed to the display control means 22, which in S56 shows the superimposed image on the display surface 2. When S56 ends, the values x2, y2, z2 in the pointer-coordinate memory unit 33 are redefined as the new x1, y1, z1.

(Stable operation 2: three-dimensional coordinate determination) The three-dimensional coordinate determination of the pointer in S430 is described with reference to Figs. 6 and 7. In S61 the first camera 4 and second camera 5 photograph their fields of view simultaneously.
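The per-frame drawing loop S430 to S56 can be sketched as follows (an illustrative reduction, not the patent's code; `stable_operation` and `width_for_depth` are names introduced here). The key detail is the final step of S56: each frame's sample becomes the start sample of the next segment, so consecutive segments chain into a continuous trace.

```python
def stable_operation(samples, width_for_depth):
    """Sketch of S430-S56: pair each new pointer sample with the
    previous one, draw a depth-corrected segment between them, and let
    the newer sample become the next starting sample.

    `samples` is an iterable of (x, y, z) pointer coordinates, one per
    frame; returns the list of drawn segments as
    ((x1, y1), (x2, y2), width) tuples.
    """
    segments = []
    prev = None  # (x1, y1, z1) at time t1
    for cur in samples:  # (x2, y2, z2) at time t2
        if prev is not None:
            x1, y1, z1 = prev
            x2, y2, _ = cur
            # S52-S54: correct by z1, start at (x1, y1), end at (x2, y2)
            segments.append(((x1, y1), (x2, y2), width_for_depth(z1)))
        prev = cur  # S56: x2, y2, z2 become the new x1, y1, z1
    return segments
```

Rendering and superimposition on the prepared image (S55 to S56) are omitted; the sketch only shows how the timed samples turn into segments.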
In S62 the images from the first camera 4 and second camera 5 are stored in the photographed-image memory unit 31. In S63 the first camera's image is recalled from the photographed-image memory unit 31. In S64 that image is lens-corrected using the first camera's lens correction coefficient stored in the lens-correction-coefficient memory unit, and in S65 the first camera's lens-corrected image is stored in the lens-corrected-image memory unit 32. In S66 the same processing as S61 to S65 is applied to the second camera's image, using the second camera's lens correction coefficient; the second camera's lens-corrected image is likewise stored in the lens-corrected-image memory unit 32.

In S67 the first point of the three-dimensional coordinate space stored in the three-dimensional-coordinate memory unit 13 is extracted; the coordinate values of the extracted point are called the "extracted three-dimensional coordinate values". The three-dimensional coordinate space contains the display surface 2 (extending along the X and Y axes) and extends forward of the display surface (toward the presenter and audience) and behind it along the Z axis; it is also the space photographed by the camera field of view 7. The three-dimensional-coordinate memory unit 13 stores the coordinate points of this space partitioned into small cubes or cuboids. The sign of the Z axis is defined by a right-handed coordinate system, and need only be corrected as required.

In S68 the coordinate-value calculation means 23 converts the extracted three-dimensional coordinate values into image coordinates of the first camera's image, using the first camera's image-space conversion coefficient stored in the image-space conversion-coefficient memory unit 15; the converted coordinates are called the "extracted image coordinate values". In S69 the object-color judgment means recalls the first camera's lens-corrected image and, referring to the recognition color C1 and non-recognition colors NC1-1, NC1-2, ..., NC1-n for the first camera's images in the object color LUT 12, judges whether the pixel at the extracted image coordinates matches the recognition color. If it matches, the procedure moves to S70; if not, it returns to S67, extracts the second point of the three-dimensional coordinate space stored in the three-dimensional-coordinate memory unit 13, and performs S68 and S69 again.

In S70 the same processing as S68 and S69 is applied to the second camera's lens-corrected image; this time the image-space conversion coefficient used for the coordinate conversion is that of the second camera's images, and the color data referred to in the object color LUT 12 are the second camera's recognition color C2 and non-recognition colors NC2-1, NC2-2, ..., NC2-n. In S71 the object-color judgment means 27 judges whether the recognition color was detected in both the first and second cameras' images: when the pixel at the extracted image coordinate point in the second camera's image is also judged to be the recognition color, the extracted three-dimensional coordinate values are taken as a detected coordinate point of the pointer 3 and stored in the pointer-coordinate memory unit 33 as a detected pointer coordinate point. If the recognition color is not detected in the second camera's image, the procedure returns to S67, extracts the next point of the space, and repeats the processing of S68 and S69.

In S72 the processing of S67 to S71 is repeated so that recognition-color detection is performed at every point of the three-dimensional coordinate space. In S73, if the recognition color was detected by the above processing, the procedure moves to S74; if not, to S76. In S74 the coordinate-value calculation means reduces the group of detected pointer coordinate points stored in the pointer-coordinate memory unit 33 to a single point; for this determination, for example, the centroid of the group may be taken, or the point of the group closest to the display surface may be selected. In S75 the determined detected pointer coordinate point is stored as x1, y1, z1 at time t1. In S76 the procedure returns to S61 and performs pointer detection on the images of the next frame (time t2), so that the pointer's coordinate point x2, y2, z2 at time t2 is stored in the pointer-coordinate memory unit 33.

In a presentation system with three or more cameras, the processing of S68 and S69 applied to the second camera's lens-corrected image in S70 is followed by the same processing applied to the third camera's lens-corrected image, and so on. In a presentation system using a very large display surface 2, the environment setting related to image-space conversion is performed as described above, and the three-dimensional coordinate calculation during stable operation need only restrict the range of extracted three-dimensional coordinate points to the range in which the pointer moves; this yields effects such as higher calculation speed.

(Second embodiment) A presentation system 1 of a preferred embodiment with an added depth-position judgment for the pointer is now described (the second embodiment). The added depth-position judgment decides, from the pointer's depth position in the three-dimensional space, whether drawing processing is to be performed. Fig. 8 is a flowchart of the second embodiment, which, in the stable operation described above, inserts the processing of S81 to S84 between S430 (determination of the pointer's three-dimensional coordinates) and S51 (start of drawing).
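The color-based search of S67 to S75 described above can be sketched as follows (an illustrative reduction: `project_1`/`project_2` stand in for the image-space conversion of each camera, and `is_color_1`/`is_color_2` for the LUT test against each camera's lens-corrected image; all four are assumed callables, not parts of the patent):

```python
def detect_pointer(points, project_1, project_2, is_color_1, is_color_2):
    """Sketch of S67-S75: scan the stored grid of 3D coordinate points,
    project each into both camera images, and keep the points whose
    pixels match the recognition color in *both* images; then reduce
    the kept group to one detection point (here its centroid, one of
    the two options the text mentions).

    Returns the detected (x, y, z) point, or None if nothing matched.
    """
    hits = []
    for p in points:                      # S67/S72: every grid point
        if not is_color_1(project_1(p)):  # S68-S69: camera 1 test
            continue
        if not is_color_2(project_2(p)):  # S70: camera 2 test
            continue
        hits.append(p)                    # S71: detected coordinate
    if not hits:                          # S73: no detection this frame
        return None
    n = len(hits)                         # S74: reduce group to a point
    return tuple(sum(c) / n for c in zip(*hits))
```

Requiring a match in both cameras is what turns two 2D color tests into a 3D localization: a grid point passes only if it projects onto the pointer's color in each view.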
A threshold value for the pointer's Z axis is stored in the depth memory unit 19 beforehand; alternatively, the two end values of a Z-axis range are stored. In the threshold case it is recorded that the procedure moves to drawing processing when z1 is larger (or smaller) than the threshold; in the two-end case, that it moves to drawing processing when z1 lies between (or outside) the two end values. The values and conditions stored in the depth memory unit 19 are collectively called the "threshold conditions".

After the pointer coordinates are detected in S430, the depth judgment means 30 compares, in S81, the value stored in the depth memory unit 19 with the value z1 stored in the pointer-coordinate memory unit 33. In S82, if z1 satisfies the threshold condition, the procedure moves to S83; if it does not, to S84. In S83 the procedure moves to S51 and drawing is performed. In S84, for example, command recognition processing is started; when command recognition is not used, the procedure returns to S51. Command recognition is processing that identifies a specific command from the combination of pointer coordinates (x1, y1, z1), (x2, y2, z2), ..., (xn, yn, zn) at a plurality of times t1, t2, ..., tn. If command recognition is entered and no command is recognized, the procedure returns to S430. The command processing above is one example of predetermined processing; when the threshold condition is not satisfied, the system may also simply return to S430 without performing any predetermined processing.

(Third embodiment) A presentation system 1 of another preferred embodiment with an added movement-speed judgment for the pointer is now described (the third embodiment). The added movement-speed judgment decides, from the pointer's movement speed in the three-dimensional space, whether drawing processing or a predetermined operation is to be performed. Fig. 9 is a flowchart of the third embodiment, which, in the stable operation described above, inserts the processing of S91 to S96 between S430 (determination of the pointer's three-dimensional coordinates) and S51 (start of drawing).

A movement distance value for the pointer is stored in the movement-distance memory unit 16 beforehand; in this embodiment it is a single value (a threshold). After the pointer coordinates are detected in S430, the first operation judgment means 25 calculates, in S91, the distance between the pointer's position at time t1 and its position at time t2 stored in the pointer-coordinate memory unit 33 (the calculated movement distance). In S92 the first operation judgment means 25 compares the movement distance value stored in the movement-distance memory unit 16 with the calculated movement distance. If the calculated movement distance is larger than the stored movement distance value (the threshold), the procedure moves to S93; if smaller, to S96. In S93 the first operation judgment means sends a command signal to the command control means 26. In S94 the command control means 26, having received the command signal, applies predetermined processing to the prepared image currently displayed or to the prepared-image selection means. In S95 the display control means 22 shows the processed prepared image on the display surface 2. In S96 the procedure moves to S51 and drawing is performed.

The values stored in the movement-distance memory unit 16 and the judgments made by the first operation judgment means 25 may, for example, be as follows: (1) a single threshold is stored and compared in magnitude with the calculated movement distance; (2) the two end values of a single range are stored and it is judged whether the calculated distance lies inside or outside the range; (3) the two pairs of end values of two ranges are stored and it is judged in which of the ranges the calculated distance lies. The command control means 26 that receives the command signal may page the prepared image forward or backward, start or stop an animation, or stop sound output.

[Brief Description of the Drawings]
Fig. 1 is a configuration diagram of the presentation system 1.
Fig. 2 is an explanatory diagram of the contents of the object color LUT 12.
Fig. 3 is a flowchart of the recognition-color teaching process.
Fig. 4 is a flowchart of the environment setting related to image-space conversion.
Fig. 5 is a flowchart of stable operation.
Fig. 6 is a flowchart of the three-dimensional coordinate determination of the pointer 3.
Fig. 7 is a flowchart of the three-dimensional coordinate determination of the pointer 3 (continued).
Fig. 8 is a flowchart of the added depth-position judgment for the pointer 3.
Fig. 9 is a flowchart of the added movement-speed judgment for the pointer 3.

[Description of Main Reference Symbols]
1 presentation system
2 display surface
3 pointer
4 first camera
5 second camera
7 camera field of view
8 system controller
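The two judgments added by the second and third embodiments can be sketched together (a minimal illustration under assumptions: single-threshold variants of both conditions, and a `dispatch` function introduced here to show the combined decision, not something the patent defines):

```python
import math

def dispatch(p1, p2, depth_limit, move_limit):
    """Sketch of S81-S84 plus S91-S96: decide from two timed samples
    whether to draw or to treat the motion as command input.

    p1 = (x1, y1, z1) at t1, p2 = (x2, y2, z2) at t2.
    Returns "draw" or "command".
    """
    x1, y1, z1 = p1
    x2, y2, z2 = p2
    # Second embodiment (S81-S82): draw only when z1 satisfies the
    # depth threshold condition (here: pointer close to the display).
    if z1 > depth_limit:
        return "command"     # S84: e.g. start command recognition
    # Third embodiment (S91-S92): a large jump between t1 and t2 is
    # treated as command input rather than a drawing gesture.
    moved = math.dist((x1, y1, z1), (x2, y2, z2))
    if moved > move_limit:
        return "command"     # S93-S95: e.g. page the slide forward
    return "draw"            # S96 -> S51: drawing processing
```

The two-end-value and two-range variants described in the text would replace each single comparison with an interval test, but the dispatch structure stays the same.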
The starting point of the drawing corresponds to the X of the X-axis and the Y-axis coordinates 値x2 and y2 at time t2, and the end point of the drawing is determined. In the present invention, the three-dimensional coordinate 指示 of the pointer is calculated with one fixed point as a reference point. Further, the drawing image generated by the drawing means determines the image type corresponding to the position of the Z-axis coordinate 値zl at the time t1, that is, the position in the depth 〇 direction of the pointer. In a preferred embodiment of the present invention, the drawing image in the presentation system may be a line 'in the case where the z1 is relatively close to the display surface, and the image correction is performed on the relatively thick line, and in the zl In the case where the 値 is relatively far from the display surface, the image correction for making a relatively thin line is performed. According to the preferred embodiment of the present invention, when the pointer approaches the display surface, the relatively thick line is drawn to mean that the line is drawn away from the display surface. In general, the publisher has the habit of bringing the pointer closer to the display surface in the case where the portion to be emphasized is indicated. Therefore, in the part that you want to emphasize, draw a thick bottom line or a thick mark. Thus, the preferred embodiment of the present invention is a presentation system that more closely matches the preferences of the publisher. However, in the present invention, the image correction is not limited to the preferred embodiment, and may be corrected to a relatively thin line, for example, when the pointer approaches the display surface. 
In another embodiment of the present invention, the presentation system may further have a depth, a memory, and a depth determination means for memorizing the predetermined depth ;; the depth determination means compares the z and the depth 该 of the z1, in the zl When the depth 满足 is satisfied, the drawing means generates the drawing image. According to the preferred embodiment, the drawing is performed when the pointer is at a depth of a fixed range. For example, when the presenter moves closer to the display surface and moves the pointer, the drawing is performed. The operation of the pointer near the podium is defined as an operation for inputting a command, and the presentation system can also be constructed as such. In general, the presenter has a habit of moving toward the fixed depth direction on the display surface or the like in the display surface when the image is created such as a slide. Therefore, the preferred embodiment is a presentation system that more closely matches the publisher's habits. In another embodiment of the present invention, the presentation system may further have a moving distance 値 memory unit and a first operation determining means for storing the predetermined moving distance ;; the first operation determining means may calculate the time at the time ti The distance between the position of the pointer and the position of the pointer at time t2 is compared with the moving distance ,, and when the calculated distance satisfies the moving distance ,, the first predetermined operation is performed. According to the preferred embodiment of the present invention, when the pointer moves at a specific speed, 200947261, it is determined that a predetermined operation (e.g., an input command) is input, and a predetermined operation such as changing the display image can be performed. 
In general, the "publisher has the habit of quickly moving the pointer when asked to feed a slide or the like" and slowly moving the pointer when explaining the slide. Thus, in the preferred embodiment, by adding an input of a predetermined operation by changing the moving speed of the pointer, it becomes a presentation system more in line with the taste of the publisher. A predetermined operation is exemplified, which is a page inversion of a created image, a stop of a so-called animation in which a change in image is displayed, a stop of sound generation, and the like. The invention described above, the preferred embodiments of the invention, and the constituent elements contained therein may be combined and implemented as much as possible. [Effect of the Invention] According to the present invention, the three-dimensional coordinate 指示 of the pointer is calculated with one fixed point as a reference point. This fixed point is fixed between the continuous stabilization function of this presentation system. Therefore, only the three-dimensional coordinate 値 calculation processing is performed on the pointer. Therefore, the calculation efficiency is improved. Further, it is a briefing system that increases the processing speed, prevents malfunctions, and the like. Ο 因为 影像 影像 影像 影像 影像 l l l l l l l l l l l tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl tl A habitual operation briefing system. Further, the usability and operability are improved. [Embodiment] Hereinafter, a briefing system 1 according to an embodiment of the present invention will be further described with reference to the drawings. The dimensions, materials, shapes, relative positions, and the like of the members or portions described in the embodiments of the present invention are intended to be illustrative only and not intended to limit the scope of the present invention. 200947261 Fig. 1 is a block diagram of the briefing system 1. 
The presentation system 1 is composed of a display surface 2 belonging to a screen, a pointer 3, a first camera 4, a second camera 5, and a system controller 8. The image display of this embodiment is performed using the projector 6 and the screen. However, in the present invention, the display surface 2 is not limited to a screen, and may be, for example, a display surface of a liquid crystal display or the like. The display surface 2 is a plane that expands in the X-axis direction and the γ-axis direction. In the present invention, the plane includes a geometrically defined plane and a curved surface that is adapted to the characteristics of the projection machine or display. Further, in the present invention, the X-axis and the Y-axis are the relative relationship between the Z-axis indicating and expressing the distance from the display surface. The first camera 4 and the second camera 5 are cameras for capturing a three-dimensional space including the display surface 2, and may be a motion picture camera or a still image camera. However, in order to perform a smooth drawing or the like, since the image is preferably obtained at a speed of, for example, about 60 frames per second or more, the first camera 4 and the second camera 5 are preferably moving image cameras. For example, a CCD camera can be used for the first camera 4 and the second camera 5. Θ The first camera 4 and the second camera 5 preferably have the entire surface of the display surface 2 as a field of view. As shown by the field of view 7 in Fig. 1, the field of view including the contour area of the display surface 2 is preferable. The camera included in the single presentation system is not limited to two, and may be three or more. The indicator 3 is, for example, a red or green object. The shape of the object is, for example, a sphere or a cube. 
In order to facilitate the operation of the presentation system 1 by the movement of the pointer 3, it is preferable to attach the indicator 3 to the front end of the stick. Further, the presenter may wear a glove such as red or green in one hand, and the -10-200947261 glove as the indicator 3. The indicator 3 is a means of operating the presentation system 1. The pointer 3 moves or stops in the field of view of the first camera 4 and the second camera 5. Also, move to or stop beyond the field of view. The indicator in the field of view is detected and the presentation system 1 is operated. The system controller 8 is, for example, a computer. The system controller 8 includes a created image recording unit 11, a target color LUT12 belonging to the target color lookup table, a three-dimensional coordinate storage unit 13, a lens correction coefficient storage unit 14, an image-space conversion coefficient storage unit 15, and a moving distance 値 memory. The unit 16 , the drawing image type storage unit 17 , the drawing correction storage unit 18 , the depth 値 memory unit 19 , the photographic image storage unit 31 , the lens corrected image storage unit 32 , and the memory unit of the indicator suffix memory unit 33 The means for creating the image selecting means 21, the display control means 22, the coordinate calculating means 23, the drawing means 24, the first operation determining means 25, the command control means 26, the target color determining means 27, the depth determining means 30, and the main control The unit 28 and the peripheral device control unit 29. The above means can be obtained by a program loaded by a computer, a CPU, a RAM, a ROM, and the like. Each of the above-described memory sections can be obtained, for example, by assigning a fixed portion of the hard memory of the computer. Further, the created image selecting means 21 can use, for example, a program for a briefing system on the market. 
The main control unit 28 performs overall operation control of the system controller 8, and the like. The peripheral device control unit 29 performs operations of the first camera 4, the second camera 5, image processing, and control of the projector 6. Further, the system controller 8 has a system display device 41 and an input device 42. The system display device 41 is a liquid crystal display attached to a computer, and the input device 42 is a keyboard and an indicator such as a mouse. (Start setting) -11 - .200947261 Describes the operation of the initial setting of the presentation system 1, and so on. The created image displayed on the display surface 2 is created and stored in the created image memory unit 11. The created images are text, graphics, photos, illustrations, motion pictures, and mixtures of these. Also, the created image may be an image of white paper. A specific example of the created image is a slide original for presentation. The type of the drawing image is memorized in the drawing image type storage unit 17. The types of drawing images are, for example, solid lines, broken lines, a little chain line, a circular mark, a triangular mark, a quadrangular mark, an arrow, and the like. The type of graphic image that is memorized is also —. The correction of the drawing image is memorized in the drawing correction storage unit 18. The correction items are, for example, the thickness of the line, the shade of the color, the hue, and the like. Moreover, the relationship between zj値 and zl値 is described. For example, the thickness of the correction line is described as the thickness of the line 100 pixels in the case of 0Szl < 50, the thickness of the line 70 pixels in the case of 50Szl < 100, and the thickness of the line in the case of 100Szl < 150. Pixel. Further, for example, if the color tone is corrected, it is described as red in the case of 〇 $ zl < 100, orange in the case of 1 〇〇 $ Zl < 200, and yellow in the case of O 2 00 Szl < 300. 
(Environment setting) Next, set the environment. The environment setting is to set the presentation system 1 to the actual use place, and to input the desired data to the presentation system 1 in accordance with the illumination state, the range of the pointer movement, and the like. The presentation system 1 is set in the presentation place, and the display surface 2 and the projector 6 are arranged. The first camera 4 and the second camera 5 are arranged, and the camera field of view 7 is adjusted. The camera field of view is fixed during steady operation in the subsequent environment settings. Next, the presentation system 1 is taught to recognize the object color. Fig. 3 is a diagram -12- 200947261 A flowchart of the color teaching process of the object, and Fig. 2 is a description of the content of the object color LUTl 2. The pointer 3 is positioned in the camera field of view 7 at S11', and the image including the indicator 3 is captured by the first camera 4 and the second camera 5. In si2, the captured image of the first camera 4 and the captured image of the second camera 5 are stored in the captured image storage unit 31. The captured image of the first camera 4 is called from the photographic image storage unit 31 at S13' and displayed on the system display device 41. At S14, the pointer 3 area on the system display device is designated by the input device 42 φ. The first recognition target color C1 is taught by this operation. At S15, the first recognition target color C1 is stored in the target color LUT12. The non-indicator area is designated in the same image displayed on the system display device 41 at S16'. The first non-identification target color NC1 -1 is taught by this operation. The first non-recognition target color N C 1 - 1 is stored in the object color LUT 12 at S 1 7 '. S16 and S17 are repeated to teach other non-identifying object colors. 
Here, if the image displayed on the system display device 41 at S16 contains no further non-recognition target color to be taught, the process is repeated from the photographing operation: an image including the particular non-recognition target color is displayed on the display surface 2, or an object of that color is positioned in the camera field of view 7, the scene is captured by the first camera, and then S12, S13, S16, and S17 are performed. As described above, the first recognition target color C1 and the first non-recognition target colors NC1-1, NC1-2, …, NC1-n are taught for the first camera image and stored in the target color LUT12. Next, the process proceeds to S19, and the operations of S13 to S18 are repeated for the second camera image. The second recognition target color C2 designated at S14 is substantially the same color as the first recognition target color C1 described above, but each is a color value reflecting the color characteristics of its own camera. When the teaching of the recognition target colors is completed, the target color LUT12 holds, as shown in Fig. 2, the recognition target color C1 and the non-recognition target colors NC1-1, NC1-2, …, NC1-n for the first camera image, and the recognition target color C2 and the non-recognition target colors NC2-1, NC2-2, …, NC2-n for the second camera image. In a presentation system including three or more cameras, recognition target colors and non-recognition target colors are likewise taught for the third camera image and so on. Next, referring to Fig. 4, the environment setting relating to image-space conversion will be described. At S21, an image of the reference points for correction is displayed on the display surface 2. Instead of displaying the image, an equivalent printed sheet may be attached to the display surface.
The image of the reference points for correction is, for example, a grid image or a checkered image. The display is then captured by the first camera 4 and the second camera 5. At S22, the captured image of the first camera 4 and the captured image of the second camera 5 are stored in the captured image storage unit 31. At S23, the captured image of the first camera 4 is called from the captured image storage unit 31 and displayed on the system display device 41, and the image coordinates of a correction reference point on the displayed image are detected. At S24, the three-dimensional coordinates (spatial coordinates) of that correction reference point are associated with the image coordinates detected at S23. At S25, the three-dimensional coordinates and image coordinates are recorded in temporary memory. The operations of S23 to S25 are repeated for the other correction reference points. At S26, the group of three-dimensional coordinates and the one-to-one corresponding group of image coordinates are called from temporary memory, and from these two groups the lens correction coefficient of the first camera image and the image-space conversion coefficient of the first camera image are calculated. The lens correction coefficient is a scalar value, and the image-space conversion coefficient is a matrix. At S27, the lens correction coefficient of the first camera image is stored in the lens correction coefficient storage unit 14. At S28, the image-space conversion coefficient of the first camera image is stored in the image-space conversion coefficient storage unit 15. At S29, the second camera image is processed in the same manner as S23 to S28. By this operation, the lens correction coefficient of the second camera image is stored in the lens correction coefficient storage unit 14, and the image-space conversion coefficient of the second camera image is stored in the image-space conversion coefficient storage unit 15.
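As one way to picture the image-space conversion coefficient computed at S26: if the correction reference points lie on the display surface, the matrix relating surface coordinates to image coordinates is a 3×3 homography, which can be fitted from the point correspondences by the standard direct linear transform (DLT). This is a sketch under that planarity assumption, not the patent's exact computation (which also derives a separate lens correction coefficient).

```python
import numpy as np

def fit_homography(plane_pts, image_pts):
    """Fit a 3x3 homography H mapping display-surface (x, y) points to
    image (u, v) points, from >= 4 non-degenerate correspondences."""
    A = []
    for (X, Y), (u, v) in zip(plane_pts, image_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The homography is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to one point, with perspective division."""
    x = H @ np.array([pt[0], pt[1], 1.0])
    return x[0] / x[2], x[1] / x[2]
```

Given the stored matrix, S68's conversion of an extracted surface coordinate to an "extracted image coordinate" is then a single `project` call.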
(Steady operation - 1) Next, steady operation will be described with reference to the flowchart of steady operation shown in Fig. 5 and the flowchart of the three-dimensional coordinate determination processing of the pointer 3 shown in Figs. 6 and 7. At S40, steady operation is started. During steady operation, the pointer 3 moves or stops according to the actions of the presenter. At S41, the created image selecting means 21 selects one image from the created image memory unit 11. At S42, the display control means 22 displays the image on the display surface 2. At S430, the first camera 4 and the second camera 5 simultaneously capture their respective fields of view, and the three-dimensional coordinates of the pointer 3 are calculated from the position of the pointer 3 in the captured images. The detailed processing of S430 is described separately below. By this calculation, the three-dimensional coordinates x1, y1, z1 of the pointer 3 at time t1 and the coordinates x2, y2, z2 at time t2 are stored in the pointer coordinate memory unit 33. At S51, the drawing means 24 selects one type of drawing image from the drawing image type storage unit 17. For example, for steady operation the program may be set in advance to always select the same drawing image type, or to select the drawing image type corresponding to x1, y1, and z1. At S52, the drawing means 24 refers to the pointer coordinate memory unit 33 and corrects the drawing image in accordance with the value of z1. In the case where the drawing image is a line, for example, when the value of z1 indicates a position relatively close to the display surface, an image correction producing a relatively thick line is performed, and when the value of z1 indicates a position relatively far from the display surface, an image correction producing a relatively thin line is performed.
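One pass of steady operation reduces to: determine the new pointer coordinates (S430), draw a corrected segment from the previous coordinates (S51 onward), display the result, and carry time t2's coordinates forward as the next frame's t1. A minimal sketch with the camera, drawing, and display stages stubbed in as callables (all names here are hypothetical, not the patent's means):

```python
def steady_operation_step(state, get_coords, draw, display):
    """One pass of the steady-operation loop.

    state:      (x1, y1, z1) from the previous frame
    get_coords: S430 - returns the pointer's new (x2, y2, z2)
    draw:       builds the corrected drawing segment from old to new position
    display:    shows the superimposed image on the display surface
    """
    x2, y2, z2 = get_coords()            # S430: triangulate pointer position
    stroke = draw(state, (x2, y2, z2))   # S51-S55: correct + draw the segment
    display(stroke)                      # S56: superimpose on the display
    return (x2, y2, z2)                  # new (x1, y1, z1) for the next frame
```

Calling this in a loop with real camera, drawing, and display implementations reproduces the S430 → S51 → … → S56 cycle described above.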
At S53, the drawing means 24 refers to the values x1 and y1 in the pointer coordinate memory unit 33 and determines the start point of the drawing image. At S54, the drawing means 24 refers to the values x2 and y2 in the pointer coordinate memory unit 33 and determines the end point of the drawing image. At S55, the drawing means calls from the created image memory unit 11 the created image that was selected at S41 and is currently displayed on the display surface 2, and performs the process of superimposing the drawing image corrected at S51 to S54 onto that created image. The drawing process may display the drawing image at the forefront without hiding the image beneath it; for example, the drawing image may be rendered with 50% transparency. The superimposed image is then transmitted to the display control means 22. At S56, the display control means 22 displays the superimposed image on the display surface. When S56 ends, the values x2, y2, and z2 in the pointer coordinate memory unit 33 are redefined as the new x1, y1, z1. (Steady operation - 2: three-dimensional coordinate determination processing) The three-dimensional coordinate determination processing at S430 will be described with reference to Figs. 6 and 7. At S61, the first camera 4 and the second camera 5 capture images simultaneously. At S62, the captured images of the first camera 4 and the second camera 5 are stored in the captured image storage unit 31. At S63, the first camera image is called from the captured image storage unit 31. At S64, the first camera image is corrected using the lens correction coefficient of that image stored in the lens correction coefficient storage unit 14. At S65, the lens-corrected image of the first camera is stored in the lens-corrected image storage unit 32. At S66, the second camera image is subjected to the processing of S61 to S65.
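The superimposition at S55, which leaves the created image visible beneath the drawing image, can be done by ordinary alpha blending. A sketch assuming 8-bit RGB pixels, with alpha = 0.5 corresponding to the 50% transparency mentioned in the text:

```python
def blend_pixel(base, overlay, alpha=0.5):
    """Blend one RGB pixel of the drawing image over the created image.

    alpha=0.5 matches the 50%-transparent drawing mentioned in the text;
    alpha=1.0 would hide the created image entirely under the drawing.
    """
    return tuple(round(alpha * o + (1 - alpha) * b)
                 for b, o in zip(base, overlay))

def blend_image(base_img, overlay_img, alpha=0.5):
    # base_img, overlay_img: same-shaped nested lists of RGB tuples
    return [[blend_pixel(b, o, alpha) for b, o in zip(brow, orow)]
            for brow, orow in zip(base_img, overlay_img)]
```

The blended result is what the display control means 22 receives and shows on the display surface at S56.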
The lens correction coefficient used in this processing is the lens correction coefficient of the second camera image; the resulting lens-corrected image of the second camera is likewise stored in the lens-corrected image storage unit 32. At S67, the first point of the three-dimensional coordinate space stored in the three-dimensional coordinate storage unit 13 is extracted. The coordinates of the extracted point are referred to as "extracted three-dimensional coordinates". The three-dimensional coordinate space is the space that contains the display surface 2 (which extends in the X-axis and Y-axis directions) and extends from the display surface toward the front (the Z-axis direction, in which the presenter and the listeners are located). At the same time, the three-dimensional coordinate space is the space captured in the camera field of view 7. The three-dimensional coordinate memory unit 13 stores the coordinate points obtained by dividing this three-dimensional space into small cubes or small rectangular parallelepipeds. Although the positive and negative directions of the Z axis are defined here by the right-handed coordinate system, they may be redefined as needed. At S68, the coordinate calculation means 23 converts the extracted three-dimensional coordinates into image coordinates of the first camera image, using the image-space conversion coefficient of the first camera image stored in the image-space conversion coefficient storage unit 15. The converted image coordinates are referred to as "extracted image coordinates".
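The candidate points scanned starting at S67 — the three-dimensional coordinate space divided into small cubes — can be pictured as a simple lattice. A sketch with hypothetical extents and cube side length:

```python
from itertools import product

def grid_points(x_max, y_max, z_max, step):
    """Coordinate points of the three-dimensional space divided into cubes
    of side `step` (the points the S67 loop extracts one by one).
    Extents and step are illustrative values, not from the patent."""
    xs = range(0, x_max + 1, step)
    ys = range(0, y_max + 1, step)
    zs = range(0, z_max + 1, step)
    return list(product(xs, ys, zs))
```

Each point in this list is, in turn, converted to extracted image coordinates (S68) and tested against the recognition target color (S69).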
At S69, the target color determination means calls the first camera lens-corrected image and, referring in the target color LUT12 to the recognition target color C1 for the first camera image and the non-recognition target colors NC1-1, NC1-2, …, NC1-n, determines whether the pixel at the extracted image coordinates matches the recognition target color. If it matches the recognition target color, the process moves to S70. If it does not match, the process returns to S67, the next point of the three-dimensional coordinate space stored in the three-dimensional coordinate storage unit 13 is extracted, and the processing of S68 and S69 is performed again. At S70, the lens-corrected image of the second camera is processed in the same manner as S68 and S69. In this case, the image-space conversion coefficient used for the conversion to extracted image coordinates is that of the second camera image, and the color data referred to in the target color LUT12 are the recognition target color C2 for the second camera image and the non-recognition target colors NC2-1, NC2-2, …, NC2-n. At S71, the target color determination means 27 determines whether the recognition target color has been detected in both the first camera image and the second camera image. That is, when the pixel at the extracted image coordinate point of the second camera image is also determined to match the recognition target color, the extracted three-dimensional coordinates are taken as a detected coordinate point of the pointer 3 and stored as such in the pointer coordinate memory unit 33.
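The per-pixel decision at S69 can be sketched as a tolerance test against the LUT entries: a pixel counts as the recognition target only if it is near C1 and not near any non-recognition color. The per-channel distance metric and the tolerance value are assumptions; the patent does not specify how the match is computed.

```python
def matches(pixel, color, tol=30):
    """Hypothetical per-channel tolerance test; the patent does not
    specify the color-matching metric."""
    return all(abs(p - c) <= tol for p, c in zip(pixel, color))

def is_recognition_target(pixel, c_rec, nc_list, tol=30):
    """S69: True if `pixel` matches the recognition target color and
    matches none of the non-recognition target colors in the LUT."""
    if any(matches(pixel, nc, tol) for nc in nc_list):
        return False
    return matches(pixel, c_rec, tol)
```

Teaching the non-recognition colors (S16/S17) is what lets this test reject background pixels that happen to sit near the recognition color.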
When the recognition target color is not detected in the second camera image, the process returns to S67, the next point of the three-dimensional coordinate space stored in the three-dimensional coordinate storage unit 13 is extracted, and the same processing as S68 and S69 is performed. At S72, the processing of S67 to S71 is repeated, and the recognition-target-color detection processing is performed for all points of the three-dimensional coordinate space. At S73, if the recognition target color has been detected in the above processing, the process proceeds to S74; if it has not been detected, the process moves to S76. At S74, the group of detected coordinate points stored in the pointer coordinate memory unit 33 is reduced to a single point by the coordinate value calculation means. The point may be determined, for example, by taking the center of gravity of the group of detected coordinate points, or by selecting from the group the coordinates of the point closest to the display surface. At S75, the determined detected coordinate point is stored as x1, y1, z1 at time t1. At S76, the process returns to S61, and the target is detected in the captured images of the next frame (time t2). In this way, the coordinate point x2, y2, z2 of the pointer at time t2 is stored in the pointer coordinate memory unit 33. Further, in a presentation system with three or more cameras, after the processing of S68 and S69 is performed on the lens-corrected image of the second camera at S70, the same processing as S68 and S69 is performed on the lens-corrected image of the third camera, and so on.
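The two reductions suggested at S74 — the center of gravity of the detected points, or the point nearest the display surface — can be sketched as follows (the display surface is assumed here to lie at z = 0):

```python
def centroid(points):
    """Center of gravity of a group of detected (x, y, z) points."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def closest_to_surface(points):
    """Point nearest the display surface, assumed to lie at z = 0."""
    return min(points, key=lambda p: abs(p[2]))
```

Either choice yields the single coordinate point stored as x1, y1, z1 at S75.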
In a presentation system using a very large display surface 2, once the environment setting relating to image-space conversion has been performed as described above, the range of the extracted three-dimensional coordinates used when calculating the three-dimensional coordinates of the pointer during steady operation may be limited to the range of the pointer's movement. This has the effect of improving calculation speed, among other benefits. (Second embodiment) A presentation system 1 with an added depth-position determination, belonging to a preferred embodiment, will be described (the second embodiment). The added depth-position determination is a process that decides, based on the depth position of the pointer in the three-dimensional space, whether or not to perform the drawing process. Fig. 8 is a flowchart showing the second embodiment. In the second embodiment, in the steady operation described above, the processing of S81 to S84 is inserted between S430 (determining the three-dimensional coordinates of the pointer) and S51 (starting the drawing). A Z-axis threshold for the pointer is stored in advance in the depth value memory unit 19, or alternatively both ends of a Z-axis range are stored in advance. In the threshold case, the condition states that the process moves to the drawing process when z1 is larger (or smaller) than the threshold. In the range case, the condition states that the process moves to the drawing process when z1 lies between the two ends (or outside them). These depth values and conditions are stored in the depth value memory unit 19. After the detection of the pointer coordinates at S430, the depth determination means 30 compares, at S81, the value stored in the depth value memory unit 19 with the z1 value of the pointer coordinates stored in the pointer coordinate memory unit 33. At S82, if z1 satisfies the threshold condition, the process moves to S83.
Otherwise, if the value of z1 does not satisfy the threshold condition, the process moves to S84. At S83, the process moves to S51 and the drawing process is performed. At S84, for example, command recognition processing is started; in a configuration without command recognition, the process simply returns to S51. Command recognition is the processing that recognizes a combination of the pointer coordinates (x1, y1, z1), (x2, y2, z2), …, (xn, yn, zn) at a plurality of times t1, t2, …, tn as a particular command. If, on entering the command recognition process, no command is recognized, the process returns to S430. The above command processing is one example of a predetermined process; when the threshold condition is not satisfied, the system may also perform no predetermined process at all and simply return to S430. (Third embodiment) A presentation system 1 with an added movement-speed determination, belonging to another preferred embodiment, will be described (the third embodiment). The added movement-speed determination decides, based on the movement speed of the pointer in the three-dimensional space, whether the drawing process or a predetermined operation is performed. Fig. 9 is a flowchart showing the third embodiment. In the third embodiment, in the steady operation described above, the processing of S91 to S96 is inserted between S430 (determining the three-dimensional coordinates of the pointer) and S51 (starting the drawing). A movement distance value for the pointer is stored in advance in the movement distance value memory unit 16; in the present embodiment, the movement distance value is a single threshold. After the detection of the pointer coordinates at S430, the first operation determining means 25 calculates, at S91, the distance between the position of the pointer at time t1 and the position of the pointer at time t2 stored in the pointer coordinate memory unit 33 (the calculated movement distance). At S92, the first operation determining means 25 compares the movement distance value in the movement distance value memory unit 16 with the calculated movement distance. When the calculated movement distance is larger than the movement distance value (the threshold), the process proceeds to S93; when it is smaller, the process proceeds to S96. At S93, the first operation determining means transmits a command signal to the command control means 26. At S94, the command control means 26, having received the command signal, performs the predetermined processing on the created image being displayed, or on the created image selection means. At S95, the display control means 22 displays the processed created image on the display surface 2. At S96, the process proceeds to S51, where the drawing process is performed. The values stored in the movement distance value memory unit 16 and the determination by the first operation determining means 25 may be, for example, as follows: (1) a single threshold is stored, and the calculated movement distance is compared against it; (2) both ends of a single range are stored, and it is determined whether the calculated movement distance lies inside or outside the range; (3) the ends of two ranges are stored, and it is determined in which of the ranges the calculated movement distance lies. The command control means 26 that receives the command signal may also turn the page of the created image, start or stop an animation, or start or stop the generation of sound. [Brief Description of the Drawings] Fig. 1 shows the configuration of the presentation system 1. Fig. 2 is an explanatory diagram of the contents of the target color LUT12.
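The S91 distance computation and the three storage-and-comparison schemes (1)–(3) can be sketched together; the Euclidean metric is an assumption, as the patent does not name the distance measure.

```python
import math

def move_distance(p1, p2):
    """S91: distance between the pointer positions at times t1 and t2
    (Euclidean metric assumed)."""
    return math.dist(p1, p2)

def exceeds_threshold(d, threshold):
    """Scheme (1): a single threshold; exceeding it triggers the command (S92/S93)."""
    return d > threshold

def in_range(d, lo, hi):
    """Scheme (2): trigger when the distance falls inside one stored range."""
    return lo <= d <= hi

def which_range(d, ranges):
    """Scheme (3): index of the stored range containing d, or None."""
    for i, (lo, hi) in enumerate(ranges):
        if lo <= d <= hi:
            return i
    return None
```

A fast flick of the pointer thus produces a large `move_distance`, sending the flow to the command signal at S93 rather than the drawing process at S96.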
Fig. 3 is a flowchart of the recognition target color teaching process. Fig. 4 is a flowchart of the environment setting relating to image-space conversion. Fig. 5 is a flowchart of steady operation. Figs. 6 and 7 are flowcharts of the three-dimensional coordinate determination processing of the pointer 3. Fig. 8 is a flowchart of the portion added for the depth position determination of the pointer 3. Fig. 9 is a flowchart of the portion added for the movement speed determination of the pointer 3.
[Main component symbol description]
1 Presentation system
2 Display surface
3 Pointer
4 First camera
5 Second camera
7 Camera field of view
8 System controller


Claims (1)

1. A presentation system, comprising:
a display surface extending in the X-axis and Y-axis directions;
a created image memory unit that stores created images to be displayed on the display surface;
created image selection means for selecting a created image stored in the created image memory unit;
display control means for displaying the created image selected by the created image selection means on the display surface;
two cameras that capture a three-dimensional coordinate space containing the display surface;
a pointer movable within the three-dimensional coordinate space;
a captured image memory unit that stores the images simultaneously captured by the two cameras;
coordinate value calculation means for calculating, from the simultaneously captured images stored in the captured image memory unit, three-dimensional coordinate values of the pointer with a fixed point in the three-dimensional coordinate space as a reference point;
a pointer coordinate memory unit that stores the three-dimensional coordinate values; and
drawing means for generating a drawing image by referring to the three-dimensional coordinate values of the pointer stored in the pointer coordinate memory unit, in accordance with those coordinate values;
wherein the display control means displays the drawing image on the display surface together with the selected and displayed created image;
the coordinate value calculation means calculates the three-dimensional coordinate values x1, y1, z1 of the pointer at time t1 and stores them in the pointer coordinate memory unit, and further calculates the three-dimensional coordinate values x2, y2, z2 of the pointer at time t2, after a predetermined time has elapsed from time t1, and stores them in the pointer coordinate memory unit; and
the drawing image generated by the drawing means is image-corrected in accordance with the Z-axis coordinate value z1 at time t1, its start point is determined in accordance with the X-axis and Y-axis coordinate values x1, y1 at time t1, and its end point is determined in accordance with the X-axis and Y-axis coordinate values x2, y2 at time t2.
2. The presentation system of claim 1, wherein the drawing image is a line, and image correction producing a relatively thick line is performed when the value of z1 is relatively close to the display surface, while image correction producing a relatively thin line is performed when the value of z1 is relatively far from the display surface.
3. The presentation system of claim 1, further comprising a depth value memory unit that stores a predetermined depth value, and depth determination means; wherein the depth determination means compares the value of z1 with the depth value, and the drawing means generates the drawing image when the value of z1 satisfies the depth value.
4. The presentation system of claim 1, further comprising a movement distance value memory unit that stores a predetermined movement distance value, and first operation determination means; wherein the first operation determination means calculates the distance between the position of the pointer at time t1 and the position of the pointer at time t2, compares it with the movement distance value, and performs a first predetermined operation when the calculated distance satisfies the movement distance value.
TW098109824A 2008-03-27 2009-03-26 A presentation system TW200947261A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2008083910A JP2009237950A (en) 2008-03-27 2008-03-27 Presentation system

Publications (1)

Publication Number Publication Date
TW200947261A true TW200947261A (en) 2009-11-16

Family

ID=41113239

Family Applications (1)

Application Number Title Priority Date Filing Date
TW098109824A TW200947261A (en) 2008-03-27 2009-03-26 A presentation system

Country Status (3)

Country Link
JP (1) JP2009237950A (en)
TW (1) TW200947261A (en)
WO (1) WO2009119025A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI410825B (en) * 2010-06-15 2013-10-01 Acer Inc Presentation control method

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6547476B2 (en) * 2015-07-14 2019-07-24 株式会社リコー INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING DEVICE, AND PROGRAM

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001306254A (en) * 2000-02-17 2001-11-02 Seiko Epson Corp Inputting function by slapping sound detection
JP4867586B2 (en) * 2005-11-25 2012-02-01 株式会社セガ Game device

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI410825B (en) * 2010-06-15 2013-10-01 Acer Inc Presentation control method

Also Published As

Publication number Publication date
JP2009237950A (en) 2009-10-15
WO2009119025A1 (en) 2009-10-01

Similar Documents

Publication Publication Date Title
US10484561B2 (en) Method and apparatus for scanning and printing a 3D object
JP5822400B2 (en) Pointing device with camera and mark output
JP6525611B2 (en) Image processing apparatus and control method thereof
JP6019567B2 (en) Image processing apparatus, image processing method, image processing program, and imaging apparatus
JP5157647B2 (en) camera
WO2018045592A1 (en) Image shooting method and device, and terminal
WO2005024777A1 (en) Image display apparatus, image display program, image display method, and recording medium recording image display program therein
KR20170027266A (en) Image capture apparatus and method for operating the image capture apparatus
JP5220157B2 (en) Information processing apparatus, control method therefor, program, and storage medium
JP2022027841A (en) Electronic apparatus, program, and storage medium
JP2017139768A (en) Display device, display method, and program
TW200947265A (en) Presentation system
JP5312505B2 (en) GAME DEVICE, GAME DEVICE CONTROL METHOD, AND PROGRAM
JP2014017665A (en) Display control unit, control method for display control unit, program, and recording medium
JP4580831B2 (en) Terminal device, program, and recording medium
TW200947261A (en) A presentation system
JP2019193147A (en) Imaging device, imaging method, and program
JP5886662B2 (en) Image display device
KR20110088275A (en) Mobile communication terminal had a function of transformation for a picture
TWI252044B (en) The projection device of photograph image and image processing method using the same and recorder media with recording program
JP2005122328A (en) Photographing apparatus, image processing method and program thereof
JPH07160412A (en) Pointed position detecting method
JP2005115897A (en) Three-dimensional image creation device and method
JP4363153B2 (en) Imaging apparatus, image processing method thereof, and program
JP6630337B2 (en) Electronic device and control method thereof