TW201225658A - Imaging device, image-processing device, image-processing method, and image-processing program - Google Patents

Imaging device, image-processing device, image-processing method, and image-processing program

Info

Publication number
TW201225658A
TW201225658A TW100130899A
Authority
TW
Taiwan
Prior art keywords
image
coordinates
subject
distance
distance information
Prior art date
Application number
TW100130899A
Other languages
Chinese (zh)
Inventor
Toshiyuki Inoko
Koki Saito
Original Assignee
Teamlab Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Teamlab Inc filed Critical Teamlab Inc
Publication of TW201225658A publication Critical patent/TW201225658A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/25Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor

Abstract

The object is to make it possible to crop a subject out of image information more accurately, without relying on the skill of the user. The invention comprises: an image camera (15) that generates an image of the subject by capturing it; a distance camera (16) that measures the distance to the object displayed at each part of the visual range containing the subject and the background when that range is taken as an image, and generates distance information in which the coordinates of the visual range on the image are associated with those distances; a coordinate converter (101) that converts the coordinates of the distance information into coordinates on the image of the subject, generating converted distance information; and an image-cropping unit (102) that extracts, from among the converted coordinates contained in the converted distance information, the coordinates whose associated distance satisfies a predetermined condition, separates the region specified by the extracted coordinates from the other regions in the image of the subject, and outputs the result.

Description

201225658 VI. Description of the Invention

[Technical Field]

The present invention relates to an imaging device, an image processing device, an image processing method, and an image processing program, and in particular to processing that separates a subject contained in a captured image from its background.

[Prior Art]

One way of making use of a captured image of a subject is to cut the subject out along its outline so that it can, for example, be composited with a different background. Such subject cut-out processing is sometimes performed manually by an operator, and is sometimes realized by image processing applied to image information digitized through photoelectric conversion.

As a way of cutting out a subject by image processing, a method has been proposed in which the operator designates a region of the image that contains the outline, and the outline of the subject is detected by comparing the image density of the designated region with a separately specified reference density (see, for example, Patent Document 1).

Further, in order to reduce the influence of the background and achieve a more accurate cut-out, a method has been proposed in which, within the designated region, groups of pixels with similar feature quantities are classified into clusters, and each cluster is classified as lying inside or outside the outline of the subject, thereby detecting the outline (see, for example, Patent Document 2).

On the other hand, a method has also been proposed that detects the background portion and the subject portion of an image from two images of the same scene captured under different focus conditions (see, for example, Patent Document 3).

[Prior Art Documents]
[Patent Documents]
[Patent Document 1] Japanese Laid-Open Patent Publication No. S63-5745
[Patent Document 2] Japanese Laid-Open Patent Publication No. H9-83776
[Patent Document 3] Japanese Laid-Open Patent Publication No. H10-233919

[Summary of the Invention]
[Problem to Be Solved by the Invention]

With the techniques disclosed in Patent Documents 1 and 2, the user must designate one or two regions. A user who understands the characteristics of image cut-out can designate an appropriate region, but it is difficult for an ordinary user to understand those characteristics well enough to do so.

Moreover, since all of Patent Documents 1 to 3 make their judgments from the digitized image information alone, the cut-out processing of the subject sometimes cannot be carried out properly, depending on the state of that image information, such as the relationship between the subject color and the background color.

The present invention was conceived in view of the above circumstances, and its object is to enable more accurate separation processing, without relying on the skill of the user, when separating the subject from the background in image information obtained by photographing the subject.

[Means for Solving the Problem]

To solve the above problem, one aspect of the present invention is an imaging device comprising: an image capturing unit that generates, by imaging, a subject image showing a subject and a background; and a distance information generation unit that measures, for each part of the visual range containing the subject and the background when that range is taken as an image, the distance to the object displayed there, and generates distance information in which the coordinates on the image of the visual range are associated with those distances.

The imaging device is characterized by further comprising: a coordinate conversion unit that converts the coordinates of the acquired distance information into coordinates on the subject image, thereby generating converted distance information; a coordinate extraction unit that extracts, from among the converted coordinates contained in the generated converted distance information, those coordinates whose associated distance satisfies a predetermined condition; and an image separation unit that separates, within the subject image, the region specified by the extracted coordinates from the other regions, and outputs the result.

Another aspect of the present invention is an image processing device characterized by comprising: a distance information acquisition unit that acquires distance information in which the coordinates on the image of the visual range containing a subject and a background are associated with the distances, measured for each part of that range when it is taken as an image, to the object displayed there; a subject image acquisition unit that acquires a subject image showing the subject and the background; a coordinate conversion unit that converts the coordinates of the acquired distance information into coordinates on the subject image, thereby generating converted distance information; a coordinate extraction unit that extracts, from among the converted coordinates contained in the generated converted distance information, those coordinates whose associated distance satisfies a predetermined condition; and an image separation unit that separates, within the acquired subject image, the region specified by the extracted coordinates from the other regions.

Still another aspect of the present invention is an image processing method characterized by: measuring, for each part of the visual range containing a subject and a background when that range is taken as an image, the distance to the object displayed there, acquiring distance information in which the coordinates on the image of the visual range are associated with those distances, and storing it in a storage medium; acquiring a subject image showing the subject and the background and storing it in a storage medium; converting the coordinates of the stored distance information into coordinates on the subject image to generate converted distance information, and storing it in a storage medium; and extracting, from among the converted coordinates contained in the generated converted distance information, those coordinates whose associated distance satisfies a predetermined condition, separating, within the acquired subject image, the region specified by the extracted coordinates from the other regions, and storing the result in a storage medium.

Still another aspect of the present invention is an image processing program characterized by causing an information processing device to execute: a step of measuring, for each part of the visual range containing a subject and a background when that range is taken as an image, the distance to the object displayed there, acquiring distance information in which the coordinates on the image of the visual range are associated with those distances, and storing it in a storage medium; a step of acquiring a subject image showing the subject and the background and storing it in a storage medium; a step of converting the coordinates of the stored distance information into coordinates on the subject image to generate converted distance information and storing it in a storage medium; and a step of extracting, from among the converted coordinates contained in the generated converted distance information, those coordinates whose associated distance satisfies a predetermined condition, separating, within the acquired subject image, the region specified by the extracted coordinates from the other regions, and storing the result in a storage medium.

[Effect of the Invention]

According to the present invention, when separating the subject from the background in image information obtained by photographing the subject, more accurate separation processing can be performed without relying on the skill of the user.

[Embodiment]

Hereinafter, an embodiment of the present invention will be described in detail with reference to the drawings.
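As a rough illustration of the processing chain defined by the method aspect above, the following sketch assumes a depth map supplied as (u, v, z) triples and a caller-supplied convert() function standing in for the coordinate conversion step; all names and the list-of-lists mask representation are illustrative, not taken from the patent.

```python
# Hypothetical sketch of the claimed chain: convert coordinates, extract those
# whose distance satisfies a condition, then separate the specified region.

def extract_subject_mask(distance_info, convert, width, height, z_max):
    """Build a binary mask over the subject image from converted distance info.

    distance_info: iterable of (u, v, z) triples in depth-image coordinates.
    convert:       function (u, v, z) -> (uj, vj) in subject-image coordinates.
    z_max:         distance threshold; points farther than this are background.
    """
    mask = [[0] * width for _ in range(height)]
    for u, v, z in distance_info:
        if z <= z_max:                      # predetermined condition on distance
            uj, vj = convert(u, v, z)       # coordinate conversion step
            if 0 <= uj < width and 0 <= vj < height:
                mask[vj][uj] = 1            # extracted coordinate
    return mask

def separate(image, mask):
    """Keep subject pixels where mask == 1; blank out the other regions."""
    return [[image[y][x] if mask[y][x] else None
             for x in range(len(image[0]))]
            for y in range(len(image))]
```

In a real implementation the mask would be densified (see the dilation step described later in the embodiment) before the separation; here the threshold and conversion are kept deliberately minimal.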
In this embodiment, the description takes as an example an imaging device that includes an image camera, which captures the full image, and a distance camera, which captures a color-reduced image such as a grayscale image and also acquires the distance to the objects, such as the subject and the background, displayed at each position on that image (hereinafter, distance information), and that automatically carries out cut-out processing of the outline of the subject displayed in the image captured by the image camera.

Fig. 1 is a block diagram of the hardware configuration of the imaging device 1 according to this embodiment. As shown in Fig. 1, the imaging device 1 has the same configuration as a general information processing device such as a server or a PC (Personal Computer), and additionally contains the distance camera and image camera described above. That is, in the imaging device 1 of this embodiment, a CPU (Central Processing Unit) 10, a RAM (Random Access Memory) 11, a ROM (Read Only Memory) 12, an HDD (Hard Disk Drive) 13, and an I/F 14 are connected via a bus 19, and an image camera 15, a distance camera 16, an LCD (Liquid Crystal Display) 17, and an operation unit 18 are connected to the I/F 14.

The CPU 10 is the computing means and controls the operation of the imaging device 1 as a whole. The RAM 11 is a volatile storage medium that can read and write information at high speed, and is used as a working area when the CPU 10 processes information. The ROM 12 is a read-only non-volatile storage medium that stores programs such as firmware. The HDD 13 is a non-volatile storage medium that can read and write information, and stores the OS (Operating System) and various control programs and application programs. The I/F 14 connects the bus 19 to various kinds of hardware, networks, and the like, and controls them.

The image camera 15 is an image capturing unit that contains a photoelectric conversion element and converts received light information into electronic information to generate image information. The distance camera 16 is a distance information generation unit that, like the image camera 15, generates a grayscale image by photoelectric conversion, and also measures the distance to each object from the time taken for projected light to be reflected back, thereby generating distance information for the objects displayed at the respective positions on that grayscale image. As the distance camera 16, for example, the "ZC-1000" series of three-dimensional image distance cameras manufactured by OPTEX Co., Ltd. can be used. The LCD 17 is a visual user interface that lets the user check the state of the imaging device 1. The operation unit 18 is a user interface, such as a keyboard or a mouse, that lets the user input information to the imaging device 1.

In this hardware configuration, programs stored in a recording medium such as the ROM 12, the HDD 13, or an optical disc (not shown) are read out to the RAM 11, and the CPU 10 performs computations in accordance with those programs, thereby constituting a software control unit. The combination of the software control unit configured in this way with the hardware constitutes the functional blocks that realize the functions of the imaging device 1 according to this embodiment.

Next, the functional configuration of the imaging device 1 according to this embodiment will be described with reference to Fig. 2. Fig. 2 is a block diagram of the functional configuration of the imaging device 1. As shown in Fig. 2, the imaging device 1 contains functions realized by an image processing unit 100 and functions realized by a display control unit 110. As described above, the image processing unit 100 and the display control unit 110 are functions achieved by the software control unit, realized by the CPU 10 computing in accordance with programs read out to the RAM 11, working together with the hardware.

The image processing unit 100 executes, based on the distance information acquired by the distance camera 16, image processing that cuts out the outline of the subject Q displayed in the image information generated by the image camera 15. As shown in Fig. 2, the image processing unit 100 contains a coordinate conversion unit 101 and an image cut-out unit 102.

Here, an example of the distance information acquired by the distance camera 16 is described with reference to Fig. 3. As shown in Fig. 3, the distance information of this embodiment contains, in addition to the horizontal coordinate "u" (pixel) and the vertical coordinate "v" (pixel) on the image generated by the distance camera 16, the distance "z" (mm), measured from the light receiving surface of the distance camera 16, to the subject or background displayed at the image position specified by "u" and "v". In other words, in the distance information, the coordinates on the grayscale image captured by the distance camera 16 are associated with the distance to the object displayed at each of those coordinates. From this information, the real-space distance of the subject Q or the background displayed in the image generated by the distance camera 16 can be identified.

The coordinate conversion unit 101 converts the coordinate system of the distance information acquired by the distance camera 16, from the coordinate system of the image captured by the distance camera 16 into the coordinate system of the image information captured by the image camera 15. As shown in Fig. 2, the coordinate conversion unit 101 contains the coordinate conversion functions "image plane / three-dimensional space", "rotation / translation", "three-dimensional space / image plane", and "distortion correction".

The image cut-out unit 102 applies a predetermined threshold to the distance information converted by the coordinate conversion unit 101, thereby extracting, from the image captured by the image camera 15, the pixels whose distance from the camera is within a predetermined range, and cutting out the outline of the subject Q. The display control unit 110 displays the image of the subject cut out by the image cut-out unit 102 on the LCD 17.

Converting the coordinate system in this way, so that the distance information acquired by the distance camera 16 can be applied to the image captured by the image camera 15, is one of the main points of this embodiment. As a result, even when the resolution of the image captured by the distance camera 16 is too low to obtain an image of the desired quality, or when the distance camera 16 does not support full color, an image of the desired quality can still be obtained, because the image itself is captured by the image camera 15.

Next, the functions of the coordinate conversion unit 101 are described. The "image plane / three-dimensional space", "rotation / translation", and "three-dimensional space / image plane" conversion functions are described first; the "distortion correction" function is described later. To explain the "image plane / three-dimensional space" conversion, the relationship between coordinates on an image captured by a camera and coordinates in three-dimensional space is described with reference to Fig. 4.

Fig. 4 illustrates, with a perspective projection model, the position of the subject and the coordinates of the captured image in a three-dimensional space whose origin is the light receiving unit 16a of the distance camera 16, with the optical axis of the distance camera 16 as the Z axis, the horizontal direction as the Y axis, and the vertical direction as the X axis. The "z" of Fig. 3 corresponds to the value in the Z-axis direction of Fig. 4. As shown in Fig. 4, the image captured by the camera is the scenery contained, when viewed from the camera along the optical axis, within a virtual frame (the thick dashed frame in Fig. 4) placed at the focal length f of the camera. The coordinates within this frame are the coordinates "u", "v" on the image captured by the distance camera 16.

At this time, the light reflected from the objects within the frame, including the subject Q, is condensed toward the light receiving unit 16a of the distance camera 16, as shown in Fig. 4. Therefore, when the optical axis of the distance camera 16 coincides with the Z axis and the aspect ratio of the image, that is, of the frame in Fig. 4, is 1:1, a point p(ui, vi) on the image captured by the distance camera 16 can be expressed, using the corresponding point P(Xi, Yi, Zi) on the actual subject Q and the focal length f, by the following equation (1):

    ui = f * Xi / Zi,  vi = f * Yi / Zi  ... (1)
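The perspective relation of equation (1) can be sketched as follows. This is a minimal illustration, extended with per-axis focal lengths and a principal point (cx, cy) as in a general pinhole model; the numeric parameter values used in the check below are illustrative. Because the distance camera supplies Zi for every pixel, the projection can be inverted to recover the three-dimensional point.

```python
# Hypothetical pinhole-model sketch: lift a depth-image pixel (ui, vi) with
# measured distance zi to a 3-D point, and project it back as a consistency check.

def backproject(ui, vi, zi, fx, fy, cx, cy):
    """Invert zi * [ui, vi, 1]^T = K * [Xi, Yi, Zi]^T for Xi and Yi."""
    xi = (ui - cx) * zi / fx
    yi = (vi - cy) * zi / fy
    return xi, yi, zi

def project(xi, yi, zi, fx, fy, cx, cy):
    """Forward perspective projection of a 3-D point to pixel coordinates."""
    return fx * xi / zi + cx, fy * yi / zi + cy
```

Projecting the back-projected point returns the original pixel, which is the round trip the coordinate conversion unit relies on.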

The coordinate conversion unit 101 computes, based on the above equation (1), the following equation (2), in order to convert a coordinate p(ui, vi) on the image captured by the distance camera 16 into a coordinate P(Xi, Yi, Zi) in three-dimensional space:

    Zi * [ui, vi, 1]^T = [[f1x, 0, c1x], [0, f1y, c1y], [0, 0, 1]] * [Xi, Yi, Zi]^T  ... (2)

Here, the 3-row, 3-column matrix in equation (2) containing "f1x", "f1y", "c1x", and "c1y" consists of the internal parameters representing the focal length and the optical-axis offset of the distance camera 16. "f1x" and "f1y" are the focal lengths of the distance camera 16 in the horizontal and vertical directions; as described above, they are equal when the aspect ratio is 1:1. "c1x" and "c1y" are the offsets of the optical axis of the distance camera 16 in the horizontal and vertical directions.

The internal parameters of the distance camera 16 can be obtained by, for example, Zhang's method: the focal length of the distance camera 16 is fixed, a calibration board is photographed from various angles, and the positions of the grid points of the photographed board are computed. The coordinate conversion unit 101 stores the internal parameters of the distance camera 16 obtained in this way, and uses them to compute equation (2), thereby converting a coordinate (ui, vi) on the image captured by the distance camera 16 into a coordinate (Xi, Yi, Zi) in three-dimensional space.

Next, the "rotation / translation" and "three-dimensional space / image plane" coordinate conversion functions are described. As described above, the coordinate axes of the three-dimensional space are determined per camera. Therefore, as shown in Fig. 5, the coordinate axes of the distance camera 16 and of the image camera 15 differ. The "rotation / translation" conversion is the processing that converts the coordinate system of the three-dimensional space of the distance camera 16 into the coordinate system of the three-dimensional space of the image camera 15.

When converting coordinates in the three-dimensional space of the distance camera 16 into coordinates in the three-dimensional space of the image camera 15, the coordinate conversion unit 101 uses the external parameters "R | t", consisting of a 3-row, 3-column rotation matrix "R" and a 3-row, 1-column translation vector "t". Furthermore, the coordinate conversion unit 101 carries out the "rotation / translation" conversion and the "three-dimensional space / image plane" conversion at the same time.

The "three-dimensional space / image plane" conversion is the reverse of the "image plane / three-dimensional space" conversion realized by equation (2): it converts coordinates in three-dimensional space into coordinates on an image. In this embodiment, however, the aim is to convert coordinates on the image captured by the distance camera 16 into coordinates on the image captured by the image camera 15. In the "three-dimensional space / image plane" conversion, therefore, the coordinates already converted by the "rotation / translation" conversion into coordinates in the three-dimensional space of the image camera are converted, using the internal parameters of the image camera 15, into coordinates on the image captured by the image camera 15. This conversion is realized by the following equation (3):

    s * [uj, vj, 1]^T = [[f2x, 0, c2x], [0, f2y, c2y], [0, 0, 1]] * [[r11, r12, r13, t1], [r21, r22, r23, t2], [r31, r32, r33, t3]] * [Xi, Yi, Zi, 1]^T  ... (3)

Here, the 3-row, 3-column matrix in equation (3) containing "f2x", "f2y", "c2x", and "c2y" consists of the internal parameters representing the focal length and the optical-axis offset of the image camera 15. "f2x" and "f2y" are the focal lengths of the image camera 15 in the horizontal and vertical directions, which, as described above, are equal when the aspect ratio is 1:1. "c2x" and "c2y" are the offsets of the optical axis of the image camera 15 in the horizontal and vertical directions. The internal parameters of the image camera 15, like those of the distance camera 16 described above, can be obtained by, for example, Zhang's method.

The matrix in equation (3) containing "r11" to "r33" and "t1" to "t3" is the above-mentioned set of external parameters "R | t". The external parameters "R | t" can also be obtained by Zhang's method. As described above, "R | t" are the parameters needed to convert the coordinate system of the distance camera 16 into the coordinate system of the image camera 15. To obtain them, the image camera 15 and the distance camera 16 are fixed in the same arrangement as when the imaging device 1 is actually operated, and a calibration board facing a certain direction is photographed by both cameras, yielding a pair of board images captured by the image camera 15 and the distance camera 16 respectively.

Since this pair of images shows the same calibration board, the positions of its grid points are related by the external parameters "R | t". Therefore, by varying the position of the board to generate a plurality of image pairs, simultaneous equations can be solved to obtain "R | t". As described above, the distance camera 16 can generate a grayscale image by imaging, and this grayscale image can be used when obtaining the internal parameters and the external parameters "R | t".

The coordinate conversion unit 101 stores the internal parameters of the image camera 15 and the external parameters "R | t" obtained in this way, and uses this information to compute equation (3), thereby realizing the "rotation / translation" and "three-dimensional space / image plane" conversions simultaneously.

By this processing, the distance information, which was acquired as coordinates on the image captured by the distance camera 16 as shown in Fig. 3, is converted into coordinates on the image captured by the image camera 15. The coordinate conversion unit 101 outputs the distance information thus generated, which now corresponds to the captured image of the image camera 15 (hereinafter, converted distance information), to the image cut-out unit 102.

Next, the cut-out processing performed by the image cut-out unit 102 is described. Fig. 6(a) shows the points at the coordinates whose Z-axis distances are specified by the converted distance information, superimposed on the image containing the subject captured by the image camera 15 (hereinafter, the subject image). The resolution at which the distance camera 16 acquires Z-axis distances is lower than the resolution of the image generated by the image camera 15; therefore, when the coordinates of the converted distance information are superimposed on the subject image, they appear as discrete points, as shown in Fig. 6(a).

By applying a threshold to the Z-axis distance in the converted distance information, the image cut-out unit 102 can extract only the points whose objects lie within a predetermined distance from the camera. That is, the image cut-out unit 102 functions as a coordinate extraction unit. Fig. 6(b) shows the points extracted in this way superimposed on the subject image; the points overlapping the subject are extracted. The points extracted as shown in Fig. 6(b) are hereinafter referred to as extracted points.

The image cut-out unit 102 deletes from the subject image the portions that do not overlap the extracted points, thereby extracting the subject. However, as described above, the points of the converted distance information are discrete in the subject image, so the extracted points cannot be applied directly. The image cut-out unit 102 therefore sets the discrete points as white pixels and the other regions as black pixels, and repeats image dilation processing so that the discrete points become connected into a single region. Dilation processing replaces a pixel of interest with a white pixel if even one of the pixels around it is white. The image cut-out unit 102 repeats this dilation until each extracted point is connected to its neighboring points vertically, horizontally, and diagonally.

Fig. 7(a) shows the state in which, as a result of the above dilation, neighboring extracted points have become connected. In going from the state of Fig. 6(b) to the state of Fig. 7(a), the image cut-out unit 102 not only repeats the dilation but also smooths the rough outline caused by it. Moreover, noise in the distance camera 16 sometimes produces extracted points at positions unrelated to the subject; the image cut-out unit 102 therefore performs noise removal by labeling, keeping only the largest region, or only the regions whose area is at least a predetermined threshold.

The image cut-out unit 102 then deletes from the subject image the portions other than those corresponding to the region generated as shown in Fig. 7(a) (hereinafter, the extraction target region); as shown in Fig. 7(b), the subject can thus be separated from the background and extracted.
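The dilation step that joins the sparse extracted points into one region can be sketched as follows. This is a minimal illustration: the stopping rule described above (repeat until neighboring points merge) is simplified to a fixed iteration count, and the pure-Python list-of-lists mask is illustrative only.

```python
# Hypothetical sketch of the dilation rule: a black (0) pixel becomes white (1)
# if at least one of its 8 neighbours is white.

def dilate(mask, iterations=1):
    h, w = len(mask), len(mask[0])
    for _ in range(iterations):
        out = [row[:] for row in mask]
        for y in range(h):
            for x in range(w):
                if mask[y][x] == 0:
                    # examine the 8-neighbourhood of (x, y)
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if (dy or dx) and 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                                out[y][x] = 1
        mask = out
    return mask
```

The erosion variant mentioned later in the embodiment is the mirror image of this rule (a white pixel with at least one black neighbour becomes black), which shrinks the outline expanded by dilation.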
背景分離,而抽出被攝體。此處,藉由上記膨脹處理,抽 出對象領域會變成如圖7(b)所示比實際之被攝體的輪廓還 要寬廣的領域。於圖7(b)中,將抽出對象領域當中,從實 際之被攝體多出的部分,予以塗黑表示。 影像切出部102,係在如圖7(b)所示已被抽出的影像 201225658 中,藉由進行例如先前的邊緣偵測之處理等,將被攝體的 輪廓外側多餘領域予以刪除,較爲理想。如圖7(b)所示, 由於是沿著被攝體的輪廓而將影像予以切出,因此被攝體 的輪廓與已被切出之影像的輪廓之間的濃度,可以想作大 略一定。因此,可以比先前技術在影像攝像機15拍攝所 生成之影像中偵測被攝體輪廓的方式,進行更高精度的邊 緣偵測。 又,影像切出部102,係如圖7(a)所示般地生成了抽 出對象領域之後,在進行被攝體影像的切出之前,亦可進 行抽出對象領域的收縮處理。所謂影像的收縮處理,係與 上記膨脹處理相反,是若某個注目像素之周圍只要有1個 像素是黑像素,則將該注目像素置換成黑像素的處理。藉 此,被上記膨脹處理所膨脹的輪廓會被收縮,可減輕如圖 7(b)所示的從被攝體多出的現象。 接著說明,座標轉換部1〇1的“扭曲補正”之座標轉 換機能。圖8(a)係爲“扭曲補正”之座標轉換機能爲目的 時的課題之圖示。如圖8(a)所示,對轉換後距離資訊適用 閩値而被抽出的抽出點,有時候會與被攝體影像中的被攝 體發生偏差。一般認爲這是由於攝像機之透鏡的半徑方向 及圓周方向之扭曲所導致。因此,座標轉換部101係將距 離攝像機1 6所取得之距離資訊加以轉換而生成轉換後距 離資訊之際,先補正該扭曲然後進行轉換。此外,於本實 施形態中,是假設距離攝像機16之透鏡上有存在扭曲而 進行補正。 -19- 201225658 本實施形態所述之座標轉換部101,係於上述的式(3) 之計算中’還會進行“扭曲補正”之處理。此處,上記式 (3)的目十算’係等同於以下的式(4)〜(6)。其中,z表0。 X Xi y 二 R Yi Z > Li X =x / z V =y / z ui = :弋· X’ H :V / 一 (5) c, (4) (6) 對此’若考慮透鏡之扭曲,則上記式(6)係可被以下 的式(7)、(8)所置換。 X' = x’( l+kf+l^r4 ) + Zp/’y’ + p2( r2+2x'2) y ’ = y'( l+k^+k〆)+ 2ρ/ r2+2y’2 ) + 2p2x Υ ( 7 ) where r2 = x 2 + y,2 (8) vi - Ty y + cy 此處,式(7)中的“ kl ” 、 “ k2,’以及“ P! 
” ' “ P 2 ” ’係分別爲半徑方向、圓周方向之扭曲係數。亦 即’式(7)是用來補正透鏡所致之扭曲的式子。於本實施 -20- 201225658 形態中,雖然是以考慮分別展開至2次爲止的係數爲例 子,但譯可考慮3次以上的係數。這些扭曲係數,譯可藉 由校正而求出。亦即,根據上述的距離攝像機16之內部 參數的求出之際所生成的複數檢查板之影像。將各格子點 之位置適用上記式子來進行演算,就可求出“k〆’、 “k2”以及“Pl” 、 “P2”的扭曲係數。 此外,上述係數所考慮的次方,係隨著攝像機與被攝 體之距離來加以決定,較爲理想。一般而言,攝像機與被 攝體的距離越近,則扭曲會越大。因此,攝像機與被攝體 的距離越近,就考慮越高次的係數來進行計算,就可更合 適地進行扭曲補正。 座標轉換部1〇1,係記憶著如此求出的扭曲係數,將 距離攝像機16的三維座標(Xi,Yi,Zi)予以輸入,因此依 照上述的式(3)而求出影像攝像機15之攝像所致之影像上 的座標(Uj,V〗)之際,藉由使用上記式(4)、(5)、(7)、(8) ’ 就可在透鏡之扭曲是已被補正的情況下,獲得影像攝像機 15之攝像所致之影像上的座標。藉此,如圖8(b)所示’ 可以消除抽出點與被攝體之偏差。 如以上說明,在本實施形態所述之攝像裝置1中’係 從被攝體影像切出有被攝體被顯示之部分的時候,原則1 不使用影像的濃度資訊,而是基於距離攝像機1所取得的 距離資訊來進行處理。又,在本實施形態所述之攝像裝置 1中,並沒有對使用者要求操作,而是由影像處5里部 1 00,基於所被給予之資訊而自動執行處理。因此’於影 -21 - 201225658 像的切出處理中,可不依靠使用者的熟練度,就能進行更 高精度的切出處理。 此外’於上記實施型態中,如圖2所示,雖然以含有 影像攝像機15及距離攝像機16的攝像裝置1爲例子來說 明’但亦可以影像處理部1 00單體或用來實現影像處理部 1〇〇所需之程式的方式來提供。此時,曾拍攝被攝體影像 的第1攝像機的內部參數曾取得距離資訊的第2攝像機的 內部參數以及第1攝像機與第2攝像機的外部參數,必須 要另行取得。 作爲外部參數的取得方法,係除了上述立體校正所致 的方法以外,若影像攝像機15及距離攝像機16中有搭載 GPS(Global Positioning System)之類的測位系統且爲高精 度者’則亦可使用該資訊。具體而言,影像攝像機1 5及 距離攝像機1 6 ’係分別取得被攝體影像、距離資訊之 際,藉由所搭載的測位系統而同時取得資訊取得時的位置 及方位,輸入至座標轉換部1 0 1。 藉此,座標轉換部1〇1,係基於已被輸入的位置及方 位之資訊,就可求出用來把距離攝像機16的三度空間之 座標系轉換成影像攝像機15的三度空間之座標系所需的 外部參數。此外,上記所被輸入的資訊當中,亦可分別求 出因方位差異而導致的旋轉向量R、因位置差異而導致的 平移向量t。 此外,於上記實施形態中,如圖7(a)、(b)所示,是 以從被攝體影像中刪除抽出對象領域所對應之領域以外的 -22- 201225658 部分爲例子來說明。除此以外,亦可將對應於抽出對象領 域之領域與其他領域,當做個別的圖層而加以保存。亦 即,影像切出部I 02,係成爲影像分離部之機能而至少將 上記抽出對象領域所特定的被攝體影像之領域與其他領域 予以分離而記憶至記憶媒體,藉此就可獲得本實施形態所 述之效果。藉此,在以下的操作中,可讓使用者選擇有無 背景部分影像的式樣,可提升使用者的便利性。 又,於上記實施型態中,是在對被攝體影像重疊轉換 後距離資訊的座標之際,如圖6(a)、(b)所示,將轉換後 距離資訊的座標所特定的點,以轉換後距離資訊的解析 度、亦即距離攝像機16之解析度與被攝體影像之解析度 的比率所相應的間隔,配置在被攝體影像上的情形爲例子 來加以說明。換言之,在圖6(a)、(b)的例子中,是將被 轉換後距離資訊的座標所特定的各點,以隨應於距離攝像 機1 6解析度與被攝體影像解析度之比率的間隔,對應至 被攝體影像上的各像素。亦即,由於是將轉換後距離資訊 的座標,以解析度不變的狀態,重疊至解析度較高的被攝 體影像,因此會變成如圖6(a)、(b)所示的離散狀態。 相對於此,亦可預先使轉換後距離資訊之解析度對應 於被攝體影像之解析度之後,再來進行重疊。例如,將被 轉換後距離資訊的座標所特定的各點視爲像素,藉由將各 個像素予以分割,就可使轉換後距離資訊之解析度與被攝 體影像之解析度一致。此種樣態係說明如下。 圖9 (a)係圖3所示之距離資訊是被座標轉換部1〇1轉 -23- 201225658 換成轉換後距離資訊後之狀態的圖示。如圖9(a)所示,在 圖3中已被特定成爲(Ul,V|)、(U2,v2). . 
·的座標,係 被當成轉換後的座標而被特定成爲(ιΓ 1,V’ 1)、(U’ 2, v’ 2)· · •。圖9(b)係將圖9(a)所示之各個轉換後的各 座標視爲像素’將各個像素做4分割之狀態的圖示。 於圖9(a)中被特定成爲(u、,V、)的點,係對應於圖 9(b)中所示的(u’h,ν’")、(u’12,v’12)、(u,13,v’13)、(u’14, v’m)之4點。各點係如圖10(a)、(b)所示,相當於將原本 的解析度下被(u、,ν’!)所特定之像素做縱橫2分割而配置 的4個像素》 藉由如此處理,就可不是生成如圖6(a)、(b)所示的 離散的點,而是被攝體影像中全像素是被1: 1對應的方 式,生成同一解析度的距離資訊。又,如圖9(b)所示,對 分割後的4點,係分別建立關連有分割前的距離“ Ζ Γ 。 因此,影像切出部102對距離Z適用閩値而僅抽出對象是 在所定距離以內的點的結果,係在如圖6(b)所示與被攝體 輪廓一致的狀態下,會成爲被埋在相鄰圖點間的狀態,可 理想地求出抽出對象領域。 此外,在圖9(a)、(b)及圖10(a)、(b)的樣態中,都進 行像素分割所致粗糙輪廓的平滑化處理、或雜訊截除所需 之標記處理等,較爲理想。又,在圖 9(a)、(b)及圖 10(a)、(b)的樣態中,都有考量到抽出對象領域與實際之 被攝體的輪廓沒有完全一致,抽出對象領域會從被攝體之 輪廓多出的情形,因此亦可和上記實施形態同樣地,進行 -24- δ 201225658 先前之邊緣偵測之處理等,將被攝體輪廓外側多餘領域予 以刪除。即使在此情況下,由於仍是沿著被攝體之輪廓而 切出影像,因此可以比先前技術在影像攝像機15拍攝所 生成之影像中偵測被攝體輪廓的方式,進行更高精度的邊 緣偵測,這點是相同的。 【圖式簡單說明】 [圖1]本發明的實施形態所述之攝像裝置的硬體構成 的區塊層。 [圖2]本發明的實施形態所述之攝像裝置的機能構成 的圖示。 [圖3 ]本發明的實施形態所述之距離攝像機所取得的 距離資訊之例子的圖示。 [圖4]本發明的實施形態所述之影像平面/三度空間的 座標轉換機能之原理的圖示》 [圖5 ]本發明的實施形態所述之旋轉.平移的座標轉 換機能之原理的圖示。 [圖6]將本發明的實施形態所述之距離資訊重疊至被 攝體影像之狀態的圖示。 [圖7]本發明的實施形態所述之抽出對象領域的圖 示。 [圖8]本發明的實施形態所述之透鏡之扭曲的補正樣 態的圖示。 [圖9 ]本發明的其他實施形態所述之轉換後距離資訊 -25- 201225658 的例子的圖示。 [圖1 〇]本發明的其他實施形態所述之轉換後距離資訊 所特定的點之分割樣態的圖示。 【主要元件符號說明】UJ f2x 0 C2? r11 r12 r13 Xi V/ S VJ 0 f2y C2y r21 "22 r23 of 2 Yi L1 J 〇1 J Vr31 r32 r33 zi L 1 J Here, the formula (3) contains "f2x", " The matrix of 3 rows and 3 columns of f2y", "C2x", and "c2y" is an internal parameter indicating the focal length of the image camera 15 and the deviation of the optical axis. "f2x" and "f2y" are the levels of the image camera 15. The focal lengths in the direction and the vertical direction are the same as described above if the length -15 - 201225658 is 1:1. "c2x" and "C2y" are the deviations of the optical axes of the horizontal direction and the vertical direction of the image camera 15. The internal parameters of the image camera 15 are the same as the internal parameters of the distance camera 16 and can be obtained by, for example, the method of Zhang. The equation (3) contains "ru" to "r33" and "tl". 
The matrix containing "r11" through "r33" and "t1" through "t3" is the external parameter "R|t" described above. The external parameter "R|t" can likewise be obtained by Zhang's method. As noted above, "R|t" is the parameter needed to convert the coordinate system of the distance camera 16 into the coordinate system of the image camera 15. To obtain it, the image camera 15 and the distance camera 16 are fixed in the same arrangement as in actual operation of the imaging device 1, and both cameras photograph a checkerboard facing in a given direction, yielding a pair of checkerboard images, one from each camera. Because the pair shows the same checkerboard, the positions of its grid points are related by the external parameter "R|t". By varying the board's position to generate multiple such pairs, simultaneous equations can be solved to determine "R|t". Further, as described above, the distance camera 16 can generate a grayscale image by imaging; this grayscale image can be used when obtaining the internal parameters and the external parameter "R|t". The coordinate conversion unit 101 stores the internal parameters of the image camera 15 and the external parameter "R|t" obtained in this way, and uses them to evaluate equation (3), thereby realizing the "rotation/translation" and "three-dimensional space/image plane" coordinate conversion functions at once. Through this processing, as shown in Fig. 3, the distance information, acquired as coordinates on the image captured by the distance camera 16, is converted into coordinates on the image captured by the image camera 15.
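As an illustration of recovering "R|t" from matched checkerboard corners, the following sketch uses a closed-form rigid-alignment (Kabsch) solution in place of the simultaneous-equation approach described above; the point values and the function name are hypothetical, and Zhang's full method additionally estimates the internal parameters.

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Recover R, t such that Q = R @ P + t from matched 3-D points.

    P, Q: (N, 3) arrays holding the same checkerboard corners expressed in
    the distance camera's and the image camera's coordinate systems.
    Closed-form Kabsch solution; a simplified stand-in for the
    simultaneous-equation solution described in the text.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T        # proper rotation, det = +1
    t = cQ - R @ cP
    return R, t

# synthetic check: corners transformed by a known R, t are recovered exactly
rng = np.random.default_rng(0)
P = rng.uniform(-1, 1, size=(12, 3))               # fictitious corner coords
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
Q = P @ R_true.T + t_true
R_est, t_est = estimate_rigid_transform(P, Q)
```

With noiseless correspondences the recovered rotation and translation match the ground truth; with real checkerboard detections they would be least-squares estimates.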
The coordinate conversion unit 101 outputs the distance information thus mapped onto the image captured by the image camera 15 (hereinafter, the "converted distance information") to the image cutout unit 102. Next, the cut-out processing performed by the image cutout unit 102 is described. Fig. 6(a) shows the points whose Z-direction distances are specified by the converted distance information, superimposed on the image containing the subject captured by the image camera 15 (hereinafter, the "subject image"). The resolution at which the distance camera 16 acquires Z-direction distances is lower than the resolution of the image generated by the image camera 15, so when the coordinates of the converted distance information are overlaid on the subject image they appear as discrete points, as shown in Fig. 6(a). By applying a threshold to the Z-direction distances in the converted distance information, the image cutout unit 102 can extract only the points whose objects lie within a predetermined distance of the camera; in this respect the image cutout unit 102 functions as a coordinate extraction unit. Fig. 6(b) shows the points so extracted, superimposed on the subject image: the points overlapping the subject are extracted. The points extracted as in Fig. 6(b) are hereinafter called extraction points. The image cutout unit 102 extracts the subject by deleting from the subject image the portions that do not overlap the extraction points. However, as noted above, the points of the converted distance information are scattered discretely over the subject image, so the extraction points cannot be applied directly.
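A minimal sketch of the conversion of equation (3) followed by the distance threshold is given below; the intrinsic values, baseline, and sample points are invented for illustration and are not taken from this description.

```python
import numpy as np

# illustrative intrinsics of the image camera 15 (f2x, f2y, c2x, c2y are
# made-up values, not taken from the patent)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # assume aligned axes for simplicity
t = np.array([0.05, 0.0, 0.0])   # assumed 5 cm baseline between the cameras

def project(points_3d):
    """Equation (3): s*[u, v, 1]^T = K [R|t] [X, Y, Z, 1]^T."""
    cam = points_3d @ R.T + t            # rotate/translate into image camera
    uv = cam[:, :2] / cam[:, 2:3]        # perspective divide -> (x', y')
    return uv * [K[0, 0], K[1, 1]] + [K[0, 2], K[1, 2]]

# distance-camera measurements: one (X, Y, Z) triple per sample point
pts = np.array([[0.0, 0.0, 1.0],    # subject, 1 m away
                [0.2, 0.1, 1.2],    # subject
                [0.0, 0.0, 4.0]])   # background, 4 m away
pix = project(pts)

# keep only the points whose associated distance satisfies the condition
mask = pts[:, 2] <= 2.0              # e.g. within 2 m of the camera
extraction_points = pix[mask]        # pixel coordinates of extraction points
```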
The image cutout unit 102 therefore sets each discrete point to a white pixel and all other areas to black pixels, and repeatedly applies image dilation so that the discrete points become connected into a single region. Image dilation is the process of replacing a pixel of interest with a white pixel whenever at least one pixel around it is white. The image cutout unit 102 repeats this dilation until each extraction point is connected to its neighbors vertically, horizontally, and diagonally. Fig. 7(a) shows the result of this dilation, in which adjacent extraction points have become connected. In evolving from the state of Fig. 6(b) to that of Fig. 7(a), the image cutout unit 102 also smooths the rough contours produced by the dilation. In addition, noise from the distance camera 16 can cause extraction points to appear at positions unrelated to the subject, so the image cutout unit 102 performs noise removal by labeling, keeping only the largest region, or the regions whose area exceeds a predetermined threshold. By deleting from the subject image everything outside the region generated in this way (hereinafter, the "extraction target region"), as in Fig. 7(a), the image cutout unit 102 separates the subject from the background and extracts the subject, as shown in Fig. 7(b). Because of the dilation, the extraction target region becomes wider than the actual contour of the subject; in Fig. 7(b), the portions of the extraction target region that extend beyond the actual subject are shown in black.
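The expansion (dilation) processing described above can be sketched as follows; the grid size and point positions are illustrative.

```python
import numpy as np

def dilate(mask):
    """One 8-neighbourhood dilation step: a pixel becomes white (True) if
    any pixel around it is white, as described for the expansion processing."""
    padded = np.pad(mask, 1)                 # pad border with black pixels
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out

# discrete extraction points (white pixels) on an otherwise black image
mask = np.zeros((7, 7), dtype=bool)
mask[1, 1] = mask[1, 4] = mask[4, 1] = mask[4, 4] = True

# repeat the dilation until the isolated points merge into one region
grown = mask
for _ in range(2):
    grown = dilate(grown)
```

The reverse operation (erosion, used later to shrink the expanded contour) swaps the roles of white and black in the same loop.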
In the image extracted as in Fig. 7(b), the image cutout unit 102 preferably deletes the excess region outside the subject's contour by applying, for example, conventional edge-detection processing. Since the image is cut out along the subject's contour, the density between the subject's contour and the contour of the cut-out image can be regarded as roughly constant, so edge detection can be performed with higher precision than in the prior art, which detects the subject's contour directly in the image generated by the image camera 15. Further, after generating the extraction target region as in Fig. 7(a) and before cutting out the subject image, the image cutout unit 102 may apply erosion to the extraction target region. Image erosion is the reverse of the dilation described above: a pixel of interest is replaced with a black pixel whenever at least one pixel around it is black. This shrinks the contour expanded by the dilation and reduces the overshoot beyond the subject shown in Fig. 7(b). Next, the "distortion correction" coordinate conversion function of the coordinate conversion unit 101 is described. Fig. 8(a) illustrates the problem this function addresses: as shown there, the extraction points obtained by thresholding the converted distance information sometimes deviate from the subject in the subject image. This is generally attributed to distortion of the camera lens in the radial and circumferential directions. The coordinate conversion unit 101 therefore corrects this distortion when converting the distance information acquired by the distance camera 16 into the converted distance information.
In the present embodiment, the correction assumes that the distortion exists in the lens of the distance camera 16. The coordinate conversion unit 101 of this embodiment performs the "distortion correction" within the computation of equation (3) above. Here, the computation of equation (3) is equivalent to the following equations (4) through (6), where z is nonzero:

  [x, y, z]^T = R [X_i, Y_i, Z_i]^T + t                ... (4)

  x' = x / z,  y' = y / z                              ... (5)

  u_i = f2x * x' + c2x,  v_i = f2y * y' + c2y          ... (6)

When the distortion of the lens is taken into account, equation (6) is replaced by the following equations (7) and (8):

  x'' = x'(1 + k1*r^2 + k2*r^4) + 2*p1*x'*y' + p2*(r^2 + 2*x'^2)
  y'' = y'(1 + k1*r^2 + k2*r^4) + p1*(r^2 + 2*y'^2) + 2*p2*x'*y'   ... (7)

  where r^2 = x'^2 + y'^2                              ... (8)

  u_i = f2x * x'' + c2x,  v_i = f2y * y'' + c2y

Here, "k1" and "k2" in equation (7) are the distortion coefficients in the radial direction, and "p1" and "p2" those in the circumferential direction; that is, equation (7) corrects the distortion caused by the lens. In this embodiment, coefficients expanded to second order in each direction are considered as an example, but coefficients of third order or higher may also be considered. These distortion coefficients can be obtained by calibration: applying the above equations to the positions of the grid points in the multiple checkerboard images generated when the internal parameters of the distance camera 16 were obtained yields the coefficients "k1", "k2", "p1", and "p2". The order to which the coefficients are taken is preferably decided according to the distance between the camera and the subject. In general, the closer the camera is to the subject, the larger the distortion; accordingly, the closer that distance, the higher the order of the coefficients that should be included in the calculation, allowing more suitable distortion correction.
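Equations (7) and (8) can be written directly as code; the coefficient values below are illustrative, not calibration results.

```python
def distort(xp, yp, k1, k2, p1, p2):
    """Equations (7)-(8): apply radial (k1, k2) and circumferential (p1, p2)
    lens distortion to normalized coordinates (x', y')."""
    r2 = xp * xp + yp * yp                       # equation (8)
    radial = 1.0 + k1 * r2 + k2 * r2 * r2        # radial term of equation (7)
    xpp = xp * radial + 2.0 * p1 * xp * yp + p2 * (r2 + 2.0 * xp * xp)
    ypp = yp * radial + p1 * (r2 + 2.0 * yp * yp) + 2.0 * p2 * xp * yp
    return xpp, ypp

# with all coefficients zero the model reduces to the undistorted case
x0, y0 = distort(0.1, -0.2, 0.0, 0.0, 0.0, 0.0)

# illustrative coefficients (made-up values, not calibration results)
x1, y1 = distort(0.1, -0.2, 0.05, 0.001, 0.0005, 0.0005)
```

The distorted coordinates (x'', y'') are then pushed through equation (6) in place of (x', y') to obtain the corrected pixel position.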
The coordinate conversion unit 101 stores the distortion coefficients obtained in this way. When the three-dimensional coordinates (X_i, Y_i, Z_i) from the distance camera 16 are input and the coordinates (u_i, v_i) on the image captured by the image camera 15 are computed according to equation (3), using equations (4), (5), (7), and (8) yields coordinates on the image camera 15's image with the lens distortion already corrected. As shown in Fig. 8(b), this eliminates the deviation between the extraction points and the subject. As described above, in the imaging device 1 of this embodiment, when the portion showing the subject is cut out of the subject image, the processing is in principle based not on the density information of the image but on the distance information acquired by the distance camera 16. Moreover, in the imaging device 1 of this embodiment no operation is required of the user; the image processing unit 100 executes the processing automatically on the basis of the information given to it. The cut-out processing can therefore be performed with higher precision without depending on the user's skill. In the embodiment above, as shown in Fig. 2, the imaging device 1 containing the image camera 15 and the distance camera 16 was described as an example, but the invention may also be provided as the image processing unit 100 alone, or as a program for realizing the image processing unit 100. In that case, the internal parameters of the first camera that captured the subject image, the internal parameters of the second camera that acquired the distance information, and the external parameters of the first and second cameras must be acquired separately.
As a method of acquiring the external parameters, besides the stereo calibration described above, if the image camera 15 and the distance camera 16 are equipped with a positioning system such as GPS (Global Positioning System) and it is sufficiently accurate, that information may also be used. Specifically, when the image camera 15 and the distance camera 16 acquire the subject image and the distance information respectively, each simultaneously acquires, via its positioning system, its position and orientation at the time of acquisition and inputs them to the coordinate conversion unit 101. From the input position and orientation information, the coordinate conversion unit 101 can then derive the external parameters needed to convert the three-dimensional coordinate system of the distance camera 16 into the three-dimensional coordinate system of the image camera 15. From the input information, the rotation vector R due to the difference in orientation and the translation vector t due to the difference in position can each be obtained. Further, in the embodiment above, the portions outside the region corresponding to the extraction target region are deleted from the subject image, as shown in Figs. 7(a) and (b). Alternatively, the region corresponding to the extraction target region and the other regions may be saved as separate layers. That is, the image cutout unit 102, functioning as an image separation unit, separates at least the region of the subject image specified by the extraction target region from the other regions and stores them on a storage medium, whereby the effects of this embodiment are likewise obtained. In subsequent operations the user can then choose between versions with and without the background portion, improving convenience for the user.
Further, in the embodiment above, when the coordinates of the converted distance information are overlaid on the subject image, the points specified by those coordinates are placed on the subject image at intervals corresponding to the resolution of the converted distance information, that is, to the ratio between the resolution of the distance camera 16 and the resolution of the subject image, as shown in Figs. 6(a) and (b). In other words, in the example of Figs. 6(a) and (b), each point specified by the coordinates of the converted distance information is associated with a pixel of the subject image at an interval determined by that resolution ratio. Because the coordinates of the converted distance information are overlaid, with their resolution unchanged, on the higher-resolution subject image, they end up in the discrete state shown in Figs. 6(a) and (b). Alternatively, the resolution of the converted distance information may first be matched to that of the subject image before the overlay. For example, by treating each point specified by the coordinates of the converted distance information as a pixel and dividing each such pixel, the resolution of the converted distance information can be made to match that of the subject image. This variation is described below. Fig. 9(a) shows the state after the distance information of Fig. 3 has been converted by the coordinate conversion unit 101 into converted distance information. As shown in Fig. 9(a), the coordinates specified in Fig. 3 as (u1, v1), (u2, v2), ... are specified as the converted coordinates (u'1, v'1), (u'2, v'2), .... Fig. 9(b) shows the state in which each converted coordinate of Fig. 9(a) is treated as a pixel and each such pixel is divided into four. The point specified as (u'1, v'1) in Fig. 9(a) corresponds to the four points (u'11, v'11), (u'12, v'12), (u'13, v'13), (u'14, v'14) shown in Fig. 9(b). As shown in Figs. 10(a) and (b), each of these corresponds to one of the four pixels obtained by splitting the pixel specified by (u'1, v'1) at the original resolution in two both vertically and horizontally. Processed in this way, the result is no longer the discrete points of Figs. 6(a) and (b) but distance information of the same resolution as the subject image, with a 1:1 correspondence to every pixel of the subject image. Moreover, as shown in Fig. 9(b), each of the four points after division is associated with the pre-division distance "Z1". Consequently, when the image cutout unit 102 applies the threshold to the distance Z and extracts only the points whose objects are within the predetermined distance, the result matches the subject's contour as in Fig. 6(b) while the gaps between adjacent points are filled in, so the extraction target region can be obtained ideally. In the variations of Figs. 9(a), (b) and Figs. 10(a), (b), it is also preferable to perform the smoothing of rough contours caused by the pixel division, the labeling needed for noise removal, and so on. Further, in these variations, too, it is taken into account that the extraction target region may not coincide exactly with the actual subject's contour and may extend beyond it, so, as in the embodiment above, processing such as the conventional edge detection described earlier may be performed to delete the redundant region outside the subject's contour.
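The pixel-division approach described above, in which each point of the converted distance information is split so that its resolution matches the subject image and each sub-pixel keeps the pre-division distance, can be sketched as follows; the distance values and the 2x2 split factor are illustrative.

```python
import numpy as np

# converted distance information at the distance camera's resolution
# (metres); the values are illustrative
z = np.array([[1.0, 1.2],
              [3.5, 1.1]])

# split every pixel 2x2 so the distance map matches a subject image of
# twice the resolution; each sub-pixel keeps the pre-division distance
z_full = np.repeat(np.repeat(z, 2, axis=0), 2, axis=1)

# thresholding now yields a dense region instead of isolated points
mask = z_full <= 2.0
```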
Even in this case, since the image is still cut out along the subject's contour, edge detection can be performed with higher precision than the prior-art approach of detecting the subject's contour in the image generated by the image camera 15; this point is the same as in the embodiment above.

[Brief Description of the Drawings]
[Fig. 1] A block diagram of the hardware configuration of the imaging device according to the embodiment of the present invention.
[Fig. 2] A diagram of the functional configuration of the imaging device according to the embodiment of the present invention.
[Fig. 3] A diagram of an example of the distance information acquired by the distance camera according to the embodiment of the present invention.
[Fig. 4] A diagram of the principle of the image plane/three-dimensional space coordinate conversion function according to the embodiment of the present invention.
[Fig. 5] A diagram of the principle of the rotation/translation coordinate conversion function according to the embodiment of the present invention.
[Fig. 6] A diagram of the state in which the distance information according to the embodiment of the present invention is superimposed on the subject image.
[Fig. 7] A diagram of the extraction target region according to the embodiment of the present invention.
[Fig. 8] A diagram of the correction of lens distortion according to the embodiment of the present invention.
[Fig. 9] A diagram of an example of the converted distance information according to another embodiment of the present invention.
[Fig. 10] A diagram of the division of the points specified by the converted distance information according to another embodiment of the present invention.
[Description of Main Component Symbols]

1: imaging device;  10: CPU;  11: RAM;  12: ROM;  13: HDD;  14: I/F;  15: image camera;  16: distance camera;  17: LCD;  18: operation unit;  19: bus;  100: image processing unit;  101: coordinate conversion unit;  102: image cutout unit;  110: display control unit

Claims (1)

1. An imaging device, comprising:
an image capturing unit that generates, by imaging, a subject image showing a subject and a background;
a distance information generating unit that measures, for each part of a visual range including the subject and the background when that range is taken as an image, the distance to the object shown there, and generates distance information in which the coordinates on the image of the visual range are associated with the distances;
a coordinate conversion unit that converts the coordinates of the acquired distance information into coordinates on the subject image to generate converted distance information;
a coordinate extraction unit that extracts, from among the converted coordinates contained in the generated converted distance information, the coordinates whose associated distance satisfies a predetermined condition; and
an image separation unit that separates, in the subject image, the region specified by the extracted coordinates from the other regions and outputs the result.
2.
The imaging device according to claim 1, wherein the coordinate conversion unit comprises:
a first coordinate conversion function that, based on a first parameter containing information on the focal length and optical axis of the distance information generating unit, converts the coordinates of the distance information, that is, the coordinates on the image of the visual range, into coordinates in a three-dimensional space referenced to the distance information generating unit;
a second coordinate conversion function that, based on a second parameter derived from the difference between the coordinate axes of the three-dimensional space referenced to the distance information generating unit and those of the three-dimensional space referenced to the image capturing unit, converts the coordinates of the distance information, once converted into the three-dimensional space referenced to the distance information generating unit, into coordinates in the three-dimensional space referenced to the image capturing unit; and
a third coordinate conversion function that, based on a third parameter containing information on the focal length and optical axis of the image capturing unit, converts the coordinates of the distance information, once converted into the three-dimensional space referenced to the image capturing unit, into coordinates on the subject image.
3. The imaging device according to claim 2, wherein the coordinate conversion unit comprises a fourth coordinate conversion function that, based on a fourth parameter containing information on at least one of the radial distortion and the circumferential distortion of a lens included in the distance information generating unit or the image capturing unit, converts the coordinates of the distance information, once converted into the three-dimensional space referenced to the distance information generating unit or to the image capturing unit, into coordinates in which the radial or circumferential distortion of the lens has been corrected.
4. The imaging device according to any one of claims 1 to 3, wherein the coordinate extraction unit extracts, from among the converted coordinates contained in the generated converted distance information, the coordinates whose associated distance is equal to or less than a predetermined threshold.
5. The imaging device according to claim 4, wherein the image separation unit extracts the region of the subject image in which the subject is shown by erasing the image information of the regions other than the region specified by the extracted coordinates.
6. The imaging device according to any one of claims 1 to 5, wherein the resolution of the coordinates of the distance information is lower than the resolution of the subject image, and the image separation unit, in an image in which the extracted coordinates are drawn as pixels, divides those pixels so that the resolution of the extracted coordinates matches the resolution of the subject image, and specifies the region by means of the image drawn with the divided pixels.
7.
7. An image processing apparatus comprising: a distance information acquisition unit that, when a visual range including a subject and a background is taken as an image, measures the distance to the object displayed in each part of that image, and acquires distance information in which the coordinates on the image of the visual range are associated with the measured distances; a subject image acquisition unit that acquires a subject image in which the subject and the background are displayed; a coordinate conversion unit that converts the coordinates of the acquired distance information into coordinates on the subject image to generate converted distance information; a coordinate extraction unit that extracts, from among the converted coordinates included in the generated converted distance information, those coordinates whose associated distance satisfies a predetermined condition; and an image separation unit that separates, in the acquired subject image, the region specified by the extracted coordinates from the other regions, and outputs the result. 8. An image processing method characterized by: when a visual range including a subject and a background is taken as an image, measuring the distance to the object displayed in each part of that image, acquiring distance information in which the coordinates on the image of the visual range are associated with the measured distances, and storing the distance information in a storage medium; acquiring a subject image in which the subject and the background are displayed and storing it in the storage medium; converting the coordinates of the stored distance information into coordinates on the subject image to generate converted distance information and storing it in the storage medium; extracting, from among the converted coordinates included in the generated converted distance information, those coordinates whose associated distance satisfies a predetermined condition; and separating, in the acquired subject image, the region specified by the extracted coordinates from the other regions and storing the result in the storage medium. 9. An image processing program for causing an information processing apparatus to execute: a step of measuring the distance to the object displayed in each part of an image of a visual range including a subject and a background, acquiring distance information in which the coordinates on the image of the visual range are associated with the measured distances, and storing the distance information in a storage medium; a step of acquiring a subject image in which the subject and the background are displayed and storing it in the storage medium; a step of converting the coordinates of the acquired distance information into coordinates on the subject image to generate converted distance information and storing it in the storage medium; and a step of extracting, from among the converted coordinates included in the generated converted distance information, those coordinates whose associated distance satisfies a predetermined condition, separating, in the acquired subject image, the region specified by the extracted coordinates from the other regions, and storing the result in the storage medium.
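The processing flow recited in claims 7–9 — align the distance map with the subject image, extract the coordinates whose distance satisfies a condition, and separate the subject region from the background — can be sketched in a few lines. Plain nested lists stand in for image buffers, the nearest-neighbour subdivision corresponds to the pixel division described for the lower-resolution distance coordinates, and all names, values, and the at-or-below-threshold condition are illustrative assumptions, not the patent's implementation.

```python
def upsample_nearest(distance_map, factor):
    """Divide each low-resolution distance pixel into factor x factor
    sub-pixels so the map matches the subject-image resolution."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in distance_map for _ in range(factor)]

def separate_subject(image, distance_map, threshold, background=0):
    """Keep pixels whose measured distance is at or below the threshold
    (the subject); erase everything else to the background value."""
    return [[px if d <= threshold else background
             for px, d in zip(img_row, dist_row)]
            for img_row, dist_row in zip(image, distance_map)]

# Hypothetical example: a 2x4 subject image and a 1x2 distance map
# captured at half the resolution in each direction.
image = [[10, 11, 12, 13],
         [14, 15, 16, 17]]
distance_map = [[1.0, 5.0]]          # one distance value per coarse cell
dense = upsample_nearest(distance_map, 2)
subject = separate_subject(image, dense, threshold=2.0)
```

After upsampling, `dense` holds one distance per image pixel, and `separate_subject` retains only the near pixels — the left half of the toy image — while zeroing the far background, mirroring the erase-and-extract behaviour of the image separation unit.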
TW100130899A 2010-08-30 2011-08-29 Imaging device, image-processing device, image-processing method, and image-processing program TW201225658A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2010192717A JP2012050013A (en) 2010-08-30 2010-08-30 Imaging apparatus, image processing device, image processing method, and image processing program

Publications (1)

Publication Number Publication Date
TW201225658A true TW201225658A (en) 2012-06-16

Family

ID=45772748

Family Applications (1)

Application Number Title Priority Date Filing Date
TW100130899A TW201225658A (en) 2010-08-30 2011-08-29 Imaging device, image-processing device, image-processing method, and image-processing program

Country Status (3)

Country Link
JP (1) JP2012050013A (en)
TW (1) TW201225658A (en)
WO (1) WO2012029658A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016181672A1 (en) * 2015-05-11 2016-11-17 ノーリツプレシジョン株式会社 Image analysis device, image analysis method, and image analysis program
JP6574461B2 (en) * 2016-08-04 2019-09-11 株式会社フォーディーアイズ Point cloud data conversion system and method
WO2018025842A1 (en) * 2016-08-04 2018-02-08 株式会社Hielero Point group data conversion system, method, and program
JP7369588B2 (en) 2019-10-17 2023-10-26 Fcnt株式会社 Imaging equipment and imaging method
CN113469872B (en) * 2020-03-31 2024-01-19 广东博智林机器人有限公司 Region display method, device, equipment and storage medium
CN112669382A (en) * 2020-12-30 2021-04-16 联想未来通信科技(重庆)有限公司 Image-based distance determination method and device
CN113645378B (en) * 2021-06-21 2022-12-27 福建睿思特科技股份有限公司 Safe management and control portable video distribution and control terminal based on edge calculation
CN116401484B (en) * 2023-04-18 2023-11-21 河北长风信息技术有限公司 Method, device, terminal and storage medium for processing paper material in electronization mode

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005012307A (en) * 2003-06-17 2005-01-13 Minolta Co Ltd Imaging apparatus
JP2005300179A (en) * 2004-04-06 2005-10-27 Constec Engi Co Infrared structure diagnosis system
JP4836067B2 (en) * 2005-05-23 2011-12-14 日立造船株式会社 Deformation measurement method for structures
JP2008112259A (en) * 2006-10-30 2008-05-15 Central Res Inst Of Electric Power Ind Image verification method and image verification program
JP2010109923A (en) * 2008-10-31 2010-05-13 Nikon Corp Imaging apparatus

Also Published As

Publication number Publication date
WO2012029658A1 (en) 2012-03-08
JP2012050013A (en) 2012-03-08

Similar Documents

Publication Publication Date Title
TW201225658A (en) Imaging device, image-processing device, image-processing method, and image-processing program
CN110809786B (en) Calibration device, calibration chart, chart pattern generation device, and calibration method
JP4886716B2 (en) Image processing apparatus and method, and program
WO2014044126A1 (en) Coordinate acquisition device, system and method for real-time 3d reconstruction, and stereoscopic interactive device
CN109076200A (en) The calibration method and device of panoramic stereoscopic video system
US20080158340A1 (en) Video chat apparatus and method
KR102225617B1 (en) Method of setting algorithm for image registration
JP2015035658A (en) Image processing apparatus, image processing method, and imaging apparatus
JP5852093B2 (en) Video processing apparatus, video processing method, and program
JP2011160421A (en) Method and apparatus for creating stereoscopic image, and program
CN107209949B (en) Method and system for generating magnified 3D images
TWI501193B (en) Computer graphics using AR technology. Image processing systems and methods
JPWO2020075252A1 (en) Information processing equipment, programs and information processing methods
US10621694B2 (en) Image processing apparatus, system, image processing method, calibration method, and computer-readable recording medium
JP6853928B2 (en) 3D moving image display processing device and program
JP2011095131A (en) Image processing method
JP5925109B2 (en) Image processing apparatus, control method thereof, and control program
JP2014002489A (en) Position estimation device, method, and program
JP2011118767A (en) Facial expression monitoring method and facial expression monitoring apparatus
JP2009186369A (en) Depth information acquisition method, depth information acquiring device, program, and recording medium
JP6641485B2 (en) Position specifying device and position specifying method
JP4351090B2 (en) Image processing apparatus and image processing method
JP6292785B2 (en) Image processing apparatus, image processing method, and program
JP6843319B2 (en) Information processing equipment, programs and information processing methods
JP2009237652A (en) Image processing apparatus and method, and program