TW201015491A - 3D depth generation by vanishing line detection - Google Patents

3D depth generation by vanishing line detection

Info

Publication number
TW201015491A
TW201015491A TW97138280A
Authority
TW
Taiwan
Prior art keywords
depth information
boundary
generating
dimensional image
Prior art date
Application number
TW97138280A
Other languages
Chinese (zh)
Inventor
Liang-Gee Chen
Chao-Chung Cheng
Yi-Min Tsai
Ling-Hsiu Huang
Original Assignee
Himax Tech Ltd
Univ Nat Taiwan
Priority date
Filing date
Publication date
Application filed by Himax Tech Ltd, Univ Nat Taiwan filed Critical Himax Tech Ltd
Priority to TW97138280A priority Critical patent/TW201015491A/en
Publication of TW201015491A publication Critical patent/TW201015491A/en

Landscapes

  • Image Analysis (AREA)

Abstract

A system and method of generating three-dimensional (3D) depth information is disclosed. The vanishing point of a two-dimensional (2D) input image is detected based on vanishing lines. The 2D image is classified and segmented into structures based on detected edges. The classified structures are then respectively assigned depth information.

Description

VII. Designated representative drawing:
(1) The designated representative drawing of this application is Figure 1.
(2) Brief description of the reference numerals in the representative drawing: 100 3D depth information generation system; 10 input device; 11 line detection unit; 12 vanishing point detection unit; 13 boundary feature extraction unit; 14 structure classification unit; 15 depth assignment unit; 16 output device.

VIII. If the application contains a chemical formula, the formula that best characterizes the invention: none.

IX. Description of the Invention

[Technical Field]
The present invention relates to the generation of three-dimensional (3D) depth information, and particularly to techniques that detect vanishing lines in order to generate 3D depth information.

[Prior Art]
When a three-dimensional object is projected onto a two-dimensional (2D) image plane by a still or video camera, 3D depth information is lost, because such a projection is a non-unique, many-to-one mapping. In other words, the depth of an image point cannot be determined from the projected point alone. To obtain a complete or approximate reproduction of a 3D representation, the depth information must be recovered or generated for use in image enhancement, image restoration, image synthesis, or image display.

One traditional way of generating 3D depth information detects the vanishing point at which the vanishing lines of an image converge, and then assigns depth relative to that point: the closer a point lies to the vanishing point, the larger the depth value it receives. In other words, depth is assigned as a gradient. The drawback of this method is that it does not take prior knowledge about the different regions of the image into account, so image points at the same distance from the vanishing point but in different regions are monotonically given the same depth value, as the sketch below illustrates.

Another traditional method distinguishes regions by pixel value and color, and assigns a different depth value to each region; for example, a region with larger pixel values is assigned a larger depth value. The drawback of this method is that it ignores boundary information, which is of considerable importance to the human visual system, so image points that actually lie at different depths are erroneously given the same depth value because they share the same pixel value and color.

Since the traditional methods above fail to generate 3D depth information faithfully or correctly, a system and method for generating 3D depth information that faithfully and correctly reproduces or approximates a 3D representation is needed.
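For concreteness, here is a minimal sketch of the first traditional method described above, assuming NumPy; the function name, the normalization, and the 8-bit depth range are illustrative choices, not details taken from the patent. It makes the stated weakness visible: depth depends only on distance to the vanishing point.

```python
import numpy as np

def traditional_gradient_depth(height, width, vp_xy, max_depth=255):
    """Assign depth purely as a gradient around the vanishing point vp_xy."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - vp_xy[0], ys - vp_xy[1])
    # Monotone in distance alone: two pixels equally far from the vanishing
    # point get the same depth even when they belong to different regions,
    # which is the drawback the invention addresses.
    return (max_depth * (1.0 - dist / dist.max())).astype(np.uint8)
```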
[Summary of the Invention]
In view of the above, one objective of the present invention is to propose a novel system and method for generating 3D depth information that faithfully and correctly reproduces or approximates a 3D representation.

According to an embodiment, the present invention provides a system and method for generating 3D depth information. The vanishing point of a 2D image is determined from the vanishing lines in the image. Based on detected boundaries, the 2D image is classified and segmented into a number of structures, and depth information is then assigned to each classified structure, so that a 3D representation can be faithfully and correctly reproduced or approximated.

[Embodiments]
Figure 1 shows a 3D depth information generation system 100 according to an embodiment of the present invention. To aid understanding, exemplary images, including the original image, intermediate processed images, and the resulting image, are also shown in the figure. Figure 2 shows the steps of the 3D depth information generation method of the embodiment.

The input device 10 provides or receives one or more 2D (planar) input images (step 20) for the image/video processing of this embodiment. The input device 10 may be an opto-electronic device that projects three-dimensional objects onto a two-dimensional image plane. In this embodiment, the input device 10 may be a still camera that captures a 2D image, or a video camera that captures multiple images. In another embodiment, the input device 10 may be a pre-processing device that performs one or more image-processing tasks, such as image enhancement, image restoration, image analysis, image compression, or image synthesis.
Furthermore, the input device 10 may further include a storage device (for example, a semiconductor memory or a hard disk) for storing the images processed by the pre-processing device. As noted above, 3D depth information is lost when a three-dimensional object is projected onto a two-dimensional image plane. The following therefore details how the remaining blocks of the 3D depth information generation system 100 process the 2D image provided by the input device 10.

The line detection unit 11 processes the 2D image to detect or identify the lines in the image (step 21), particularly the vanishing lines. In this specification, the term "unit" may denote a circuit, a program, or a combination of the two. The auxiliary image associated with the line detection unit 11 in the figure shows the detected vanishing lines superimposed on the original image. In a preferred embodiment, the vanishing lines are detected with the Hough transform, which is a frequency-domain technique; however, other frequency-domain transforms (such as the fast Fourier transform (FFT)) or spatial-domain processing may be used instead. The Hough transform is a feature-extraction technique disclosed in U.S. Patent No. 3,069,654, "Method and Means for Recognizing Complex Patterns," by Paul Hough, and in "Use of the Hough Transformation to Detect Lines and Curves in Pictures," Comm. ACM, Vol. 15, pp. 11-15 (January 1972), by Richard Duda and Peter Hart. It is particularly suited to identifying straight lines and curves in noisy or degraded images, and in this embodiment it effectively detects or identifies the lines in the image, particularly the vanishing lines.
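As a concrete illustration of step 21, a minimal sketch of Hough-based line detection follows, assuming OpenCV and NumPy; the probabilistic Hough variant, the Canny pre-step used only to obtain the binary edge map the transform expects, and all threshold values are illustrative choices rather than details fixed by the patent.

```python
import cv2
import numpy as np

def detect_vanishing_line_segments(image_bgr):
    """Detect line segments (x1, y1, x2, y2) that may act as vanishing lines."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # binary edge map for the Hough transform
    # Probabilistic Hough transform with 1-pixel / 1-degree accumulator bins.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                               minLineLength=60, maxLineGap=10)
    if segments is None:
        return np.empty((0, 4), dtype=np.float64)
    segments = segments.reshape(-1, 4).astype(np.float64)
    # Keep oblique segments: near-horizontal and near-vertical lines seldom
    # converge toward a useful vanishing point.
    angles = np.abs(np.degrees(np.arctan2(segments[:, 3] - segments[:, 1],
                                          segments[:, 2] - segments[:, 0])))
    keep = ((angles > 15) & (angles < 75)) | ((angles > 105) & (angles < 165))
    return segments[keep]
```

Figure 3 shows vanishing-line detection according to another embodiment of the present invention. In this embodiment, edge detection 110 is performed first, for example with the Sobel edge detection technique. A Gaussian low-pass filter is then used to reduce noise (block 112). In the following block 114, edges stronger than a preset threshold are retained, while the remaining edges are discarded. Adjacent but unconnected pixels are then grouped together (block 116), and in block 118 the grouped pixels are further linked through their endpoints, yielding the desired vanishing lines.

Next, the vanishing point detection unit 12 determines the vanishing point from the vanishing lines detected by the line detection unit 11 (step 22). In general, the vanishing point is the point at which the detected lines, or their extensions, intersect and converge. The auxiliary image associated with the vanishing point detection unit 12 in the figure shows the vanishing point superimposed on the original image; a sketch of one way to compute it follows below.

A minimal sketch of step 22, assuming NumPy and a set of segments shaped (N, 4) such as the Hough sketch above returns: since noisy lines rarely pass through a single pixel, the vanishing point is estimated here as the least-squares intersection of the extended lines. The least-squares formulation is an assumption of this sketch; the patent only requires that the lines or their extensions converge at the point.

```python
import numpy as np

def estimate_vanishing_point(segments):
    """Least-squares intersection (x, y) of the extended line segments."""
    p1 = segments[:, 0:2]
    p2 = segments[:, 2:4]
    d = p2 - p1
    # Unit normal of each line; the line through p1 with direction d is
    # exactly the set {p : n . p = n . p1}.
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    c = np.sum(n * p1, axis=1)
    # Minimize sum_i (n_i . p - c_i)^2 over p, i.e. solve n @ p ~= c.
    vp, *_ = np.linalg.lstsq(n, c, rcond=None)
    return vp
```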
Figures 4A through 4E show various examples of how the vanishing lines converge at the vanishing point: in Figure 4A the vanishing point lies at the left, in Figure 4B at the right, in Figure 4C at the top, in Figure 4D at the bottom, and in Figure 4E in the interior of the image.

In the other (bottom) path of the 3D depth information generation system 100 (Figure 1), the 2D image is processed by the boundary feature extraction unit 13 to detect or identify the boundaries between structures or objects (step 23). As the line detection unit 11 and the boundary feature extraction unit 13 have some overlapping functions, the two units may be merged into, and share, a single line/boundary detection unit. In a preferred embodiment, the boundaries are extracted with a Canny edge filter, a preferred boundary feature extraction or detection algorithm developed by John F. Canny and published in "A Computational Approach to Edge Detection," IEEE Trans. Pattern Analysis and Machine Intelligence, 8:679-698 (1986). The Canny edge filter is particularly suited to noisy boundaries, and in this embodiment it effectively extracts the boundary features, as shown by the auxiliary image associated with the boundary feature extraction unit 13.

Next, the structure classification unit 14 segments the entire image into a number of structures (step 24) according to the boundary features provided by the boundary feature extraction unit 13. In particular, the structure classification unit 14 adopts a classification-based segmentation technique, so that objects with similar texture are connected into the same structure. As shown by the auxiliary image associated with the structure classification unit 14, the entire image is segmented into four structures or segments: the ceiling, the ground, a right vertical plane, and a left vertical plane. The patterns used by this classification-based segmentation are not limited to those above; for an image taken outdoors, for example, the entire image may be classified into the following structures: sky, ground, vertical planes, and horizontal planes.

In a preferred embodiment, the segmentation or classification of the structure classification unit 14 may be performed with a clustering technique, such as the k-means algorithm. First, a number of clusters are determined from the luminance histogram of the image. The distance of each pixel is then computed, so that similar pixels separated by short distances are gathered into the same cluster, thereby forming the segmented or classified structures.

The depth assignment unit 15 then assigns depth information to each classified structure (step 25). In general, the depth information of each classified structure is assigned in a different manner, although two or more structures may share the same assignment. According to prior knowledge, the ground is assigned a smaller depth value than the ceiling or sky. The depth assignment unit 15 also generally applies a gradient, anchored at the vanishing point, so that pixels closer to the vanishing point have larger depth values; a combined sketch of steps 24 and 25 follows below.
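A minimal combined sketch of steps 24 and 25, assuming OpenCV and NumPy: luminance-based k-means stands in for the structure classification unit 14, and each resulting structure receives its own base depth, modulated by the gradient toward the vanishing point, in the spirit of the depth assignment unit 15. The cluster count, the brightness-ranked base depths, and the equal blending weights are illustrative assumptions; the patent itself orders structures by prior knowledge, such as assigning the ground a smaller depth value than the ceiling or sky.

```python
import cv2
import numpy as np

def classify_and_assign_depth(image_bgr, vp_xy, k=4, max_depth=255):
    """Segment by luminance k-means (step 24), then assign per-structure depth (step 25)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    samples = gray.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(h, w)
    # Base depth per structure, here ranked by cluster brightness (an
    # assumption; the patent ranks structures by prior knowledge instead).
    base = np.empty(k)
    base[np.argsort(centers.ravel())] = np.linspace(0.2, 0.8, k)
    # Gradient term of step 25: pixels nearer the vanishing point are deeper.
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - vp_xy[0], ys - vp_xy[1])
    gradient = 1.0 - dist / dist.max()
    depth = (0.5 * base[labels] + 0.5 * gradient) * max_depth
    return depth.astype(np.uint8)
```

The resulting 8-bit map is the kind of per-structure depth image that could then be handed to the output device 16 described next.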
The output device 16 receives the 3D depth information from the depth assignment unit 15 and produces an output image (step 26). In one embodiment, the output device 16 may be a display device for displaying the received depth information or making it available for viewing. In another embodiment, the output device 16 may be a storage device, such as a semiconductor memory or hard disk, for storing the received depth information. Furthermore, the output device 16 may further include a post-processing device that performs one or more kinds of image processing, such as image enhancement, image restoration, image analysis, image compression, or image synthesis.

Compared with the traditional methods for generating 3D depth information discussed in the prior art section, the embodiments of the present invention described above can more faithfully and correctly reproduce or approximate a 3D representation.

The above are only preferred embodiments of the present invention and are not intended to limit the scope of the claims; all equivalent changes or modifications that do not depart from the spirit of the invention should fall within the scope of the appended claims.

[Brief Description of the Drawings]
Figure 1 shows a 3D depth information generation system according to an embodiment of the present invention.
Figure 2 shows the steps of the 3D depth information generation method of the embodiment.
Figure 3 shows vanishing-line detection according to another embodiment of the present invention.
Figures 4A through 4E show various examples of how the vanishing lines converge at the vanishing point.

[Description of Main Element Symbols]
100 3D depth information generation system

10 input device
11 line detection unit
12 vanishing point detection unit
13 boundary feature extraction unit
14 structure classification unit
15 depth assignment unit
16 output device
20-26 steps of the method of the embodiment
110 edge detection
112 Gaussian low-pass filter
114 thresholding
116 grouping of adjacent pixels
118 linking

Claims (1)

X. Claims

1. A system for generating 3D depth information, comprising:
a vanishing point determination device for determining a vanishing point in a 2D image;
a classification device for classifying a plurality of structures; and
a depth assignment unit for assigning depth information to each of the classified structures.

2. The system for generating 3D depth information of claim 1, wherein the vanishing point determination device comprises:
a line detection unit for detecting vanishing lines in the 2D image; and
a vanishing point detection unit that determines the vanishing point according to the detected vanishing lines.

3. The system for generating 3D depth information of claim 2, wherein the detected vanishing lines, or their extensions, converge at the vanishing point.

4. The system for generating 3D depth information of claim 2, wherein the line detection unit uses a Hough transform to detect the vanishing lines.

5. The system for generating 3D depth information of claim 2, wherein the line detection unit comprises:
an edge detection unit for detecting edges in the 2D image;
a Gaussian low-pass filter for reducing noise in the detected edges;
a thresholding device for deleting edges smaller than a preset threshold while retaining edges larger than the preset threshold;
a grouping device for grouping together adjacent but unconnected pixels of the detected edges; and
an endpoint linking device for linking the endpoints of the grouped pixels to form the vanishing lines.

6. The system for generating 3D depth information of claim 1, wherein the classification device comprises:
a boundary feature extraction unit for detecting boundaries in the 2D image; and
a structure classification unit that segments the 2D image into a plurality of structures according to the detected boundaries.

7. The system for generating 3D depth information of claim 6, wherein the boundary feature extraction unit uses a Canny edge filter to detect the boundaries.

8. The system for generating 3D depth information of claim 1, wherein the structure classification unit uses a clustering technique to perform the segmentation.

9. The system for generating 3D depth information of claim 1, wherein the depth assignment unit assigns a smaller depth information value to a bottom structure than to a top structure.
10. The system for generating 3D depth information of claim 1, further comprising an input device that projects three-dimensional objects onto a two-dimensional image plane.

11. The system for generating 3D depth information of claim 10, wherein the input device further stores the 2D image.

12. The system for generating 3D depth information of claim 1, further comprising an output device that receives the depth information.

13. The system for generating 3D depth information of claim 12, wherein the output device further stores or displays the depth information.

14. A method of generating 3D depth information, comprising:
determining a vanishing point in a 2D image;
classifying a plurality of structures; and
assigning depth information to each of the classified structures.

15. The method of generating 3D depth information of claim 14, wherein determining the vanishing point comprises:
detecting vanishing lines in the 2D image; and
determining the vanishing point according to the detected vanishing lines.

16. The method of generating 3D depth information of claim 15, wherein the detected vanishing lines, or their extensions, converge at the vanishing point.

17. The method of generating 3D depth information of claim 15, wherein detecting the vanishing lines uses a Hough transform.
18. The method of generating 3D depth information of claim 15, wherein detecting the vanishing lines comprises:
detecting edges in the 2D image;
reducing noise in the detected edges;
deleting edges smaller than a preset threshold while retaining edges larger than the preset threshold;
grouping together adjacent but unconnected pixels of the detected edges; and
linking the endpoints of the grouped pixels to form the vanishing lines.

19. The method of generating 3D depth information of claim 14, wherein the classifying comprises:
detecting boundaries in the 2D image; and
segmenting the 2D image into a plurality of structures according to the detected boundaries.

20. The method of generating 3D depth information of claim 19, wherein detecting the boundaries uses a Canny edge filter.

21. The method of generating 3D depth information of claim 14, wherein the classifying uses a clustering technique.

22. The method of generating 3D depth information of claim 14, wherein, in assigning the depth information, a bottom structure is assigned a smaller depth information value than a top structure.

23. The method of generating 3D depth information of claim 14, further comprising a step of projecting three-dimensional objects onto a two-dimensional image plane.

24. The method of generating 3D depth information of claim 23, further comprising a step of storing the 2D image.

25. The method of generating 3D depth information of claim 24, further comprising a step of receiving the depth information.

26. The method of generating 3D depth information of claim 25, further comprising a step of storing or displaying the depth information.
TW97138280A 2008-10-03 2008-10-03 3D depth generation by vanishing line detection TW201015491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW97138280A TW201015491A (en) 2008-10-03 2008-10-03 3D depth generation by vanishing line detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW97138280A TW201015491A (en) 2008-10-03 2008-10-03 3D depth generation by vanishing line detection

Publications (1)

Publication Number Publication Date
TW201015491A true TW201015491A (en) 2010-04-16

Family

ID=44830064

Family Applications (1)

Application Number Title Priority Date Filing Date
TW97138280A TW201015491A (en) 2008-10-03 2008-10-03 3D depth generation by vanishing line detection

Country Status (1)

Country Link
TW (1) TW201015491A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9066086B2 (en) 2010-12-08 2015-06-23 Industrial Technology Research Institute Methods for generating stereoscopic views from monoscopic endoscope images and systems using the same
CN106469445A (en) * 2015-08-18 2017-03-01 青岛海信医疗设备股份有限公司 A kind of calibration steps of 3-D view, device and system

Similar Documents

Publication Publication Date Title
US11080932B2 (en) Method and apparatus for representing a virtual object in a real environment
US10373380B2 (en) 3-dimensional scene analysis for augmented reality operations
CN108257139B (en) RGB-D three-dimensional object detection method based on deep learning
US10573018B2 (en) Three dimensional scene reconstruction based on contextual analysis
JP5822322B2 (en) Network capture and 3D display of localized and segmented images
US20100079453A1 (en) 3D Depth Generation by Vanishing Line Detection
US7680323B1 (en) Method and apparatus for three-dimensional object segmentation
US9741121B2 (en) Photograph localization in a three-dimensional model
CN102509104B (en) Confidence map-based method for distinguishing and detecting virtual object of augmented reality scene
WO2013073167A1 (en) Image processing device, imaging device, and image processing method
WO2018040982A1 (en) Real time image superposition method and device for enhancing reality
CN110363179B (en) Map acquisition method, map acquisition device, electronic equipment and storage medium
JP2011134012A (en) Image processor, image processing method for the same and program
Pahwa et al. Locating 3D object proposals: A depth-based online approach
US9087381B2 (en) Method and apparatus for building surface representations of 3D objects from stereo images
Ji et al. An automatic 2D to 3D conversion algorithm using multi-depth cues
TW201015491A (en) 3D depth generation by vanishing line detection
CN115063578A (en) Method and device for detecting and positioning target object in chip image and storage medium
Kowdle et al. Scribble based interactive 3d reconstruction via scene co-segmentation
Engels et al. Automatic occlusion removal from façades for 3D urban reconstruction
Kim et al. Wide-baseline image matching based on coplanar line intersections
KR20060055536A (en) Image object processing
Talker et al. independent book spine segmentation
Toldo et al. Photo-consistent planar patches from unstructured cloud of points
Bobkov et al. Noise-resistant Unsupervised Object Segmentation in Multi-view Indoor Point Clouds.