TWM535848U - Apparatus for combining with wavelet transformer and edge detector to generate a depth map from a single image - Google Patents


Info

Publication number
TWM535848U
TWM535848U
Authority
TW
Taiwan
Prior art keywords
image
depth
wavelet transform
edge detection
unit
Prior art date
Application number
TW105210334U
Other languages
Chinese (zh)
Inventor
Yu-Hsiang Chen
Der-Feng Huang
Ting-Wen Huang
Original Assignee
Lunghwa Univ Of Science And Tech
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lunghwa Univ Of Science And Tech filed Critical Lunghwa Univ Of Science And Tech
Priority to TW105210334U priority Critical patent/TWM535848U/en
Publication of TWM535848U publication Critical patent/TWM535848U/en

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Description

Apparatus for generating a depth map from a single image by combining wavelet transform and edge detection

The present utility model relates to video systems, and more particularly to a depth map generation device for converting two-dimensional (2D) image data into three-dimensional (3D) image data.

Since the release of the 3D film Avatar in 2009, audiences have pursued the entertainment value of 3D display technology; the 3D broadcast of the 2010 FIFA World Cup and the virtual-reality headsets of 2016 all show the entertainment industry shifting from 2D to 3D. No longer satisfied with the visual effects of 2D, viewers have turned to 3D display technology. With the commercialization of 3D displays and the growing number of 3D content services, user demand for 3D has risen accordingly. The development of new 3D content, however, has made no significant progress. By contrast, an enormous amount of existing 2D images and video, including images shot by individuals, is waiting to be put to effective use through conversion into 3D video applications.

To that end, Chinese Patent Publication No. CN 103559701 A, "Two-dimensional single-view image depth estimation method based on DCT coefficient entropy," proposes predicting depth from a single image having depth of field: for each pixel of the image to be processed, a window centered on that pixel is taken as a sub-image; a wavelet transform is performed on these sub-images, the wavelet coefficient values are quantized, and their coefficient entropy is computed as the blur measure of that pixel; the entropy is then linearly mapped to an 8-bit depth range to obtain a pixel-level depth map. Likewise, Chinese Patent No. CN 10247539B, "Method for converting video images from 2D to 3D," uses the wavelet transform to predict depth from a single image having depth of field: the original image is wavelet-transformed to extract its high-frequency coefficients and divided into blocks, and the number of non-zero coefficients in each block is counted as that block's blur measure. At the same time, based on the color features of the original image, the image is color-segmented into three pixel sets; the average blur of each set is compared, the set with the largest value is taken as foreground, the set with the second-largest value as middle ground, and the set with the smallest value as background. Finally, a system with preset depths of field assigns different depth values to the foreground, middle ground, and background to obtain the depth map.
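The block-based blur measure summarized above (counting non-zero high-frequency wavelet coefficients per block) can be sketched in a few lines. This is an illustrative reconstruction, not code from either patent; the function names, the 1-D Haar detail transform, and the 4x4 test blocks are all assumptions.

```python
# Sketch of a block-based blur measure: blurred regions lose high-frequency
# energy, so they produce fewer non-zero wavelet detail coefficients.

def haar_highpass(row):
    """One level of a 1-D Haar transform; return only the detail
    (high-frequency) coefficients, which shrink toward zero in blurred rows."""
    return [(row[i] - row[i + 1]) / 2.0 for i in range(0, len(row) - 1, 2)]

def block_blur_score(block, eps=1e-6):
    """Count non-zero detail coefficients in a block: a higher count
    means more preserved edge energy, i.e. less blur."""
    count = 0
    for row in block:
        count += sum(1 for c in haar_highpass(row) if abs(c) > eps)
    return count

sharp = [[0, 255, 0, 255] for _ in range(4)]       # strong edges in every row
smooth = [[128, 128, 128, 128] for _ in range(4)]  # flat region, no detail
```

Comparing `block_blur_score(sharp)` against `block_blur_score(smooth)` shows why the count separates sharp (foreground-like) blocks from defocused ones.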

The conventional depth map generation methods known from the prior art above have corresponding drawbacks. For a window centered on a pixel, the window size must be set manually and cannot be adjusted automatically for different images. Furthermore, using the color features of the original image to segment it into three pixel sets divides the image into only foreground, middle ground, and background, which clearly differs from the multi-layered depth information present in the rich images we normally see, so a correct depth map cannot be produced.

In view of this, the creator, drawing on many years of work and research in the field, has studied and analyzed existing depth map generation methods in the hope of creating one that remedies these conventional drawbacks. Accordingly, the main objective of the present utility model is an apparatus that combines wavelet transform and edge detection to build a depth map from a single image, without manual intervention and consistent with the depth information perceived by the human eye.

To achieve the above objective, the apparatus for building a single-image depth map by combining wavelet transform and edge detection according to the present utility model has: an image capture unit for capturing or inputting an original image; an image analysis unit, communicatively linked to the image capture unit, for executing an image analysis algorithm, which may be a wavelet transform, an edge detection, or a combination thereof, and analyzing the original image captured or input by the image capture unit; an image synthesis unit, communicatively linked to the image analysis unit, for performing image synthesis on the plurality of analysis results produced by the image analysis unit's analysis of the original image to generate a defocus map; and a depth calculation unit, communicatively linked to the image synthesis unit, which executes a depth prediction algorithm based on the plurality of analysis results and, upon completion, executes a depth diffusion algorithm, after which a depth map is produced.

1‧‧‧Apparatus for building a single-image depth map by combining wavelet transform and edge detection

11‧‧‧Image capture unit

12‧‧‧Image analysis unit

13‧‧‧Image synthesis unit

14‧‧‧Depth calculation unit

S1‧‧‧Input original image step

S2‧‧‧Image analysis step

S22‧‧‧Edge detection step

S23‧‧‧Wavelet transform step

S231‧‧‧Convert to grayscale image

S232‧‧‧Find local maximum values step

S233‧‧‧Map local maxima to wavelet coefficient values

S234‧‧‧Threshold calculation result

S3‧‧‧Build defocus map step

S4‧‧‧Depth prediction step

S41‧‧‧Count local maxima of histogram step

S42‧‧‧Build window from count step

S43‧‧‧Compute blur measure on wavelet transform result step

S44‧‧‧Depth prediction result step

S5‧‧‧Depth diffusion step

S6‧‧‧Generate depth map step

Fig. 1 is a schematic structural diagram of the present utility model.

Fig. 2 is a flowchart of the steps of the present utility model.

Fig. 3 is a schematic diagram of an implementation of the present utility model.

Fig. 4 is a schematic diagram of an implementation of the present utility model (1).

Fig. 5 is a schematic diagram of an implementation of the present utility model (2).

Fig. 6 is a schematic diagram of an embodiment of the present utility model.

Fig. 7 is a schematic diagram of an embodiment of the present utility model (1).

Fig. 8 is a schematic diagram of an embodiment of the present utility model (2).

In the following description, the term "depth map" refers to a two-dimensional matrix of depth values in which each depth value corresponds to a relative position in a scene and represents the distance from a specific reference position to that relative position. If every pixel of a 2D image has its own depth value, the 2D image can be displayed using 3D techniques.
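As a toy illustration of this definition: a depth map is simply a matrix the same size as the image, one depth value per pixel. The image, depth values, and the depth-to-disparity mapping below are invented for illustration; the disparity formula is a common depth-image-based-rendering convention, not something specified in this document.

```python
# A 2x4 image with a matching depth map (0 = far, 255 = near). With a depth
# per pixel, a stereo pair can be synthesized by shifting pixels horizontally.

image = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
depth_map = [
    [0, 0, 255, 255],   # right half of the scene is closer to the viewer
    [0, 0, 255, 255],
]

def disparity(d, max_shift=8):
    """Map an 8-bit depth value to a horizontal pixel shift for a stereo view
    (illustrative linear mapping)."""
    return round(d / 255.0 * max_shift)

shifts = [[disparity(d) for d in row] for row in depth_map]
```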

To give the examiners a fuller understanding of the objectives, technical means, and effects this utility model seeks to achieve, preferred embodiments are described below together with the drawings.

Please refer to Fig. 1, a schematic structural diagram of the present utility model. As shown, the apparatus 1 for building a single-image depth map by combining wavelet transform and edge detection mainly comprises an image capture unit 11, an image analysis unit 12, an image synthesis unit 13, and a depth calculation unit 14. The image capture unit 11 captures an original image, which is a 2D image or video. The image analysis unit 12, communicatively linked to the image capture unit 11, receives the original image and then executes a plurality of image analysis algorithms, each of which may be a wavelet transform, an edge detection, or a combination thereof; the wavelet transform may be a discrete or a continuous wavelet transform, and the edge detection may be one of the Roberts Cross, Prewitt, Sobel, Canny, or compass operators, Marr-Hildreth, or the wavelet transform itself. Any method capable of detecting edges in the original image falls within the scope of this utility model, without limitation. The image synthesis unit 13, communicatively linked to the image analysis unit 12, performs image synthesis on the image results analyzed by the image analysis unit 12 to generate a defocus map. The depth calculation unit 14, communicatively linked to the image synthesis unit 13, executes a depth prediction algorithm based on the image results analyzed by the image analysis unit 12; the prediction result is synthesized through the image synthesis unit 13, after which the depth calculation unit 14 executes a depth diffusion algorithm to produce a depth map matching the original image.
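The edge detection stage may use any of the operators listed above. As a minimal sketch, here is a plain-Python Sobel gradient magnitude, one of the named options; the helper names and the 4x4 test image are illustrative only and the text does not prescribe a specific operator.

```python
# Sobel edge detection: convolve with horizontal/vertical gradient kernels
# and take the gradient magnitude. Borders are left at zero for simplicity.

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

img = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]
edges = sobel_magnitude(img)  # interior pixels respond at the vertical edge
```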

As described above, and referring to Fig. 2, the flowchart of the steps of the present utility model, the steps are as follows. An input original image step S1: the original image is input through the image capture unit 11. An image analysis step S2, comprising a wavelet transform step S23 and an edge detection step S22, performs a wavelet transform analysis and an edge detection analysis on the original image. In the wavelet transform step S23, the image analysis unit 12 executes a wavelet transform algorithm to produce a wavelet transform analysis result; the algorithm may be, without limitation, a discrete or continuous wavelet transform. In the edge detection step S22, the image analysis unit 12 executes an edge detection algorithm to produce an edge detection result; the edge detection may be one of the Roberts Cross, Prewitt, Sobel, Canny, or compass operators, Marr-Hildreth, or the wavelet transform. See also Fig. 3, which shows the binarized wavelet transform analysis result. A build defocus map step S3: the image synthesis unit 13 synthesizes the wavelet transform analysis result and the edge detection result to produce a defocus map, shown in Fig. 4; in this synthesis, the edge detection result is mapped onto the wavelet transform analysis result to extract, for each pixel in the edge detection result, its coefficient value in the wavelet transform analysis result. A depth prediction step S4: the depth calculation unit 14 executes a depth prediction algorithm on the wavelet transform result of the original image from the image analysis step S2 to predict the depth of the original image; after the depth calculation unit 14 has executed the depth prediction algorithm, the result is synthesized with the defocus map produced in step S3 through the image synthesis unit 13 to produce a defocus depth map. In this synthesis, the depth prediction result is matched to the defocus map and substituted into it. A depth diffusion step S5 executes a depth diffusion algorithm based on the defocus depth map; the depth diffusion algorithm may be the Laplacian interpolation technique or a global interpolation algorithm. Finally, in a generate depth map step S6, after the depth calculation unit 14 has executed the depth diffusion algorithm, a depth map is produced, as shown in Fig. 5.
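The combination in step S3 and the fill-in of step S5 can be sketched as follows. This is a toy reconstruction under stated assumptions: the 3x3 data is invented, and a single pass of neighbor averaging stands in for the Laplacian/global interpolation the text names, which it only approximates.

```python
# S3: keep the wavelet-coefficient value only at pixels where the edge
# detector fired, giving a sparse defocus map (None = unknown).
wavelet_coeffs = [
    [0.1, 0.9, 0.2],
    [0.0, 0.8, 0.1],
    [0.2, 0.7, 0.0],
]
edge_mask = [      # 1 marks edge pixels (a vertical edge down the middle)
    [0, 1, 0],
    [0, 1, 0],
    [0, 1, 0],
]
defocus = [[c if m else None for c, m in zip(cr, mr)]
           for cr, mr in zip(wavelet_coeffs, edge_mask)]

# S5 stand-in: fill each unknown pixel with the mean of its known
# 4-neighbors (a crude single-pass proxy for Laplacian interpolation).
def diffuse_once(grid):
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(h):
        for x in range(w):
            if grid[y][x] is None:
                nbrs = [grid[y + dy][x + dx]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w
                        and grid[y + dy][x + dx] is not None]
                if nbrs:
                    out[y][x] = sum(nbrs) / len(nbrs)
    return out

dense = diffuse_once(defocus)  # one pass fills pixels adjacent to edges
```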

As described above, and referring also to Fig. 6, in the depth prediction step S4 the depth prediction algorithm executed by the depth calculation unit 14 further comprises: a count-histogram-local-maxima step S41, in which the depth calculation unit 14 finds the number of peaks in the grayscale value histogram of the original image; a build-window-from-count step S42, which builds a computation window based on that peak count; a compute-blur-on-wavelet-result step S43, which performs a neighborhood computation on the wavelet transform result centered on the window's central pixel to compute the wavelet transform result in that pixel's neighborhood; and a depth prediction result step S44, which predicts depth from the result of the neighborhood computation, that is, the blur measure.
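Steps S41 and S42 are the patent's answer to the prior art's hand-tuned window: derive the window size from the image itself. The text does not give the exact peak criterion or the count-to-window formula, so both below are illustrative assumptions.

```python
# S41: count local maxima of the grayscale histogram; S42: size the
# analysis window from that count (assumed formula: 2 * peaks + 1).

def histogram(gray_values, bins=256):
    h = [0] * bins
    for v in gray_values:
        h[v] += 1
    return h

def count_local_maxima(h):
    """A bin is a peak if strictly greater than both neighbors
    (assumed criterion)."""
    return sum(1 for i in range(1, len(h) - 1)
               if h[i] > h[i - 1] and h[i] > h[i + 1])

# Synthetic image data with three dominant gray levels
pixels = [10] * 50 + [11] * 5 + [120] * 40 + [121] * 3 + [240] * 30
peaks = count_local_maxima(histogram(pixels))
window = 2 * peaks + 1   # assumed odd window size derived from the count
```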

As described above, and referring also to Fig. 7, the wavelet transform step S23 further comprises establishing a wavelet transform threshold, executed by the image analysis unit 12 in the following steps. A convert-to-grayscale step S231 converts the original image into a grayscale image. A find-local-maxima step S232 builds a grayscale value histogram from the grayscale image and locates the grayscale values at its peaks. A map-local-maxima-to-wavelet-coefficients step S233 maps the positions in the original image of all grayscale values at the found peaks to the corresponding coefficient positions in the wavelet transform result and extracts all of those coefficient values. A threshold calculation step S234 runs the extracted coefficient values through a numerical analysis, which may be Simpson's rule, to establish the wavelet transform threshold, where f(x) denotes all of the extracted coefficient values: the three largest coefficient values are taken and the threshold is computed through Equation (1). Once the threshold Th has been computed, it is applied to the wavelet transform result I(m,n): values greater than or equal to the threshold are set to 255 (the white portions) and values below it to 0 (the black portions). Fig. 3 shows the binarized wavelet transform result before the threshold is established, and Fig. 8 shows the result after the threshold is established.
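Equation (1) itself did not survive in this record, so the Simpson step below is an assumption: a three-point Simpson weighting (1, 4, 1) over the three largest coefficients, normalized by 6 so the threshold stays within the coefficient range. Only the final binarization (>= Th maps to 255, below Th to 0) is taken directly from the text.

```python
# S234 sketch: derive a threshold Th from the extracted wavelet
# coefficients, then binarize the wavelet transform result against it.

def simpson_threshold(coeffs):
    """Assumed form of Equation (1): Simpson weights over the three
    largest coefficient values, normalized to the coefficient range."""
    a, b, c = sorted(coeffs, reverse=True)[:3]
    return (a + 4 * b + c) / 6.0

def binarize(wavelet_result, th):
    """Values >= Th become 255 (white), values < Th become 0 (black),
    as stated in the description."""
    return [[255 if v >= th else 0 for v in row] for row in wavelet_result]

coeffs = [0.1, 0.9, 0.6, 0.3, 0.8]   # extracted coefficient values (made up)
th = simpson_threshold(coeffs)
binary = binarize([[0.9, 0.2], [0.7, 0.85]], th)
```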

In summary, the apparatus of the present utility model for building a single-image depth map by combining wavelet transform and edge detection uses an image analysis algorithm to analyze an original image so that a depth calculation unit can execute a depth prediction algorithm followed by a depth diffusion algorithm to produce a depth map. Because the image analysis algorithms run quickly and accurately and require no complex computation, measurement efficiency is high; and because the utility model needs no complex, heavy computation, its cost is correspondingly reduced. It thus achieves its main objective: an apparatus that builds a single-image depth map by combining wavelet transform and edge detection, without manual intervention and consistent with the depth information perceived by the human eye.

Although the present utility model has been disclosed above by way of preferred embodiments, they are not intended to limit the scope of its claims. Anyone skilled in the art may make minor changes and modifications without departing from the spirit and scope of this utility model, so its scope of protection is not limited thereto.


Claims (6)

1. An apparatus for building a single-image depth map by combining wavelet transform and edge detection, comprising: an image capture unit for inputting an original image; an image analysis unit, communicatively linked to the image capture unit, for executing an image analysis algorithm and analyzing the original image input by the image capture unit; an image synthesis unit, communicatively linked to the image analysis unit, for performing image synthesis on the plurality of analysis results produced by the image analysis unit's analysis of the original image to generate a defocus map; and a depth calculation unit, communicatively linked to the image synthesis unit, which executes a depth prediction algorithm based on the plurality of analysis results and, upon completion, executes a depth diffusion algorithm, after which a depth map is produced.

2. The apparatus for building a single-image depth map by combining wavelet transform and edge detection of claim 1, wherein the image analysis algorithm may be a wavelet transform, an edge detection, or a combination thereof.

3. The apparatus for building a single-image depth map by combining wavelet transform and edge detection of claim 2, wherein the wavelet transform may be one of a discrete wavelet transform or a continuous wavelet transform.

4. The apparatus for building a single-image depth map by combining wavelet transform and edge detection of claim 2, wherein the edge detection may be one of the Roberts Cross operator, Prewitt operator, Sobel operator, Canny operator, compass operator, Marr-Hildreth, or the wavelet transform.

5. The apparatus for building a single-image depth map by combining wavelet transform and edge detection of claim 1, wherein the depth prediction algorithm executed by the depth calculation unit builds a window based on the number of local maxima of a grayscale value histogram of the original image and computes a blur measure based on the result of the image analysis.

6. The apparatus for building a single-image depth map by combining wavelet transform and edge detection of claim 1, wherein the depth diffusion algorithm executed by the depth calculation unit may be one of a Laplacian interpolation technique or a global interpolation algorithm.
TW105210334U 2016-07-11 2016-07-11 Apparatus for combining with wavelet transformer and edge detector to generate a depth map from a single image TWM535848U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW105210334U TWM535848U (en) 2016-07-11 2016-07-11 Apparatus for combining with wavelet transformer and edge detector to generate a depth map from a single image


Publications (1)

Publication Number Publication Date
TWM535848U true TWM535848U (en) 2017-01-21

Family

ID=58400534

Family Applications (1)

Application Number Title Priority Date Filing Date
TW105210334U TWM535848U (en) 2016-07-11 2016-07-11 Apparatus for combining with wavelet transformer and edge detector to generate a depth map from a single image

Country Status (1)

Country Link
TW (1) TWM535848U (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI613903B (en) * 2016-07-11 2018-02-01 龍華科技大學 Apparatus and method for combining with wavelet transformer and edge detector to generate a depth map from a single image
TWI657431B (en) * 2017-04-10 2019-04-21 鈺立微電子股份有限公司 Dynamic display system



Legal Events

Date Code Title Description
MM4K Annulment or lapse of a utility model due to non-payment of fees