TWI639135B - Restoration method for blurred image - Google Patents


Info

Publication number
TWI639135B
Authority
TW
Taiwan
Prior art keywords
image
point
blur
blurred
length
Prior art date
Application number
TW106139584A
Other languages
Chinese (zh)
Other versions
TW201923704A (en)
Inventor
陳昭和
陳聰毅
王翔麟
Original Assignee
國立高雄科技大學
Priority date
Filing date
Publication date
Application filed by 國立高雄科技大學 filed Critical 國立高雄科技大學
Priority to TW106139584A priority Critical patent/TWI639135B/en
Application granted granted Critical
Publication of TWI639135B publication Critical patent/TWI639135B/en
Publication of TW201923704A publication Critical patent/TW201923704A/en


Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses a method for restoring a blurred image, comprising the following steps: (1) Point spread function estimation: a Fourier transform is first applied to the blurred image, the cepstrum of the transformed values is then computed, and the blur angle and blur length of the point spread function (PSF) are estimated in the cepstral domain. (2) Single-image deblurring: the PSF is normalized using the estimated blur angle and blur length, and a single-image deconvolution is performed with the normalized PSF. (3) Clear image register: clear images, or images that have been deblurred by the single-image step, are stored in a clear image register. (4) Multi-image deblurring: corresponding feature points between the current image and the most recent image stored in the clear image register are found and used to derive the homography matrix required for a perspective transformation, which aligns the corresponding pixels of the two images; the weights of the pixels in the two images are then computed from temporal information and compared, low-weight pixels are replaced by high-weight pixels to produce restored pixels, and the restored clear image is output.

Description

Restoration method for blurred image

The present invention relates to a method for restoring blurred frames, and more particularly to a method for restoring blurred images.

With recent advances in technology, computer vision has been widely deployed at important facilities and in traffic monitoring systems such as license plate recognition, enabling intelligent surveillance systems for public safety and crime prevention. However, external factors during capture can introduce motion blur into the acquired images, and this blur may cause subsequent recognition applications to misjudge, so the blurred images must be repaired and compensated.

Most work in the field of image blur restoration addresses single-image deblurring. Single-image deblurring gives good results on artificially motion-blurred images or on images blurred by uniform motion, but performs poorly on images blurred by non-uniform motion; furthermore, its computational load is very large, which raises the requirements on the associated hardware and increases cost.

In view of the above problems, an object of the present invention is to provide a method for restoring a blurred image that solves the problems faced by the prior art.

Based on the above object, the present invention provides a method for restoring a blurred image, comprising the following steps: applying a Fourier transform to the blurred image to produce transformed values, computing the cepstrum of the transformed values, and estimating the blur angle and blur length of the point spread function in the cepstral domain; normalizing the point spread function according to the blur angle and blur length, and performing a single-image deconvolution with the normalized point spread function to produce a restored image; storing a clear image or the restored image in a clear image register; and searching for corresponding feature points between the current image and the restored image stored in the clear image register, using these feature points to derive the homography matrix required for a perspective transformation, aligning the corresponding pixels of the current image and the restored image accordingly, computing and comparing the weights of the pixels in the current image and the restored image from temporal information, and replacing low-weight pixels with high-weight pixels to produce restored pixels and output another restored image.

Preferably, when an image sequence is input, the clear image register stores the clear images of the sequence, while the blurred images of the sequence are restored to restored images by the restoration method for blurred images.

Preferably, the point spread function can be expressed by the following formula, where L is the blur length and θ is the blur angle.

Preferably, the cepstral-domain definition of the blurred image can be expressed by the following formula: C(g(x, y)) = F⁻¹{ log |F(g(x, y))| }, where F denotes the Fourier transform, F⁻¹ denotes the inverse Fourier transform, and g(x, y) is the blurred image.

Preferably, calculating the blur length and blur angle of the point spread function may comprise the following steps: (1) input the cepstrum map and locate its center point; (2) among the three directions 90°, 45°, and 0° from the center point, select the point with the largest cepstrum value as the search point; (3) move to the search point and apply a 5*5 mask whose center corresponds to the search point; (4) sum all cepstrum values within the mask and store the coordinates of the search point; (5) determine whether the direction of the search point has deviated; if not, proceed to step (6), and if so, proceed to step (7); (6) determine whether the distance between the search point and the center point exceeds a threshold; if not, return to step (2) and repeat steps (2) to (5), and if so, proceed to step (7); (7) find, in the trajectory value distribution, the turning point at which the values fall and then rise again; (8) calculate the blur length and blur angle from the turning point and the center point. Here the coordinates of the turning point are (x1, y1) and those of the center point are (x0, y0). The blur length L is obtained by substituting (x1, y1) and (x0, y0) into L = √((x1 − x0)² + (y1 − y0)²). R is the length of the base of the triangle formed by the turning point, the center point, and the X axis, obtained by substituting x1 and x0 into R = |x1 − x0|. The blur angle θ is obtained by substituting the blur length and the base length into θ = cos⁻¹(R / L).

As described above, the restoration method for blurred images of the present invention can be applied to the restoration of frames degraded by linear motion blur. The proposed video deblurring method can be implemented in common video surveillance systems or in the cameras of ordinary mobile platforms, and can deblur video streams in real time. Because the computational complexity of the proposed algorithm is small, it can be embedded in general security surveillance camera systems to increase the added value of related products.

S11 to S14, S61 to S68‧‧‧Steps

FIG. 1 is a first flowchart of the restoration method for blurred images of the present invention.

FIG. 2 is the cepstrum of an image blurred by uniform motion.

FIG. 3 is the cepstrum of an image blurred by non-uniform motion.

FIG. 4 is a second flowchart of the restoration method for blurred images of the present invention.

FIG. 5 is the search trajectory on the cepstrum map.

FIG. 6 is a 3D plot of the cepstral domain.

FIG. 7 is the search trajectory on the cepstrum of a non-uniform motion blur.

FIG. 8 is the trajectory value distribution.

FIG. 9 shows the result of single-image restoration.

FIG. 10 illustrates the transformation between two images performed by a homography matrix.

FIG. 11 shows the result of corner matching: part (a) is frame t-1; part (b) is frame t; part (c) is the corner matching result between frame t-1 and frame t.

FIG. 12 shows the result of the perspective transformation.

FIG. 13 shows the result of multi-image restoration.

In order to facilitate an understanding of the features, content, and advantages of the present invention and the effects it can achieve, the present invention is described in detail below with reference to the drawings and in the form of embodiments. The drawings are intended only for illustration and as an aid to the description; they do not necessarily reflect the true proportions and precise configuration of the invention as implemented, and therefore the proportions and layout of the attached drawings should not be interpreted as limiting the scope of the present invention in actual implementation.

The advantages and features of the present invention, and the technical means by which they are achieved, will be more readily understood by reference to the exemplary embodiments and the accompanying drawings. The present invention may, however, be embodied in different forms and should not be construed as being limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough, complete, and will fully convey the scope of the invention to those of ordinary skill in the art, the invention being defined only by the appended claims.

Please refer to FIG. 1, which is a first flowchart of the restoration method for blurred images of the present invention. As shown in the figure, the restoration method 100 for blurred images of the present invention comprises the following steps:

In step S11: a Fourier transform is applied to the blurred image to produce transformed values, the cepstrum of the transformed values is then computed, and the blur angle and blur length of the point spread function are estimated in the cepstral domain.

In step S12: the point spread function is normalized according to the blur angle and blur length, and a single-image deconvolution is performed with the normalized point spread function to produce a restored image.

In step S13: a clear image or a restored image is selected and stored in the clear image register.

In step S14: corresponding feature points between the current image and the restored image stored in the clear image register are found; the feature points are used to derive the homography matrix required for a perspective transformation, and the corresponding pixels of the current image and the restored image are aligned accordingly; the weights of the pixels in the current image and the restored image are then computed from temporal information and compared, and low-weight pixels are replaced by high-weight pixels to produce restored pixels, so that another restored image is output.
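As a hedged overview, the sketch below chains steps S11 to S14 for a single incoming frame. It is not the patent's implementation: every name in it (restore_frame, estimate_psf, deconvolve, align, fuse) is an assumption, and the helpers are injected as callables so the sketch stays self-contained.

```python
def restore_frame(blurred, register, estimate_psf, deconvolve, align, fuse):
    """One pass of steps S11-S14 over a single incoming blurred frame.

    register     : list of previously stored clear/restored frames (step S13)
    estimate_psf : callable(image) -> 2-D PSF                    (step S11)
    deconvolve   : callable(image, psf) -> restored image        (step S12)
    align        : callable(stored, current) -> aligned stored   (step S14, alignment)
    fuse         : callable(current, aligned_list) -> image      (step S14, temporal weighting)
    """
    psf = estimate_psf(blurred)            # S11: blur angle/length from the cepstrum
    single = deconvolve(blurred, psf)      # S12: single-image deconvolution
    previous = list(register)              # frames already in the clear image register
    register.append(single)                # S13: store the restored frame
    aligned = [align(frame, blurred) for frame in previous]
    return fuse(blurred, aligned) if aligned else single  # S14: multi-image deblurring
```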

Please also refer to FIGS. 2 to 13. FIG. 2 is the cepstrum of an image blurred by uniform motion; FIG. 3 is the cepstrum of an image blurred by non-uniform motion; FIG. 4 is a second flowchart of the restoration method for blurred images of the present invention; FIG. 5 is the search trajectory on the cepstrum map; FIG. 6 is a 3D plot of the cepstral domain; FIG. 7 is the search trajectory on the cepstrum of a non-uniform motion blur; FIG. 8 is the trajectory value distribution; FIG. 9 shows the result of single-image restoration; FIG. 10 illustrates the transformation between two images performed by a homography matrix; FIG. 11 shows the result of corner matching, where part (a) is frame t-1, part (b) is frame t, and part (c) is the corner matching result between frame t-1 and frame t; FIG. 12 shows the result of the perspective transformation; FIG. 13 shows the result of multi-image restoration. The above steps are described in detail below with reference to the drawings.

1. Point spread function estimation

The point spread function can be regarded as the impulse function of an optical system. Mathematically, a point light source (the input) can be represented by a point impulse function, and the resulting output light field distribution is called the impulse response, which characterizes how the image is affected. If the imaging system produces an inverted image, the image plane coordinate axes can simply be reversed with respect to the object plane coordinate axes; in the absence of distortion, evaluating the convolution integral over the image plane is then a simple process.

1.1 Image transformation

Most point spread function estimation methods operate in the spatial domain; their main approach is to make an initial estimate from local image features such as points, edges, and lines. These methods require the type of blur to be known in advance, and their accuracy decreases when the blur is severe. The present system therefore uses a frequency-domain method to compute the blur length (L) and angle (θ).

The point spread function of linear motion blur is approximated by equation (1).

where L is the blur length and θ is the blur angle.
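Equation (1) itself is not reproduced in this text; it corresponds to the usual approximation of a linear motion-blur kernel, with intensity 1/L spread along a line segment of length L at angle θ. The sketch below builds such a kernel with NumPy and OpenCV as an illustration only; the function name motion_blur_psf and the rotation-based construction are assumptions, not the patent's discretization.

```python
import numpy as np
import cv2

def motion_blur_psf(length, angle_deg, size=None):
    """Approximate linear motion-blur PSF of given length (pixels) and angle (degrees),
    normalized so that its values sum to 1."""
    length = max(int(round(length)), 1)
    if size is None:
        size = length if length % 2 == 1 else length + 1
    psf = np.zeros((size, size), dtype=np.float64)
    center = size // 2
    # Draw a horizontal segment of the requested length through the center.
    start = center - length // 2
    psf[center, start:start + length] = 1.0
    # Rotate the segment to the requested blur angle.
    rot = cv2.getRotationMatrix2D((center, center), angle_deg, 1.0)
    psf = cv2.warpAffine(psf, rot, (size, size))
    total = psf.sum()
    return psf / total if total > 0 else psf

# Example: a 15-pixel blur at 30 degrees.
kernel = motion_blur_psf(15, 30.0)
```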

1.2 Cepstral-domain conversion

The cepstral domain was first proposed in one-dimensional signal processing and was later introduced into image processing. The cepstral domain of a blurred image g(x, y) is defined by equation (2), where F and F⁻¹ denote the Fourier transform and the inverse Fourier transform, respectively; the equation shows that the cepstrum of an image is the inverse Fourier transform of the logarithm of the original image's power spectrum.

C(g(x, y)) = F⁻¹{ log |F(g(x, y))| } (2)
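A minimal NumPy sketch of equation (2) follows; the small epsilon inside the logarithm is an assumption added for numerical safety and is not part of the definition.

```python
import numpy as np

def cepstrum(image):
    """Cepstrum of a 2-D image: inverse FFT of the log magnitude spectrum (eq. 2)."""
    spectrum = np.fft.fft2(image.astype(np.float64))
    log_mag = np.log(np.abs(spectrum) + 1e-8)  # epsilon avoids log(0)
    return np.real(np.fft.ifft2(log_mag))
```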

1.3 Calculating the blur length and angle

Our experiments show that, for uniform motion blur, two distinct dark points appear in the cepstrum map, as indicated by the white circles in FIG. 2; the distance between the center point and such a point is the blur length of the image, and the angle θ formed between that point and the X axis is the blur angle. For non-uniform motion blur, however, no distinct dark points appear, as shown in FIG. 5. The present invention therefore proposes an algorithm that can compute the blur length and angle for both uniform and non-uniform motion blur. As shown in FIG. 4, the algorithm comprises the following steps:

In step S61: input the cepstrum map and locate its center point.

In step S62: among the three directions 90°, 45°, and 0° from the center point, select the point with the largest cepstrum value as the search point.

In step S63: move to the search point and apply a 5*5 mask whose center corresponds to the search point.

In step S64: sum all cepstrum values within the mask and store the coordinates of the search point.

In step S65: determine whether the direction of the search point has deviated; if not, proceed to step S66, and if so, proceed to step S67.

In step S66: determine whether the distance between the search point and the center point exceeds the threshold; if not, return to step S62 and repeat steps S62 to S65, and if so, proceed to step S67.

In step S67: find, in the trajectory value distribution, the turning point at which the values fall and then rise again.

In step S68: calculate the blur length and the blur angle from the turning point and the center point.

As described above, suppose the cepstrum map of a non-uniform motion blur is input. The first step locates the center point. In the second step, the point with the largest cepstrum value among the 90°, 45°, and 0° directions from the center is selected as the search point. The third step moves to the search point and applies a 5*5 mask whose center corresponds to the search point. The fourth step sums all cepstrum values within the mask and stores the search point coordinates. The fifth step checks whether the direction of the search point has deviated; if not, the distance between the search point and the center point is compared against the threshold, and if the threshold is not exceeded the procedure returns to the second step and repeats the above steps. If a deviation occurs at the fifth step, the search stops, as shown by the trajectory inside the white circle in FIG. 5.

The deviation occurs because the values at the falling point are mostly negative, so during the search the trajectory deflects toward the surrounding positive values; the depression can be seen clearly in the 3D plot of FIG. 6. Non-uniform motion blur, however, usually has no distinct falling point, so no deflection occurs at the fifth step, as shown in FIG. 7. In that case the sixth step checks whether the distance between the search point and the center point exceeds the set threshold and, if so, stops the search and proceeds to the seventh step. From the values stored for each mask, the turning point at which the values stop decreasing and start increasing can be found, as is clear from the trajectory value distribution in FIG. 8. Finally, the distance between the turning point and the center point is computed: with the turning point at (x1, y1) and the center point at (x0, y0), the Euclidean distance formula (3) gives the blur length L. The two points and the x axis can be viewed as a triangle with L as the hypotenuse and R as the base; the base length is computed with equation (4), and the angle is then obtained with the inverse cosine formula (5).

L = √((x1 − x0)² + (y1 − y0)²) (3)

R = |x1 − x0| (4)

θ = cos⁻¹(R / L) (5)
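One possible reading of steps S61 to S68 is sketched below. The deviation test, the threshold value, and the names estimate_blur_params and DIRECTIONS are assumptions made for illustration; the patent does not prescribe this exact implementation.

```python
import numpy as np

# Unit steps (row, col) for the 90, 45 and 0 degree search directions.
DIRECTIONS = [(-1, 0), (-1, 1), (0, 1)]

def estimate_blur_params(cep, threshold=40, mask_half=2):
    """Estimate (blur_length, blur_angle_deg) from a cepstrum map (steps S61-S68)."""
    h, w = cep.shape
    cy, cx = h // 2, w // 2                        # S61: center point
    y, x = cy, cx
    sums, coords = [], []
    prev_dir = None
    while True:
        # S62: among the 90/45/0 degree neighbors, take the largest cepstrum value.
        best = max(DIRECTIONS, key=lambda d: cep[y + d[0], x + d[1]])
        y, x = y + best[0], x + best[1]            # S63: move to the search point
        win = cep[y - mask_half:y + mask_half + 1, x - mask_half:x + mask_half + 1]
        sums.append(win.sum())                     # S64: store mask sum and coordinates
        coords.append((y, x))
        # S65: deviation test (assumed here: the chosen direction changed between steps).
        deviated = prev_dir is not None and best != prev_dir
        prev_dir = best
        # S66: stop when deviated or when the distance from the center exceeds the threshold.
        if deviated or np.hypot(x - cx, y - cy) > threshold:
            break
    # S67: turning point where the stored sums fall and then rise again.
    turn = next((i for i in range(1, len(sums) - 1)
                 if sums[i] < sums[i - 1] and sums[i] < sums[i + 1]),
                len(sums) - 1)
    ty, tx = coords[turn]
    # S68: blur length and angle from the turning point and the center (eqs. 3-5).
    L = np.hypot(tx - cx, ty - cy)
    R = abs(tx - cx)
    theta = np.degrees(np.arccos(min(R / L, 1.0))) if L > 0 else 0.0
    return L, theta
```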

2. Single-image deblurring

This step consists of two parts: normalization of the point spread function, and single-image deconvolution. Because the point spread function (PSF) is unknown before restoration and the iterative computation needs an initial seed, the blur length L and angle θ computed above are used to initialize the point spread function, which is then refined with the Richardson-Lucy iterative restoration algorithm. The Richardson-Lucy computation is given by equation (6), in which B represents a blurred image obtained by averaging, according to the point spread function, the clear images I_m at different positions, and CM is the total number of clear images I_m. The Lucy-Richardson update formula for I_m can be derived as equation (7), where I_{t+1} is the updated image in the iteration, I_t is the image before the update, and the operator appearing in the formula denotes the convolution operation. The Richardson-Lucy algorithm assumes that the image noise follows a Poisson distribution, whereas the method of the present invention assumes that the image noise follows a Gaussian distribution, in which case equation (7) can be expressed as equation (8).

2.1 Point spread function normalization

From the above formulas, in order to convolve the blurred image, the blur length and angle obtained from the point spread function estimation can be treated as a two-dimensional matrix. To allow the Richardson-Lucy method to carry out the convolution properly, equation (9) is used to normalize the values of this two-dimensional matrix to the range 0 to 1 with a total of 1, the denominator being the sum of all element values in the matrix.
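Equation (9) amounts to dividing every element of the kernel by the sum of all elements. A minimal NumPy sketch follows; the guard against an all-zero kernel is an assumption added for the sketch.

```python
import numpy as np

def normalize_psf(psf):
    """Normalize a 2-D PSF so its values lie in [0, 1] and sum to 1 (eq. 9)."""
    psf = np.asarray(psf, dtype=np.float64)
    total = psf.sum()
    if total == 0:  # guard added for the sketch; not part of the patent
        raise ValueError("PSF sums to zero")
    return psf / total
```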

2.2 Single-image deconvolution

The blurred image is deconvolved using equation (8). Because I_{t+1} is unknown at the start of the Richardson-Lucy iteration, the original blurred image is used to initialize I_t; the image is then iterated through the above formulas and restored to a clear image. FIG. 9 shows the result after restoration.
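Equations (6) to (8) are not reproduced in this text, so the sketch below uses the standard Richardson-Lucy update (the Poisson-noise form) rather than the patent's Gaussian variant of equation (8); the iteration count, the epsilon, and the function name are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, num_iter=30, eps=1e-8):
    """Standard Richardson-Lucy deconvolution; the blurred image is the initial estimate."""
    estimate = blurred.astype(np.float64).copy()   # initialize I_t with the blurred image
    psf_flipped = psf[::-1, ::-1]                  # adjoint of convolution with the PSF
    for _ in range(num_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / (reblurred + eps)
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate
```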

3. Clear image register

When the image sequence is first input, each frame is judged as blurred or not. If the frame is clear, it is stored in the clear image register and output as a clear image. If the input frame is blurred and the degree of blur is large, single-image deblurring is performed first and the result is stored in the clear image register. The register stores K clear or restored images, and when a new frame enters, the oldest one is discarded. In addition, a counter is incremented before each image enters the register; when the counter value exceeds K before a frame enters, it means that none of the frames currently in the register has undergone single-image restoration, so single-image restoration is forced and the result is then stored in the clear image register. This prevents blur errors from accumulating.
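One possible reading of this register logic is sketched below with a fixed-size deque; the capacity K, the blur test passed in as is_blurred, and the class name ClearImageRegister are illustrative assumptions.

```python
from collections import deque

class ClearImageRegister:
    """Fixed-size buffer of K clear (or restored) frames with a forced-restoration counter."""

    def __init__(self, k=5):
        self.k = k
        self.frames = deque(maxlen=k)   # the oldest frame is dropped automatically
        self.counter = 0                # frames stored since the last single-image restoration

    def push(self, frame, is_blurred, deblur_single):
        if is_blurred or self.counter >= self.k:
            # Blurred frame, or too many un-restored frames: force single-image deblurring.
            frame = deblur_single(frame)
            self.counter = 0
        else:
            self.counter += 1
        self.frames.append(frame)
        return frame
```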

4. Multi-image deblurring

A real image sequence usually contains both clear and blurred frames. Applying single-image deblurring to every blurred frame would be very time-consuming and computationally expensive, making real-time use difficult. The method of the present invention therefore uses the concept of temporal information to achieve real-time video restoration, which mainly comprises image alignment and video restoration. First, the K images in the image register, containing both original clear images and restored images, are taken. Then, to obtain data from the temporal neighborhood, feature points of the current image and the previous image are searched and used to compute the homography matrix required for the perspective transformation, which gives the transformation between the two images; applying the perspective transformation aligns the pixels of the two images. After the aligned images are obtained, a temporal-neighborhood weighted computation is performed over the previous K aligned frames, so that each pixel of the current image obtains a new computed value.

4.1 Image alignment

Consecutive images are matched and aligned using the perspective transformation. As shown in FIG. 10, a homography matrix H can be computed from two consecutive images to perform the image transformation; in FIG. 10, frame t-1 is transformed by H so that its viewing angle becomes the same as that of frame t. The transformation may include translation, rotation, and scaling. Shi-Tomasi corner detection is then used to find corners in the image, and these corners serve as the feature points for the subsequent homography matrix, as shown in FIG. 11. The image is perspective-transformed through the homography matrix; the main purpose is to align all images in the clear image register with the current blurred image to be restored, so that the image weight values in the subsequent video restoration process can be computed.

In image formation, a camera basically follows the pinhole perspective transformation model. Let x = [X, Y, Z]^T be an arbitrary point in three-dimensional space and m = [u, v]^T its corresponding image point, with homogeneous coordinates x' = [X, Y, Z, 1]^T and m' = [u, v, 1]^T. According to the perspective transformation model, the three-dimensional point x and its corresponding point m satisfy equation (10), where λ is a constant, f_x and f_y are the camera focal lengths, s is the skew coefficient of the image coordinates, determined by the angle between the X and Y axes of the image coordinate system, R is a 3×3 rotation matrix, and T is a 3×1 translation vector. If the X-Y plane of the three-dimensional coordinate system is chosen to coincide with the two-dimensional plane, the three-dimensional Z coordinate can be set to 0; if the i-th column of the rotation matrix R is denoted r_i, equation (10) can be rewritten as equation (11), where x' still denotes the homogeneous coordinates of any point x on the two-dimensional plane, so x' = [X, Y, 1]^T. This yields equation (12), in which H is the homography matrix between the three-dimensional plane and the corresponding two-dimensional plane. Because of the constant λ, the mapping between three and two dimensions reduces to a homography matrix, and four or more known feature points are required to solve for it.

As stated above, at least four sets of feature points are required to establish the homography matrix. Substituting each feature point (X, Y) and its corresponding point (u, v) into λm' = Hx' of equation (12) yields the four sets of equations (13); each pair of corresponding points provides two linear equations in the homography matrix H, and the four sets of equations (13) are used to compute the values h11 to h33 of the perspective transformation matrix H, thereby obtaining H. If the parameter λ is set to 0, the homography can be computed by least squares. However, not all points fit this perspective transformation, and outliers would severely bias the homography estimate. The RANSAC (RANdom SAmple Consensus) method is therefore used: a set of four points is drawn at random from the feature points in the frame, the homography H is estimated from this subset by least squares, the best fit is sought by minimizing the sum of squared errors between the model and the actual data, the consensus (inlier) set is returned, and finally the quality ratio of the homography is computed; the best subset is taken as the value of the homography matrix. FIG. 12 shows the result after the perspective transformation: H_{t-2} is the homography matrix of frame t-2 and H_{t-1} that of frame t-1; after frame t-2 and frame t-1 are transformed by their respective homography matrices, they are rectified to coincide with frame t.
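A compact OpenCV sketch of the alignment step described above: Shi-Tomasi corners, corner pairing, RANSAC homography, and a perspective warp. Pairing the corners with pyramidal Lucas-Kanade optical flow is an assumption, since the patent does not state how corner correspondences are obtained.

```python
import cv2
import numpy as np

def align_to_current(prev_gray, curr_gray):
    """Warp prev_gray onto curr_gray using Shi-Tomasi corners and a RANSAC homography."""
    # Shi-Tomasi corner detection on the previous frame.
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
    # Track the corners into the current frame (assumed pairing method).
    tracked, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, corners, None)
    good_prev = corners[status.ravel() == 1]
    good_curr = tracked[status.ravel() == 1]
    # At least four correspondences are needed to solve for the homography.
    H, inliers = cv2.findHomography(good_prev, good_curr, cv2.RANSAC, 3.0)
    h, w = curr_gray.shape
    aligned = cv2.warpPerspective(prev_gray, H, (w, h))
    return aligned, H
```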

4.2 Video restoration

Image alignment makes pixel-to-pixel matching easier, which makes multi-image restoration more effective. Through the pixel-to-pixel matching, weight values between pixels can be compared. The weight value is computed as in equation (14), where l_{i,x} is used to find the pixel that will replace the pixel of the current blurred image; the CK clear images in the image register are used to compute the weight values, and finally the pixel with the highest weight value is selected to replace the blurred image pixel value. First, for each pixel q of an image CB_{t-i,q} in the register, a weighted average of the surrounding pixels is computed, as shown in equation (15), where I_{t,p} is the restored pixel value intended to replace pixel p of the current blurred frame, t is the time index of the frame, the computation runs over the CK frames in the register, and W is the sum of all weights w(t, p, t-i, q). The weight value w(t, p, t-i, q) is computed as in equation (16): H(CB_{t-i,q}) is the image CB_{t-i,q} after the perspective transformation H, and the Euclidean distance between H(CB_{t-i,q}) and the current blurred frame B_{t,p} is passed through an exponential function, which normalizes the weight values to [0, 1]; the farther apart H(CB_{t-i,q}) and B_{t,p} are, the closer the exponential result is to zero, meaning that such a point has no reference value. The constant is set to σ = 0.5; it mainly rescales the Euclidean distance computed between H(CB_{t-i,q}) and B_{t,p}, so that larger distances approach 0 after the exponential operation. Finally, the pixel value with the largest weight replaces the pixel value of the current blurred frame. FIG. 13 shows the result after multi-image restoration: the left part of the figure is the original blurred image and the right part is the restored result, and the edges of objects in the restored image are visibly sharper.
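Equations (14) to (16) are not reproduced here, so the sketch below only shows the general shape of the temporal weighting: an exponential of the Euclidean distance between each aligned register patch and the current blurred patch with σ = 0.5, and replacement of the current pixel by the highest-weight candidate. The patch-based comparison, the Gaussian form of the exponent, and the function name are assumptions.

```python
import numpy as np

SIGMA = 0.5  # constant from the description; rescales the Euclidean distance

def restore_pixel(blurred_patch, aligned_patches, candidates):
    """Pick the candidate pixel whose aligned patch best matches the blurred patch.

    blurred_patch   : patch around pixel p in the current blurred frame B_t
    aligned_patches : patches around q in the aligned register frames H(CB_{t-i})
    candidates      : candidate center-pixel values, one per register frame
    """
    weights = []
    for patch in aligned_patches:
        dist = np.linalg.norm(patch.astype(np.float64) - blurred_patch.astype(np.float64))
        weights.append(np.exp(-(dist ** 2) / (2.0 * SIGMA ** 2)))  # assumed Gaussian form
    # Replace the blurred pixel with the highest-weight candidate.
    return candidates[int(np.argmax(weights))]
```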

As described above, the restoration method for blurred images of the present invention can be applied to the restoration of frames degraded by linear motion blur. The proposed video deblurring method can be implemented in common video surveillance systems or in the cameras of ordinary mobile platforms, and can deblur video streams in real time. Because the computational complexity of the proposed algorithm is small, it can be embedded in general security surveillance camera systems to increase the added value of related products.

The embodiments described above are intended only to illustrate the technical ideas and features of the present invention, so that those skilled in the art can understand and implement its content; they are not intended to limit the patent scope of the present invention. Any equivalent change or modification made in accordance with the spirit disclosed by the present invention shall still fall within the patent scope of the present invention.

Claims (5)

1. A method for restoring a blurred image, comprising the following steps: applying a Fourier transform to a blurred image in an image sequence to produce transformed values, then computing the cepstrum of the transformed values, and estimating the blur angle and blur length of the point spread function in the cepstral domain; normalizing the point spread function according to the blur angle and the blur length, and performing a single-image deconvolution with the normalized point spread function to produce a restored image; storing a clear image of the image sequence or the restored image in a clear image register; and searching for corresponding feature points between a current image and the restored image stored in the clear image register, using the feature points to derive the homography matrix required for a perspective transformation, aligning the corresponding pixels of the current image and the restored image accordingly, computing and comparing the weights of the pixels in the current image and the restored image from temporal information, and replacing low-weight pixels with high-weight pixels to produce restored pixels and output another restored image; wherein the current image is the next blurred image to be processed after the clear image or the restored image has been stored in the clear image register.

2. The method for restoring a blurred image of claim 1, wherein, when the image sequence is input, the clear image register stores the clear images of the image sequence, and the blurred images of the image sequence are restored to restored images by the method for restoring a blurred image.

3. The method for restoring a blurred image of claim 1, wherein the point spread function is expressed by the following formula, in which L is the blur length and θ is the blur angle.

4. The method for restoring a blurred image of claim 3, wherein the cepstral-domain definition of the blurred image is expressed by the following formula: C(g(x, y)) = F⁻¹{ log |F(g(x, y))| }, where F denotes the Fourier transform, F⁻¹ denotes the inverse Fourier transform, and g(x, y) is the blurred image.
5. The method for restoring a blurred image of claim 4, wherein calculating the blur length and the blur angle of the point spread function comprises the following steps: (1) inputting a cepstrum map and locating a center point; (2) selecting, among the three directions 90°, 45°, and 0° from the center point, the point with the largest cepstrum value as a search point; (3) moving to the search point and applying a 5*5 mask whose center corresponds to the search point; (4) summing all cepstrum values within the mask and storing the coordinates of the search point; (5) determining whether the direction of the search point has deviated; if not, proceeding to step (6), and if so, proceeding to step (7); (6) determining whether the distance between the search point and the center point exceeds a threshold; if not, returning to step (2) and repeating steps (2) to (5), and if so, proceeding to step (7); (7) finding, in a trajectory value distribution, the turning point at which the values fall and then rise again; and (8) calculating the blur length and the blur angle from the turning point and the center point; wherein the coordinates of the turning point are (x1, y1), the coordinates of the center point are (x0, y0), and L is the blur length, obtained by substituting (x1, y1) and (x0, y0) into L = √((x1 − x0)² + (y1 − y0)²); R is the length of the base of the triangle formed by the turning point, the center point, and the X axis, obtained by substituting x1 and x0 into R = |x1 − x0|; and θ is the blur angle, obtained by substituting the blur length and the base length into θ = cos⁻¹(R / L).
TW106139584A 2017-11-16 2017-11-16 Restoration method for blurred image TWI639135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW106139584A TWI639135B (en) 2017-11-16 2017-11-16 Restoration method for blurred image


Publications (2)

Publication Number Publication Date
TWI639135B true TWI639135B (en) 2018-10-21
TW201923704A TW201923704A (en) 2019-06-16

Family

ID=64797600


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI827771B (en) * 2018-12-26 2024-01-01 南韓商矽工廠股份有限公司 Image processing equipment and methods

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200828183A (en) 2006-12-17 2008-07-01 Blur Technologies Ltd D Image enhancement using hardware-based deconvolution
CN102708550A (en) 2012-05-17 2012-10-03 浙江大学 Blind deblurring algorithm based on natural image statistic property
CN103279934A (en) 2013-06-07 2013-09-04 南京大学 Remote sensing image recovery method based on little support domain regularization inverse convolution
CN104704806A (en) 2013-03-28 2015-06-10 富士胶片株式会社 Image-processing device, image-capturing device, image-processing method, program, and recording medium





Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees