TW201032592A - Method for upscaling images and videos and associated image processing device - Google Patents

Method for upscaling images and videos and associated image processing device

Info

Publication number
TW201032592A
Authority
TW
Taiwan
Prior art keywords
image
interpolation
input image
edge
processing
Prior art date
Application number
TW98106352A
Other languages
Chinese (zh)
Other versions
TWI384876B (en)
Inventor
Kai Wei
Hao Huang
Peng-Fei Li
Original Assignee
Arcsoft Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Arcsoft Hangzhou Co Ltd
Priority to TW98106352A
Publication of TW201032592A
Application granted
Publication of TWI384876B

Landscapes

  • Image Processing (AREA)

Abstract

A method for upscaling images and videos and an associated image processing device are provided. The method provides a preprocess module for extracting a high-frequency portion of an image inputted into the device and for decomposing the image into plane and edge regions. The method also provides a composite upscaling module for executing the upscaling processes on the image and on its high-frequency portion respectively, wherein the upscaling of the plane regions of the image and of the high-frequency portion is based on a simple interpolation, while the upscaling of the edge regions of the image and of the high-frequency portion is based on both a complex interpolation and the simple interpolation. The upscaling results of the image and of the high-frequency portion are then combined by a fusion process, so as to output an image having sharp but not blocky edges, rich details and strong contrast.

Description

VI. Description of the Invention

[Technical Field]

The present invention relates to a method for upscaling images and video, and more particularly to a method that, when a low-resolution input image is enlarged and displayed on a high-resolution video device, keeps the enlarged output image anti-aliased, with sharp edges, rich detail and strong contrast.

[Prior Art]

In recent years digital imaging devices such as digital cameras and camcorders have been introduced at an ever faster pace: image quality keeps improving, the devices keep shrinking, and prices keep falling, so such devices have become indispensable tools in daily life and work. Taking the popular camera phone as an example, besides a CCD or CMOS sensor for capturing pictures it carries a small liquid-crystal display for browsing them. When the user photographs a scene, the captured image is stored on an inserted memory card; when the user selects an image on the phone, the phone reads it from the memory card, re-encodes it, and shows a reduced version on the LCD, where the user can shrink, enlarge, drag or resize it. Thus, whether the stored image occupies several megabytes or only a few hundred kilobytes (for example 120 KB), it is kept on a storage device such as a memory card, hard disk or flash drive; when the user selects it for viewing, the device reads it back, re-encodes it into a thumbnail of tens of kilobytes (for example 75 KB) or a few kilobytes (for example 7.5 KB), and displays it on the small LCD, where the user can again zoom, drag or resize it as needed.

As the resolution and shooting speed of digital imaging devices keep rising, they are widely used in professional fields such as criminal investigation, biology, medicine and astronomy to preserve important evidence: key clues, exhibits or crime-scene records in criminal cases; new findings or experimental results in the life sciences; X-ray or computed-tomography images from which medical staff judge symptoms. How to preserve such material in digital form for later reference and comparison, and how to enlarge a digital image with little distortion so that a high-resolution, recognizable result is available for inspection, has therefore become a subject of great importance in these fields.

Image and video upscaling has been studied for many years, from the early linear techniques to the later edge-based techniques. Commonly used linear techniques include bilinear interpolation, bicubic interpolation and the Lanczos method, while the most representative edge-based technique is NEDI (New Edge-Directed Interpolation). These techniques still have many drawbacks. The linear techniques tend to produce blocky edges, loss of detail and blurry edges. The edge-based techniques interpolate along image edges according to the gradient direction; although they remove part of the edge aliasing and blur, they depend heavily on the accuracy of the estimated edge direction and are prone to interpolation errors when that direction is inaccurate, a problem that is especially severe in detail-rich regions with cluttered edges. In addition, edge-based techniques usually require heavy computation to guarantee accurate results, which hurts processing efficiency. Finally, because current upscaling techniques compute each output pixel as a weighted sum over a neighborhood of pixels, they act as low-pass filters on the input image, so upscaling inevitably loses the sharpness and detail information (the high-frequency components) of the original image. To restore image quality, practitioners often apply enhancement and restoration after enlargement, but such processing easily produces edge overshoot and ringing effects. How to provide a new upscaling technique that enlarges low-resolution images and video for display on high-resolution video devices while keeping the result clear and its relevant features recognizable has therefore become an important open problem for image-processing practitioners and designers.

[Summary of the Invention]

In view of the above, one object of the present invention is to provide a method for upscaling images and video and an associated image processing device that improve upon the shortcomings of the conventional upscaling techniques described above.

The present invention discloses a method for upscaling images and video, applied to an image processing device so that, after a digital image is input, the device outputs an enlarged digital image at a predetermined upscaling ratio. The method provides a preprocessing module and a composite upscaling module. The preprocessing module applies a high-pass filter to the input image to extract its high-frequency portion, which is later used for high-frequency compensation of the enlarged image, and performs an image decomposition that extracts the gradient of the input image with a gradient operator and, according to a preset fixed threshold, decomposes the input image into plane (flat) regions and edge regions, marking both on the input image. The composite upscaling module upscales the original input image, the flat regions, the edge regions and the high-frequency portion separately. The original input image and the flat regions are upscaled with a simple interpolation such as bicubic interpolation. The edge regions and the high-frequency portion are upscaled with both a complex interpolation and the simple interpolation: the complex interpolation applies directional interpolation to the pixels of the edge regions and of the high-frequency portion, and the result of the directional interpolation is then subjected to a confidence process, in which the clearer the edge direction, the higher the confidence of the directional result, and the less clear the direction, the lower the confidence. According to the obtained confidence, the directional-interpolation result and the simple-interpolation result are combined by a weighted sum. Finally, the upscaled original input image, flat regions, edge regions and high-frequency portion are fused, so that, with a small computational load, low complexity and high speed, the method outputs, at the predetermined ratio, an image that is anti-aliased and has sharp edges, rich detail and strong contrast.

The present invention further discloses an image processing device that, after a digital image is input, outputs an enlarged digital image at an upscaling ratio.

The image processing device includes: a preprocessing module that applies a high-pass filter to the input image to extract its high-frequency portion, used for high-frequency compensation of the upscaled result, and that performs an image decomposition which extracts the image gradient with a gradient operator and, according to a preset fixed threshold, decomposes the input image into flat regions and edge regions and marks both on the input image; a composite upscaling module that upscales the original input image and the flat regions with a simple interpolation, upscales the edge regions and the high-frequency portion with both a complex interpolation and the simple interpolation, applies a confidence process to the result of the complex interpolation (the clearer the edge direction, the higher the confidence, and the less clear the direction, the lower the confidence), and then combines the complex and simple results by a weighted sum according to that confidence; and a fusion processing unit that fuses the upscaled original input image, flat regions, edge regions and high-frequency portion into the output image.

So that the examiner may better understand the objects, processing and effects of the present invention, embodiments are described in detail below with reference to the drawings.

[Embodiments]

The present invention is a method for upscaling images and video. Referring to FIG. 1, the method is applied to an image processing device 1 so that, after a digital image is input, the device 1 outputs an enlarged digital image at a predetermined ratio; a low-resolution input image or video, when enlarged and shown on a higher-resolution video device, still appears clear and its relevant features remain recognizable. In a preferred embodiment, referring to FIG. 1, the method provides a preprocess module 10 and a composite upscaling module 30. Referring to FIG. 2, the preprocessing module 10 applies a high-pass filtering process 11 to the input image to extract its high-frequency portion, used in later steps to compensate the high frequencies of the upscaled result, and applies an image decomposition process 12 that uses a gradient operator to decompose the input image into flat regions and edge regions. The composite upscaling module 30 upscales the original input image, the flat regions, the edge regions and the high-frequency portion separately: the original input image and the flat regions with a simple interpolation, and the edge regions and the high-frequency portion with a complex interpolation.

Finally, the upscaled original input image, flat regions, edge regions and high-frequency portion are combined by a fusion process 33, which produces an output image at the predetermined upscaling ratio.
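As a rough illustration of this flow, the following is a minimal NumPy/SciPy sketch of the two stages described above, not the patent's actual implementation: the Gaussian-based high-pass filter, the cubic spline zoom standing in for the bicubic interpolator, the reuse of the simple path as a placeholder for the directional path, and the gain and threshold values are all assumptions.

```python
import numpy as np
from scipy import ndimage

def preprocess(img, thresh_d=30.0, sigma=1.0):
    """Preprocessing module 10: extract a high-frequency portion and an edge mask."""
    img = img.astype(np.float64)
    hf = img - ndimage.gaussian_filter(img, sigma)      # high-pass filtering 11 (assumed kernel)
    gx, gy = ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0)
    edge_mask = np.hypot(gx, gy) > thresh_d             # image decomposition 12
    return hf, edge_mask

def composite_upscale(img, hf, edge_mask, scale=2, gain=0.5):
    """Composite upscaling module 30: simple path everywhere, complex path on edges,
    then fusion 33 with the upscaled high-frequency portion."""
    simple = lambda x: ndimage.zoom(x, scale, order=3)  # stand-in for bicubic interpolation
    complex_path = simple                               # placeholder for directional interpolation
    base = simple(img.astype(np.float64))
    edges = complex_path(img.astype(np.float64))
    mask = simple(edge_mask.astype(np.float64)) > 0.5
    out = np.where(mask, edges, base)                   # edge regions take the complex-path result
    return out + gain * simple(hf)                      # high-frequency compensation

img = np.tile(np.linspace(0, 255, 64), (64, 1))
hf, mask = preprocess(img)
print(composite_upscale(img, hf, mask).shape)           # (128, 128)
```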

由於’目前使用的傳統插值放大技術都具有低通濾波 的特性’輸入圖像被放大後,放大的輸出圖像必然會損失 掉輸入圖像中的高頻資訊’在該實施例中,復參閱第1 圖所示,該預處理模組1〇為了防止高頻資訊的過分損 失,以為後續處理做好準備工作,會對輸入圖像先進行該 高通濾波處理11,參閱第2圖所示,以提取輸入圖像中 高頻成分,在輸入圖像的放大處理過程中,該高頻部分與 該邊緣區域是使用相同的插值運算法被放大,當原輸入圖 像、平坦區域、邊緣區域及高頻部分都完成放大後,再對 其進行該融合處理(fUsion)33,以補償輸入圖像在放大過 程中損失的高頻部份。 此外’由於人眼往往對圖像中梯度較強的邊緣區域特 別敏感,因此,在該實施例中,為了減少後續插值放大處 理的運算量,以簡化及加速整體的運算效能,僅對圖像中 人眼特別敏感的邊緣區域使用複雜插值運算(如:方向插 值運算(directional interpolation)) ’進行高精確度的放大處 理’對於圖像中平坦區域則使用簡單插值運算,如:雙立 方插值運算(bicubic interpolation) ’進行放大處理,此一權 宜作法,最終仍能在不影響視覺效果的前提下,獲得清晰 的放大圖像。另,由於圖像中較為雜亂的邊緣區域(如: 草地等),其邊緣方向不易準確判斷,且人眼對於該等雜 亂的邊緣區域的放大處理是否準確,也不會太感興趣,所 以,在該實施例中,對於該等雜亂的邊緣區域也同時使用 簡單插值運算,以簡化插值放大處理的複雜度及運算量。 201032592 為了實現前述目的’參閱第3圖所示,該實施例在對輸入 圖像執行該圖像分解處理12時,係先針對輸入圖像進行 一圖像分解121,以分解出平坦區域和邊緣區域,其作法 係利用梯度運算元’提取高頻成分的梯度,在該實施 例中係使用sobel運算元(operator),提取高頻成分的梯度 - ,然後,根據人眼對梯度變化的敏感程度,使用固定 ' 門限為艮據下列公式(1)分解出平坦區域(plane) 和邊緣區域(edge),並對南頻成分的圖元^pixei)進行標記 ❹ Label{x) ·Since 'the traditional interpolation amplification technique currently used has the characteristics of low-pass filtering', after the input image is enlarged, the amplified output image will inevitably lose the high-frequency information in the input image. In this embodiment, As shown in FIG. 1 , in order to prevent excessive loss of high-frequency information, the pre-processing module 1 performs preparatory work for subsequent processing, and performs high-pass filtering processing on the input image first. Referring to FIG. 2, To extract high-frequency components in the input image, during the amplification process of the input image, the high-frequency portion and the edge region are enlarged using the same interpolation algorithm, when the original input image, the flat region, the edge region, and the height are After the frequency portion is amplified, the fusion processing (fUsion) 33 is performed to compensate the high frequency portion of the input image lost during the amplification process. In addition, since the human eye is often particularly sensitive to the edge regions with strong gradients in the image, in this embodiment, in order to reduce the amount of computation of the subsequent interpolation amplification processing, to simplify and accelerate the overall computational performance, only the image The edge area that is particularly sensitive to the human eye uses complex interpolation operations (eg, directional interpolation) to perform high-precision amplification processing. For flat areas in the image, simple interpolation operations such as bicubic interpolation are used. (bicubic interpolation) 'Amplify the processing, this is an expedient method, and finally can obtain a clear magnified image without affecting the visual effect. In addition, due to the more chaotic edge regions in the image (such as grass, etc.), the edge direction is not easy to accurately judge, and the human eye is not too interested in the amplification processing of the disordered edge regions, so, In this embodiment, simple interpolation operations are also used for the chaotic edge regions to simplify the complexity and computational complexity of the interpolation amplification process. 201032592 In order to achieve the foregoing object, as shown in FIG. 3, when performing the image decomposition processing 12 on an input image, the embodiment first performs an image decomposition 121 on the input image to decompose the flat region and the edge. 
The image decomposition 121 uses a gradient operator to extract the gradient of the high-frequency components; in this embodiment the Sobel operator is used, yielding the gradient Grd(x). Then, according to how sensitive the human eye is to gradient changes, a fixed threshold Thresh_D is used to separate flat regions (plane) from edge regions (edge) according to formula (1) below, and the pixels are marked with a label Label(x):

Label(x) = Edge, if Grd(x) > Thresh_D; Label(x) = Plane, if Grd(x) < Thresh_D   (1)

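The labelling step of formula (1) can be sketched as follows. This is only an illustration under assumptions: the gradient is taken on the image itself rather than on its high-frequency components, and the threshold value is arbitrary.

```python
import numpy as np
from scipy import ndimage

def label_regions(img, thresh_d=30.0):
    """Formula (1): a pixel whose gradient magnitude Grd(x) exceeds Thresh_D is
    labelled Edge (True); every other pixel is labelled Plane (False)."""
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)      # Sobel operator, horizontal derivative
    gy = ndimage.sobel(img, axis=0)      # Sobel operator, vertical derivative
    grd = np.hypot(gx, gy)               # gradient magnitude Grd(x)
    return grd > thresh_d

img = np.zeros((8, 8)); img[:, 4:] = 255.0     # a vertical step edge
print(int(label_regions(img).sum()))           # only the columns around the step are marked
```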

Next, referring again to FIG. 3, a de-clutter process 122 is applied to the rather chaotic edge pixels inside the edge regions in order to remove them. For a given edge pixel, a neighbourhood of a predetermined size M x N is extracted, and the number N_edge of edge pixels in that neighbourhood is counted.

According to formula (2), when N_edge falls outside a predetermined range (too many or too few edge pixels), the edge pixel is removed:

if (N_edge > Thr_H or N_edge < Thr_L) remove the edge pixel   (2)

where Thr_L = min(M, N) and Thr_H = 0.8 * M * N. Finally, in order to enlarge the edge regions, referring again to FIG. 3, this embodiment applies a morphological dilation 123 to the edge pixels, expanding them outward; in this embodiment a cross structuring element is used for the dilation 123 so as to widen the edge regions.

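The de-clutter test of formula (2) and the dilation 123 can be sketched as below. The neighbourhood size (5 x 5 here) is an assumption, and the SciPy morphology call is only a stand-in for whatever the module actually uses.

```python
import numpy as np
from scipy import ndimage

def declutter(edge_mask, m=5, n=5):
    """Formula (2): keep an edge pixel only when the number of edge pixels in its
    M x N neighbourhood lies between Thr_L = min(M, N) and Thr_H = 0.8 * M * N."""
    counts = ndimage.uniform_filter(edge_mask.astype(np.float64),
                                    size=(m, n)) * (m * n)      # N_edge per neighbourhood
    thr_l, thr_h = min(m, n), 0.8 * m * n
    return edge_mask & (counts >= thr_l) & (counts <= thr_h)

def dilate_edges(edge_mask):
    """Morphological dilation 123 with a cross-shaped structuring element."""
    cross = ndimage.generate_binary_structure(2, 1)             # 3x3 cross
    return ndimage.binary_dilation(edge_mask, structure=cross)

mask = np.zeros((16, 16), dtype=bool); mask[8, 2:14] = True     # a thin horizontal edge
print(int(dilate_edges(declutter(mask)).sum()))
```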
In this embodiment, referring to FIGS. 1 and 3, the composite upscaling module 30 enlarges the original input image, the flat regions, the edge regions and the high-frequency portion separately: a first composite interpolation upscaling module 31 performs interpolation upscaling and gain processing on the edge regions and the high-frequency portion, and a second composite interpolation upscaling module 32 performs interpolation upscaling on the original input image and the flat regions. The interpolation upscaling algorithms used in the two modules may be the same, and in actual operation the interpolation in the two modules can be carried out at the same time to save overall computation. Finally, the images output by the first composite interpolation upscaling module 31 and the second composite interpolation upscaling module 32 are combined by the fusion process 33; the whole computation follows formula (3) and produces a high-resolution output image at the predetermined upscaling ratio:

HR(x) = LR(x) * Hp(x) * CUp(x) * Gain + LR(x) * CUp(x)   (3)

where HR(x) is the high-resolution output image, LR(x) is the low-resolution input image, Hp(x) is the high-pass filter function used by the preprocessing module 10, CUp(x) is the interpolation upscaling function used in the first and second composite interpolation upscaling modules 31 and 32, and Gain is the constant gain factor used in the first composite interpolation upscaling module 31.

Generally speaking, the composite interpolation upscaling modules 31 and 32 mainly carry out the enlargement of the input image. The interpolation upscaling techniques in use today can only enlarge an image by even factors; to enlarge by an arbitrary factor they must be combined with down-sampling. Taking enlargement by a factor of two as an example, FIG. 4 shows a high-resolution image obtained by interpolation upscaling: the pixels marked black are copied directly from the low-resolution input image, and the other pixels are interpolated from the black ones; in the interpolation of the present invention the pixels marked gray are computed first and the white pixels last. For the edge regions and flat regions obtained by decomposing the original input image, different interpolation operations are used: a smart (complex) interpolation such as directional interpolation for the edge regions, and a simple interpolation such as bicubic interpolation for the flat regions.

In this embodiment, the first composite interpolation upscaling module 31 interpolates and upscales the edge regions and the high-frequency portion output by the preprocessing module 10, and the second composite interpolation upscaling module 32 interpolates and upscales the original input image and the flat regions output by the preprocessing module 10. The two modules may use the same interpolation upscaling algorithm, as shown in FIG. 5; the only difference is that the result of the first composite interpolation upscaling module 31 is additionally subjected to gain processing.

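A compact reading of formula (3) is sketched below. The assumptions are the same as before: a Gaussian-based high-pass stands in for Hp(x), a cubic spline zoom stands in for CUp(x), Gain is set arbitrarily to 0.5, and the formula's "*" notation is read as applying the filter and upscaler in sequence.

```python
import numpy as np
from scipy import ndimage

def fuse_formula_3(lr, scale=2, gain=0.5, sigma=1.0):
    """Formula (3): HR = CUp(Hp(LR)) * Gain + CUp(LR).
    The upscaled high-frequency detail, scaled by Gain, is added back to the
    plainly upscaled image."""
    lr = lr.astype(np.float64)
    hp = lr - ndimage.gaussian_filter(lr, sigma)        # Hp(x): assumed high-pass filter
    cup = lambda x: ndimage.zoom(x, scale, order=3)     # CUp(x): stand-in interpolator
    return cup(hp) * gain + cup(lr)

lr = np.random.default_rng(0).random((32, 32)) * 255.0
print(fuse_formula_3(lr).shape)                          # (64, 64)
```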
Because the preprocessing module 10 has already marked the edge regions and the flat regions on the original input image, the first composite interpolation upscaling module 31 and the second composite interpolation upscaling module 32 use these marks to perform an edge-region judgment 51 on each pixel of the image: a pixel marked as belonging to an edge region is processed with the complex interpolation 52, while a pixel marked as belonging to a flat region is processed with the simple interpolation (for example, bicubic interpolation) 53. The structure of the complex interpolation 52 is shown in FIG. 6: for the current pixel in an edge region, a directional interpolation 521 and the simple interpolation 53 are carried out at the same time; a confidence and weight computation 522 is then applied to the result of the directional interpolation 521, the principle being that the clearer the edge direction, the higher the confidence of the directional result, and the less clear the direction, the lower the confidence; finally, according to the confidence and weights obtained in 522, the result of the directional interpolation 521 and the result of the simple interpolation 53 are combined by a weighted-sum process 523 and output.

In this embodiment, the directional interpolation 521 first estimates the edge direction of the current edge pixel from its gradient values and then takes other pixels along that edge direction for the interpolation. Taking the gray point x of FIG. 7 as an example, the directional interpolation 521 rotates the interpolation template of the gray pixel x by 45 degrees to obtain the white pixel shown in FIG. 7, as follows. First, referring again to FIG. 7, the twelve pixels P0 to P11 in the neighbourhood of the gray pixel x are used and the interpolated value is computed according to formula (4); because the computation considers six directions and the arrangement of P0 to P11 is close to a circle, it achieves higher accuracy:


DP_x = sum over i = 0..11 of a_i * P_i, with the a_i summing to 1   (4)

where the a_i are the weighting coefficients of the neighbouring pixels P_i. The gradients along the six directions are computed according to formula (5):

Dir0 = |P0 - P3|, Dir1 = |P1 - P2|, Dir2 = |P4 - P7|,
Dir3 = |P5 - P6|, Dir4 = |P8 - P11|, Dir5 = |P9 - P10|   (5)

If some of the neighbouring pixels P0 to P11 in formula (5) have not yet been computed, they can first be estimated with the simple interpolation 53 (for example, bicubic interpolation) to provide stand-in values.

值。然後’使用該等梯度值,依照下列公式(6),計算各 圖元的權重α,: 伽―=min(Z)i>0,…£),广5) (6) lf (Dirnm ~ Dir0) a0=a3= 0.5, other at=〇 =Dirt) a'= a2 = Q,5, other ai=Q .(^min == Dir2) a4=a7= 0.5, other a,. = 〇 =Dir3) a5 = a6 = 0.5,〇齡 a: = 〇 .... If 伽-=Dir4) ag=au = 0.5, other a. = 〇 .if (Dirmn =Dir5) a9 = a10 = 0.5, other = 〇 〇 ,俟計算出該等加權係數《,後,便可根據公式(4), . 再對方向插值運算521的結果和簡單插值運算幻的結 . 果’進行加權求和(weighted sum)處理523,計算出複雜插 值運算52的結果,並予輸出。 此外’本發明為了提高方向插值運算521的魯棒性 (robustness) ’以防止在邊緣方向不明顯的邊緣區域,產生 錯誤的鬼(Ghost)點,參閱第2圖所示,乃針對該第一複 合插值放大模組31和第二複合插值放大模組32中簡單插 值運算及方向插值運算的放大結果,採用下列公式(7), 15 201032592 進行融合處理33,以依預定的放大比例,產生一高組 度的輸出圖像: # HPx = (1 - βίΐχ). SPx + /Mix - DPx ......................... (7)value. Then, using these gradient values, the weight α of each primitive is calculated according to the following formula (6): gamma ==min(Z)i>0,...£), wide 5) (6) lf (Dirnm ~ Dir0 ) a0=a3= 0.5, other at=〇=Dirt) a'= a2 = Q,5, other ai=Q .(^min == Dir2) a4=a7= 0.5, other a,. = 〇=Dir3) A5 = a6 = 0.5, age a: = 〇.... If 伽-=Dir4) ag=au = 0.5, other a. = 〇.if (Dirmn = Dir5) a9 = a10 = 0.5, other = 〇〇 , 俟 calculate the weighting coefficients ", after which, according to the formula (4), then the result of the direction interpolation operation 521 and the simple interpolation operation magic knot. If 'weighted sum processing 523, The result of the complex interpolation operation 52 is calculated and output. Further, 'the present invention is for improving the robustness of the direction interpolation operation 521' to prevent an edge region which is not noticeable in the edge direction, and generates an erroneous ghost (Ghost) point, as shown in Fig. 2, for the first The amplification result of the simple interpolation operation and the direction interpolation operation in the composite interpolation amplification module 31 and the second composite interpolation amplification module 32 is performed by the following formula (7), 15 201032592 to perform a fusion process 33 to generate a predetermined amplification ratio. High-level output image: # HPx = (1 - βίΐχ). SPx + /Mix - DPx ......................... (7 )

,其中^/Λ是最終輸出圖像的圖元值,奶是針斜該 輸入圖像圖元的簡單插值運算的放大結果,是針對^ 輸入圖像的方向插值運算的放大結果,卿是採用下列公 式(8)獲得的融合係數: AWhere ^/Λ is the primitive value of the final output image, milk is the enlarged result of the simple interpolation operation of the input image primitive, and is the enlarged result of the direction interpolation operation for the input image. The fusion coefficient obtained by the following formula (8): A

Dir_mean = (1/6) * (Dir0 + Dir1 + Dir2 + Dir3 + Dir4 + Dir5);
if 2 * Dir_min > Dir_mean, then fMix = 0.25;
else if 4 * Dir_min > Dir_mean, then fMix = 0.5;
else fMix = 1.   (8)

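A sketch of the blend of formulas (7) and (8), reusing the six gradients from the previous sketch, is given below; it is an illustration of the stated rule rather than the module's exact code.

```python
import numpy as np

def fuse_directional(sp, dp, dirs):
    """Formulas (7)-(8): blend the simple result SP_x and the directional result DP_x
    with a fusion coefficient fMix chosen from the spread of the six gradients."""
    dirs = np.asarray(dirs, dtype=np.float64)
    dir_min, dir_mean = dirs.min(), dirs.mean()
    if 2.0 * dir_min > dir_mean:        # no direction clearly dominates: trust the simple result more
        f_mix = 0.25
    elif 4.0 * dir_min > dir_mean:
        f_mix = 0.5
    else:                               # one direction is clearly flattest
        f_mix = 1.0
    return (1.0 - f_mix) * sp + f_mix * dp   # HP_x of formula (7)

print(fuse_directional(50.0, 10.0, [0, 30, 29, 30, 31, 28]))   # 10.0: a clear edge direction
```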
In addition, because the interpolation upscaling functions used in the operations described above all have low-pass characteristics, the edges of the enlarged image tend to blur. To sharpen the edges of the enlarged image, a sharpening process 34 is therefore applied (see FIG. 2), and to avoid overshoot at the edges during sharpening the present invention uses a nonlinear high-pass filter, for example a finite impulse response (FIR) high-pass filter, as in formula (9):

SHP(x) = Median(LocMax(x), LocMin(x), Fir(x))   (9)

where SHP(x) is the sharpening result, Median() is the median operation, LocMax(x) and LocMin(x) are the maximum and minimum within the neighbourhood of the current pixel, and Fir(x) is an FIR filter with a nonlinear high-pass characteristic. Keeping the high-pass result between the local maximum and minimum in this way effectively prevents edge overshoot during sharpening.

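The clamping behaviour of formula (9) can be sketched as follows. The FIR taps are not given in the text, so a common 3 x 3 sharpening kernel is assumed, as is the 3 x 3 neighbourhood used for LocMax and LocMin.

```python
import numpy as np
from scipy import ndimage

def sharpen_clamped(img, size=3):
    """Formula (9): SHP(x) = Median(LocMax(x), LocMin(x), Fir(x)).
    Taking the median of the filtered value with the local minimum and maximum
    clamps it into the local range, which suppresses overshoot at edges."""
    img = img.astype(np.float64)
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=np.float64)   # assumed sharpening FIR kernel
    fir = ndimage.convolve(img, kernel, mode="nearest")
    loc_max = ndimage.maximum_filter(img, size=size)
    loc_min = ndimage.minimum_filter(img, size=size)
    return np.median(np.stack([loc_max, loc_min, fir]), axis=0)

img = np.tile(np.linspace(0, 255, 32), (32, 1))
print(float(sharpen_clamped(img).max()) <= 255.0)   # True: clamped, so no overshoot past 255
```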
In this way, when the method of the present invention enlarges a low-resolution input image for display on a high-resolution video device, it not only requires less computation than edge-based methods but also, with low complexity and high speed, produces an enlarged output that is anti-aliased and has sharp edges, rich detail and strong contrast, so that enlarging a digital image introduces little distortion and yields a high-resolution result in which the relevant features remain recognizable.

When the upscaling method described above is implemented in software, its execution can be accelerated with the help of hardware. Specifically, the preprocessing module 10 and the composite upscaling module 30 of FIG. 1 are implemented as program code that can be executed by a graphics processing unit (GPU). If the GPU supports general-purpose parallel computation, that is, if it is a general-purpose GPU (GPGPU), the code runs faster and the upscaling of images and video becomes more efficient. For example, with a GPU that supports CUDA (Compute Unified Device Architecture, the parallel-computing technology developed by NVIDIA), the operations performed by the preprocessing module 10 and the composite upscaling module 30, such as the high-pass filtering 11, the simple interpolation 53, the complex interpolation 52, the fusion 33 and the sharpening 34, can be written as CUDA kernel functions; when a CUDA-capable GPU executes these kernels it launches many threads that run the same kernel concurrently, and this parallelism raises efficiency substantially.

The upscaling method described above can be applied to image post-processing, for example in video playback software, as shown in FIG. 8A. FIG. 8A shows the architecture of a video playback application in which a video player 80 contains a video decoder 81, the preprocessing module 10, the composite upscaling module 30 and a video renderer 82. The video decoder 81 obtains image data from various video sources and decodes it; the preprocessing module 10 and the composite upscaling module 30 enlarge the decoded image data; and the video renderer 82 draws and displays the enlarged data using a rendering technology such as Microsoft's Direct3D.

In general, when the architecture of FIG. 8A runs on a computer, the video decoder 81 stores the decoded image data in the computer's system memory, and the preprocessing module 10 and the composite upscaling module 30 enlarge the decoded data held there; the two modules may also be integrated into the video decoder 81. If, however, the computer has a CUDA-capable GPU and the preprocessing module 10 and the composite upscaling module 30 are implemented as CUDA kernel code, integrating them into the video renderer 82, as shown in FIG. 8B, further improves the hardware acceleration. When the video decoder 81 passes decoded image data to the video renderer 82, the renderer creates Direct3D surfaces (Direct3D is used here as the example) for displaying that data, and these surfaces are stored in display memory. CUDA provides a Direct3D interoperability mechanism, so the preprocessing module 10 and the composite upscaling module 30 can process the Direct3D surfaces directly; integrating them into the video renderer 82 lets them enlarge the Direct3D surfaces held in display memory directly, which improves efficiency and avoids wasting time transferring data between system memory and display memory.

FIG. 9 is a block diagram of a specific embodiment of the video playback architecture of FIG. 8B, in which a video player 90 contains a video decoder 91 and a video renderer 92, and the video renderer 92 contains an upscaling module 921 and a mixer and presenter 922. The architecture of FIG. 9 is realized with a CUDA-capable graphics processor, and the upscaling module 921 is implemented as CUDA kernel code. After the video renderer 92 receives decoded image data from the video decoder 91, it creates Direct3D surfaces in display memory; the upscaling module 921 extracts the image texture from the Direct3D surface (block 9210) and separates it into a luminance component (the Y component) and chrominance components (the U/V components), which are upscaled differently. The luminance component passes in turn through de-noising (block 9211), de-blocking (block 9212), edge interpolation (block 9213), anti-aliasing (block 9214) and edge sharpening (block 9215), while the chrominance components are upscaled by bilinear interpolation (block 9216). Finally, the upscaling module 921 sets the image texture to be displayed from the upscaled luminance and chrominance components (block 9217) and sends it to the mixer and presenter 922 for drawing and display.

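A schematic of this renderer-side split between the luma and chroma planes is sketched below. Every stage is only a placeholder filter chosen for illustration (the de-blocking step is folded into the median filter), and the CUDA kernels and Direct3D surface access of the actual embodiment are omitted.

```python
import numpy as np
from scipy import ndimage

def upscale_frame(y, u, v, scale=2):
    """Schematic of FIG. 9: the luma plane goes through denoising, edge-aware upscaling,
    anti-aliasing and sharpening; the chroma planes only get bilinear-style upscaling."""
    y = ndimage.median_filter(y.astype(np.float64), size=3)          # de-noising 9211 / de-blocking 9212 (placeholder)
    y = ndimage.zoom(y, scale, order=3)                              # edge interpolation 9213 (stand-in)
    y = ndimage.gaussian_filter(y, 0.5)                              # anti-aliasing 9214 (placeholder)
    y = np.clip(y + (y - ndimage.gaussian_filter(y, 1.0)), 0, 255)   # edge sharpening 9215 (placeholder)
    u = ndimage.zoom(u.astype(np.float64), scale, order=1)           # bilinear interpolation 9216
    v = ndimage.zoom(v.astype(np.float64), scale, order=1)
    return y, u, v                                                   # set as the display texture (9217)

y = np.random.default_rng(1).random((16, 16)) * 255
u = v = np.full((8, 8), 128.0)
print([p.shape for p in upscale_frame(y, u, v)])   # [(32, 32), (16, 16), (16, 16)]
```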
For each of blocks 9211 to 9216, an appropriate CUDA kernel function can be written, so that the graphics processor executes these kernels in parallel and greatly raises the efficiency of the upscaling module 921.

The above describes the present invention in detail by way of embodiments and is not intended to limit its scope. Those skilled in the art may make various modifications in light of the above disclosure without departing from the spirit and scope of the invention.

[Brief Description of the Drawings]

FIG. 1 is a schematic diagram of the architecture of the image processing device of the present invention;
FIG. 2 is a schematic diagram of the detailed architecture of the image processing device of the present invention;
FIG. 3 is a schematic diagram of the detailed architecture of the image decomposition process of the present invention;
FIG. 4 is a schematic diagram of a high-resolution image obtained by interpolation upscaling in a preferred embodiment of the present invention;
FIG. 5 is a schematic diagram of the architecture of the first and second composite interpolation upscaling modules in the embodiment;
FIG. 6 is a schematic diagram of the detailed architecture of the complex interpolation upscaling operation in the embodiment;
FIG. 7 is a schematic diagram of a high-resolution image obtained by the directional interpolation in the embodiment;
FIG. 8A is a schematic diagram of the architecture of a video playback application;
FIG. 8B is a schematic diagram of the preprocessing module and the composite upscaling module of FIG. 8A integrated into the video renderer; and
FIG. 9 is a block diagram of a specific embodiment of the video playback architecture of FIG. 8B.

[Description of Reference Numerals]

image processing device 1; preprocessing module 10; high-pass filtering process 11; image decomposition process 12; image decomposition 121; de-clutter process 122; morphological dilation 123;
composite upscaling module 30; first composite interpolation upscaling module 31; second composite interpolation upscaling module 32; fusion process 33; sharpening process 34;
edge-region judgment 51; complex interpolation 52; simple interpolation 53; directional interpolation 521; confidence and weight computation 522; weighted-sum process 523;
video player 80, 90; video decoder 81, 91; video renderer 82, 92; upscaling module 921; mixer and presenter 922;
extract image texture 9210; de-noising 9211; de-blocking 9212; edge interpolation 9213; anti-aliasing 9214; edge sharpening 9215; bilinear interpolation 9216; set image texture 9217

Claims (1)

201032592 七、申請專利範圍: 1· 一種圖像及視頻的放大方法,係應用在一圖像處理裝 置上,以在一數位圖像被輸入該圖像處理裝置後該 圖像處理裝置能依據一放大比例,輸出一放大的數= 圖像,該方法包括: 提供一預處理模組,用以對該輸入圖像執行一高通濾 ' 波處理,以提取該輸入圖像的高頻部分,該高頻; 分是用於對該輸入圖像的放大結果進行高頻補償, © 該預處理模組並對該輸入圖像執行一圖像分解處 理,該圖像分解處理係利用一梯度運算元提取該輸 入圖像的圖像梯度,且根據預設之一固定門限,將 該輸入圖像分解成平坦區域和邊緣區域,並在該輸 入圖像上標記該二區域; 提供一複合放大模組,其對於原輸入圖像及該平坦區 域係使用一簡單插值運算,進行放大處理;對於該 邊緣區域及該高頻部分係分別使用一複雜插值運算 ' 和該簡單插值運算,進行放大處理;接著,對該複 雜插值運算的結果,進行可信度處理,其中,邊緣 方向愈明確’該複雜插值運算的結果的可信度愈 高’否則,該複雜插值運算的結果的可信度愈低; 最後,根據該可信度,對該複雜插值運算的結果和 該簡單插值運算的結果進行加權求和;及 執行一融合處理,以將原輸入圖像、平坦區域、邊緣 區域及向頻部分的放大結果,融合成為該輸出圖像。 2·如中明專利範園第1項所述的方法,該複合放大模組 23 201032592 包括一第一複合插值放大模組及一第二複合插值放大 模組’其中該第一複合插值放大模組係對輸入圖像的 邊緣區域及高頻部分執行插值放大及增益處理,該第 二複合插值放大模組係對原輸入圖像及平坦區域,執 行插值放大處理,其中第一及第二複合插值放大模組 中使用的插值運算法相同’該融合處理係對該第一複 • 合插值放大模組及第二複合插值放大模組輸出的圖像 進行融合,該插值放大處理及融合處理是依下列公 Ο 式,以預定的放大比例,產生高解析度的輸出圖像: HR(x) = LR(x) * Hp{x) * CUp(x) · Gain + LR{x) * CUp{x) 其中⑻疋南解析度的輸出圖像’ 是低解析度的輸 入圖像’你W是該預處理模組使用的高通濾波函數, ct/pW是該第一和第二複合插值放大模組使用的插值 放大函數,是該第一複合插值放大模組使用的常 數增益因數。 3. 如申請專利範圍第2項所述的方法,其中該圖像分解 〇 處理係利用該梯度運算元’提取該輸入圖像的圖像梯 度,然後,根據人眼對梯度變化的敏感程度,使 用該固定門限7VWD,根據下列公式分解出平坦區域和 邊緣區域’並對輸入圖像進行標記: Label{x) = ' ^Grd(x) > ThreshD 〇 [平坦,若 4. 如申請專利範圍第3項所述的方法,其中該複雜插值 運算係使用一方向插值運算,對邊緣區域及高頻部分 的圖元,進行放大處理。 24 201032592 5.如申請專利範圍第4項所述的方法,其中該方向插值 運算係根據某一邊緣圖元的梯度值,估算出其邊緣方 向,然後’沿著邊緣方向獲取鄰域内的圖元,進行方 向插值運算。 6·如申請專利範圍第5項所述的方法,其中該方向插值 * 運算是先根據處於邊緣區域的當前圖元的梯度值,估 - 算出其邊緣方向,然後,沿著其邊緣方向獲取其他圖 元,以依照下列公式’對其進行插值計算: ODPx = y'alPi ^λ,=1 i=〇 9 i=0 其中4為該等鄰域圖元凡的加權係數,依照下列公式, 計算其六個方向上的梯度值: Dir0=iP0-P3\ , Dir}=\Pl-P2\ , Dir2^P4~p7\ , Diri=\P5-P6\ , /)//-4=1/¾-/iil , D/r5 I 然後,使用該等梯度值,依照下列公式,計算各圖元A 的加權係數A :201032592 VII. Patent application scope: 1. An image and video enlargement method is applied to an image processing device, and the image processing device can be based on a digital image after being input into the image processing device. Enlarging the ratio, outputting an enlarged number = image, the method comprising: providing a preprocessing module for performing a high pass filtering on the input image to extract a high frequency portion of the input image, The high frequency is used to perform high frequency compensation on the amplified result of the input image, and the preprocessing module performs an image decomposition process on the input image, and the image decomposition processing uses a gradient operation element Extracting an image gradient of the input image, and decomposing the input image into a flat area and an edge area according to one of preset presets, and marking the two areas on the input image; providing a composite amplification module And performing amplifying processing on the original input image and the flat region using a simple interpolation operation; using a complex interpolation operation for the edge region and the high frequency portion respectively Simple interpolation operation is performed to perform amplification processing; then, the result of the complex interpolation operation is subjected to credibility processing, wherein the edge direction is more clear 'the higher the credibility of the result of the complex interpolation operation', otherwise, the complex interpolation operation The lower the credibility of the result is; finally, according to the credibility, the result of the complex interpolation operation and the result of the simple interpolation operation are weighted and summed; and a fusion process is performed to flatten the original input image The enlarged results of the region, the edge region, and the frequency portion are merged into the output image. 2. 
The method of claim 1, wherein the composite amplifier module 23 201032592 includes a first composite interpolation amplification module and a second composite interpolation amplification module, wherein the first composite interpolation amplification module The group performs interpolation amplification and gain processing on the edge region and the high frequency portion of the input image, and the second composite interpolation amplification module performs interpolation and amplification processing on the original input image and the flat region, wherein the first and second composites The interpolation algorithm used in the interpolation amplification module is the same. The fusion processing system fuses the images output by the first complex interpolation amplification module and the second composite interpolation amplification module, and the interpolation amplification processing and the fusion processing are Produce a high-resolution output image at a predetermined magnification ratio according to the following convention: HR(x) = LR(x) * Hp{x) * CUp(x) · Gain + LR{x) * CUp{ x) where (8) the output image of the Minnan resolution is a low-resolution input image. You W is the high-pass filter function used by the pre-processing module, and ct/pW is the first and second composite interpolation amplification modes. The interpolation amplification function used by the group is Amplifying the first composite interpolation constant gain factor module used. 3. The method of claim 2, wherein the image decomposition process uses the gradient operator to extract an image gradient of the input image, and then, according to the sensitivity of the human eye to the gradient change, Using the fixed threshold of 7VWD, the flat and edge regions are resolved according to the following formula' and the input image is marked: Label{x) = ' ^Grd(x) > ThreshD 〇 [flat, if 4. For patent application The method according to Item 3, wherein the complex interpolation operation performs a magnification process on the edge region and the high-frequency portion of the primitive using a one-direction interpolation operation. The method of claim 4, wherein the direction interpolation operation estimates the edge direction according to the gradient value of a certain edge primitive, and then acquires the primitive in the neighborhood along the edge direction. , perform direction interpolation. 6. The method of claim 5, wherein the direction interpolation* operation first estimates the edge direction based on the gradient value of the current primitive in the edge region, and then obtains other directions along the edge direction thereof. The primitive is interpolated according to the following formula: ODPx = y'alPi ^λ,=1 i=〇9 i=0 where 4 is the weighting coefficient of the neighboring primitives, calculated according to the following formula The gradient values in six directions: Dir0=iP0-P3\ , Dir}=\Pl-P2\ , Dir2^P4~p7\ , Diri=\P5-P6\ , /)//-4=1/3⁄4 -/iil , D/r5 I Then, using the gradient values, calculate the weighting factor A of each primitive A according to the following formula: = min(Dir0,…£>irs) 若(伽*min =伽。)= α3 = 0.5,其他 = 〇 若(伽-=伽丨)% = α2 = 0.5,其他α, = 0 < 若⑽‘ ==Λ>2) α4 =七=〇.5,其他 % = 〇 若⑽= Λ>3) α5 = = 0.5,其他 & = Ο ^(Dirnm = Dir,) a%=au= 0.5, a, = 0 .若伽_ =伽·5) a9 = 丨〇 = 0.5,其他 a, = 0 在计算出該等加獅數〜後,再對方向插值運算的結果 和簡單插值運算的結果,進行加權求和處理,計算 出複雜插值運算的結果^^,並予輸出。 7.如申μ專利範圍第6項所述的方法,其中針對該第一 複合插值放大模組和第二複合插值放大模組中簡單插 25 201032592 值運算及方向插值運算的放大結果,係採用下列公 式’進行融合處理,以依預定的放大比例,產生該高 解析度的輸出圖像: HPx = (1 - β4ίχ). SPX + βίϊχ. 
DPx 其中///¾是最終輸出圖像的圖元值,奶是針對該輸入 圖像的圖元的簡單插值放大運算的結果,是針對 該輸入圖像的圖元的方向插值運算的結果,你ά是採 用下列公式獲得的融合係數:= min(Dir0,...£>irs) If (gamma *min = gamma.) = α3 = 0.5, other = 〇 if (gamma - = gamma)% = α2 = 0.5, other α, = 0 < (10) ' ==Λ>2) α4 = seven = 〇.5, other % = 〇 if (10) = Λ > 3) α5 = = 0.5, other & = Ο ^(Dirnm = Dir,) a%=au= 0.5 , a, = 0. If gamma_ = gamma·5) a9 = 丨〇 = 0.5, other a, = 0 After calculating the number of lions added, the result of the interpolation operation and the result of the simple interpolation operation The weighted summation process is performed, and the result of the complex interpolation operation is calculated and output. 7. The method of claim 6, wherein the amplification result of the simple interpolation 25 201032592 value calculation and the direction interpolation operation is used for the first composite interpolation amplification module and the second composite interpolation amplification module. The following formula 'fuses the fusion to produce the high-resolution output image at a predetermined magnification: HPx = (1 - β4ίχ). SPX + βίϊχ. DPx where ///3⁄4 is the primitive of the final output image The value, milk is the result of a simple interpolation and enlargement operation for the primitive of the input image, and is the result of the interpolation operation for the direction of the primitive of the input image, and you are using the fusion coefficient obtained by the following formula: ^rmean ~ / 6 矿(伽-.2 > U 錄=0.25; • else if {Dir^ · 4 > Dirmean) βίχχ = 〇 5: else βίιχ = \. 8·如申s奮專利範圍第7項所述的方法,其中該圖像分解 處理還對邊緣區域内的邊緣圖元進行一去雜亂處理, 以去除掉雜亂的邊緣圖元。 ❹ 9.如申请專利範圍第8項所述的方法,其中該去雜亂處 理係先提取某一邊緣圖元的一預定範圍的鄰域w〜,再 統計該鄰域内邊緣圖元的數目心如,且根據下列公式, 在判斷出該鄰域内邊緣圖元的數目不在—預定範圍内 時,即刪除該邊緣圖元: 若(iVetfee > 7¾也或< 77yi:) ’刪除該邊緣圖元; 其中 ThrL = min(M,N) , ThrH =d.i-M N 〇 10. 如申請專利範圍第9項所述的方法,其中該預處理寺萬 組在完成該去雜亂處理後,會對該邊緣區域進行—带' 態學的膨脹處理,以擴大該邊緣區域。 v 11. 如申請專利範圍第10項所述的方法,其中該形態學& 26 201032592 脹處理係使用十字結構元對邊緣圖元向四周擴大邊緣 區域。 12. 如申請專利範圍第11項所述的方法,尚包括使用一非 線性高通濾波的有限脈波響應濾波器,對該輸出圖像 進行一銳化處理,將高通濾波的結果限制在局部鄰域 内的最大及最小值之間。 13. 如申請專利範圍第12項所述的方法,其中該有限脈波 響應濾波器係採用下列公式進行銳化處理: SHP(x) = Medain{LocMctx{x).LocMin{x), Fir(x)) 其中是銳化的結果,函數是取中值操作, 和是當前圖元鄰域内的最大和最小 值,是該有限脈波響應濾波器。 M.如申請專利範圍第1項所述的方法,其中該簡單插值 運算係一雙立方插值運算。 15.如申請專利範圍第1項所述的方法,其中該梯度運算 元係sobel運算元。 16·如申請專利範圍第1項所述的方法,其中該預處理模 組與該複合放大模組係以一程式碼實施,該程式碼‘ 藉由一圖像處理器(GPU)來執行’其中,該圖像處理器 支援並行運算,以提高該程式碼的執行效率。 17·如申4專利範®第I6項所述的方法,其巾該圖像處理 器支援CUDA技術。 18.如申請專利範圍第17項所述的方法,其中該預處理模 組與該複合放大模組係整合於一視頻渲染器中,其 中,該視頻渲染器於一顯示記憶體中建立〇1^以31)表 27 201032592 面’而該贼賴_賴纽蝴赠該Direct3D 表面提取該輸入圖像。 9.種圖像處理裝置,用以在—數蝴像撕入該圖像 處理裝置後,能依據-放大比例,輸出一放大的數位 圖像’該圖像處理裝置包括: 一預處理模組’用以職輸人圖像執行—高通滤波處 理,以提取該輸入圖像的高頻部分,該高頻部分是 用於對該輸人目像賊A絲断冑賴償,該預 處理模組並對該輸入圖像執行一圖像分解處理,該 圖像分解處理係利用一梯度運算元提取該輸入圖像 的圖像梯度’且根據預設之-固定門限,將該輸入 圖像分解成平坦區域和邊賴域,並在該輸入圖像 上標記該二區域; 一複合放大模組,其對於原輸入圖像及該平坦區域係 ,用一簡單插值運算,進行放大處理;對於該邊緣 區域及該高頻部分係分別使用一複雜插值運算和該 簡單插值運算’進行放大處理;接著,對該複雜插 值運算的結果,進行可信度處理,其中,邊緣方向 愈明確’該複雜插值運算的結果的可信度愈高,否 則’該複雜播值運算的結果的可信度愈低;最後, 根據該可信度’對該複雜插值運算的絲和該簡單 ,值運算的結果断加權求和;及 -融合處理單元’用以將原輸入圖像、平坦區域、邊 緣區域及高頬部分的放大結果,融合成為該輸出圖 像。 28 201032592 20. 如申請專利範圍第19項所述的圖像處理裝置,其中該 簡單插值運算係一雙立方插值運算。 21. 如申請專利範圍第19項所述的圖像處理裝置,其中該 複雜插值運算係一方向插值運算。 22. 如申請專利範圍第19項所述的圖像處理裝置,其中該 梯度運算元係sobel運算元。^rmean ~ / 6 mine (gamma-.2 > U record = 0.25; • else if {Dir^ · 4 > Dirmean) βίχχ = 〇5: else βίιχ = \. 8·如申s奋 patent scope 7th The method of the item, wherein the image decomposition process further performs a de-scrambling process on the edge primitives in the edge region to remove cluttered edge primitives. 9. 
9. The method of claim 8, wherein the de-cluttering process first extracts a neighborhood W_{M×N} of a predetermined size around an edge pixel, counts the number N_edge of edge pixels within that neighborhood, and deletes the edge pixel when that number falls outside a predetermined range, according to the following rule:

if (N_edge > Thr_H or N_edge < Thr_L): delete the edge pixel;

where Thr_L = min(M, N) and Thr_H is a fixed proportion of M · N.

10. The method of claim 9, wherein the pre-processing module, after completing the de-cluttering process, performs a morphological dilation on the edge region so as to expand the edge region.

11. The method of claim 10, wherein the morphological dilation uses a cross-shaped structuring element to expand the edge region outward around each edge pixel.

12. The method of claim 11, further comprising using a non-linear high-pass finite impulse response (FIR) filter to perform a sharpening process on the output image, the result of the high-pass filtering being limited between the maximum and minimum values within a local neighborhood.

13. The method of claim 12, wherein the finite impulse response filter performs the sharpening according to the following formula:

SHP(x) = Median(LocMax(x), LocMin(x), Fir(x))

where SHP(x) is the sharpening result, Median() is the median operation, LocMax(x) and LocMin(x) are the maximum and minimum values within the neighborhood of the current pixel, and Fir(x) is the output of the finite impulse response filter.

14. The method of claim 1, wherein the simple interpolation operation is a bicubic interpolation operation.

15. The method of claim 1, wherein the gradient operator is the Sobel operator.

16. The method of claim 1, wherein the pre-processing module and the composite amplification module are implemented as program code executed by a graphics processing unit (GPU), the GPU supporting parallel computation so as to improve the execution efficiency of the program code.

17. The method of claim 16, wherein the graphics processing unit supports CUDA technology.

18. The method of claim 17, wherein the pre-processing module and the composite amplification module are integrated in a video renderer, the video renderer creating a Direct3D surface in a display memory, and the pre-processing module extracting the input image from the Direct3D surface.
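The clamped sharpening of claims 12 and 13 amounts to clipping the FIR output into the local minimum/maximum range, since the median of three values with LocMin ≤ LocMax is exactly that clip. A possible sketch using SciPy is shown below; the 3×3 window size and the example sharpening kernel are assumptions, as the claims do not specify them.

```python
import numpy as np
from scipy.ndimage import convolve, maximum_filter, minimum_filter

# Example sharpening FIR kernel (an assumption; the claims give no coefficients).
FIR_KERNEL = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=np.float32)

def clamped_sharpen(img, size=3):
    """SHP(x) = Median(LocMax(x), LocMin(x), Fir(x)) as in claim 13."""
    img = img.astype(np.float32)
    fir = convolve(img, FIR_KERNEL, mode='nearest')   # Fir(x)
    loc_max = maximum_filter(img, size=size)          # LocMax(x)
    loc_min = minimum_filter(img, size=size)          # LocMin(x)
    # Median of the three values == clip Fir(x) into [LocMin(x), LocMax(x)],
    # which sharpens edges without overshoot or ringing.
    return np.clip(fir, loc_min, loc_max)
```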
19. An image processing device for outputting a magnified digital image at a magnification ratio after a digital image is input into the image processing device, the image processing device comprising:

a pre-processing module for performing a high-pass filtering process on the input image to extract the high-frequency portion of the input image, the high-frequency portion being used to compensate for the high-frequency loss incurred when the input image is magnified; the pre-processing module further performing an image decomposition process on the input image, the image decomposition process using a gradient operator to extract the image gradient of the input image and, according to a preset fixed threshold, decomposing the input image into a flat region and an edge region and marking the two regions on the input image;

a composite amplification module which magnifies the original input image and the flat region using a simple interpolation operation, and magnifies the edge region and the high-frequency portion using a complex interpolation operation together with the simple interpolation operation; the result of the complex interpolation operation is then subjected to confidence processing, in which the more definite the edge direction, the higher the confidence assigned to the result of the complex interpolation operation, and the less definite the edge direction, the lower that confidence; finally, the result of the complex interpolation operation and the result of the simple interpolation operation are weighted and summed according to that confidence; and

a fusion processing unit for fusing the magnified results of the original input image, the flat region, the edge region and the high-frequency portion into the output image.

20. The image processing device of claim 19, wherein the simple interpolation operation is a bicubic interpolation operation.

21. The image processing device of claim 19, wherein the complex interpolation operation is a direction interpolation operation.

22. The image processing device of claim 19, wherein the gradient operator is the Sobel operator.
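To tie the device claims together, the following rough sketch walks through one possible reading of the overall flow of claim 19 and the formula of claim 2, interpreting "*" as applying an operator and CUp as the upscaling step: extract the high-frequency portion, label edge versus flat pixels from the gradient, upscale both branches, and fuse with a gain on the high-frequency part. Cubic spline resampling stands in for the simple interpolation, the Laplacian kernel and the thresholds are placeholders, and the directional branch is omitted, so this is a simplification rather than the patented implementation.

```python
import numpy as np
from scipy.ndimage import convolve, sobel, binary_dilation, zoom

HP_KERNEL = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=np.float32)   # placeholder high-pass filter

def upscale(lr, scale, gain=1.0, grad_thresh=30.0):
    """Rough sketch of the claim-19 pipeline:
    HR ≈ CUp(LR) + Gain * CUp(HP(LR)), applied on (dilated) edge pixels only."""
    lr = lr.astype(np.float32)
    # 1. Pre-processing: high-frequency portion and edge/flat labelling (claims 3, 10, 11).
    hf = convolve(lr, HP_KERNEL, mode='nearest')
    grad = np.hypot(sobel(lr, axis=0), sobel(lr, axis=1))   # Sobel gradient (claim 22)
    edge = binary_dilation(grad > grad_thresh)               # fixed threshold + dilation
    # 2. Composite upscaling of both branches (cubic spline as the "simple" path).
    up_lr = zoom(lr, scale, order=3)
    up_hf = zoom(hf, scale, order=3)
    up_edge = zoom(edge.astype(np.float32), scale, order=0) > 0.5
    # 3. Fusion: add the gain-boosted high frequencies back on edge pixels.
    return up_lr + np.where(up_edge, gain * up_hf, 0.0)
```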
TW98106352A 2009-02-27 2009-02-27 Method for upscaling images and videos and associated image processing device TWI384876B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98106352A TWI384876B (en) 2009-02-27 2009-02-27 Method for upscaling images and videos and associated image processing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98106352A TWI384876B (en) 2009-02-27 2009-02-27 Method for upscaling images and videos and associated image processing device

Publications (2)

Publication Number Publication Date
TW201032592A true TW201032592A (en) 2010-09-01
TWI384876B TWI384876B (en) 2013-02-01

Family

ID=44854941

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98106352A TWI384876B (en) 2009-02-27 2009-02-27 Method for upscaling images and videos and associated image processing device

Country Status (1)

Country Link
TW (1) TWI384876B (en)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7480071B2 (en) * 2003-04-14 2009-01-20 Lexmark International, Inc Maximizing performance in a hardware image scaling module
TWI342154B (en) * 2006-05-17 2011-05-11 Realtek Semiconductor Corp Method and related apparatus for determining image characteristics
TW200802171A (en) * 2006-06-20 2008-01-01 Sunplus Technology Co Ltd Image scaling system for saving memory
US20080055338A1 (en) * 2006-08-30 2008-03-06 Ati Technologies Inc. Multi-stage edge-directed image scaling

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI490818B (en) * 2013-01-02 2015-07-01 Chunghwa Picture Tubes Ltd Method and system for weighted image enhancement
TWI723123B (en) * 2017-01-23 2021-04-01 香港商斑馬智行網絡(香港)有限公司 Image fusion method, device and equipment
CN109919847A (en) * 2017-12-13 2019-06-21 彩优微电子(昆山)有限公司 Improve the method for enlarged drawing quality
TWI665640B (en) * 2017-12-13 2019-07-11 大陸商彩優微電子(昆山)有限公司 Method for improving quality of enlarged image
CN109919847B (en) * 2017-12-13 2023-03-14 彩优微电子(昆山)有限公司 Method for improving quality of amplified image

Also Published As

Publication number Publication date
TWI384876B (en) 2013-02-01

Similar Documents

Publication Publication Date Title
US8131117B2 (en) Method for magnifying images and videos and associated image processing device
CN101815157B (en) Image and video amplification method and relevant image processing device
Ma et al. Handling motion blur in multi-frame super-resolution
US9485432B1 (en) Methods, systems and apparatuses for dual-camera based zooming
Sun et al. Image super-resolution using gradient profile prior
Wang et al. Edge-directed single-image super-resolution via adaptive gradient magnitude self-interpolation
US9772771B2 (en) Image processing for introducing blurring effects to an image
Sun et al. Gradient profile prior and its applications in image super-resolution and enhancement
Kim et al. Curvature interpolation method for image zooming
Parsania et al. A review: Image interpolation techniques for image scaling
Chatterjee et al. Application of Papoulis–Gerchberg method in image super-resolution and inpainting
US20100260433A1 (en) System and method for scaling images
US8406518B2 (en) Smoothed local histogram filters for computer graphics
US20110097011A1 (en) Multi-resolution image editing
US7679620B2 (en) Image processing using saltating samples
TWI384876B (en) Method for upscaling images and videos and associated image processing device
WO2018040437A1 (en) Picture processing method and apparatus
US8249395B2 (en) System, method, and computer program product for picture resizing
Kraus et al. GPU-based edge-directed image interpolation
TW200949759A (en) Image processing apparatus and method
Abu et al. Image projection over the edge
He et al. Joint motion deblurring and superresolution from single blurry image
CN101847252B (en) Image amplification method for maintaining image smoothness
Abu et al. Image super-resolution via discrete tchebichef moment
Feng et al. Perceptual thumbnail generation