TWI419059B - Method and system for example-based face hallucination - Google Patents

Method and system for example-based face hallucination

Info

Publication number
TWI419059B
TWI419059B (application TW099119334A)
Authority
TW
Taiwan
Prior art keywords
resolution
face
training
image
low
Prior art date
Application number
TW099119334A
Other languages
Chinese (zh)
Other versions
TW201145181A (en)
Inventor
Chia Wen Lin
Chih Chung Hsu
Original Assignee
Ind Tech Res Inst
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ind Tech Res Inst filed Critical Ind Tech Res Inst
Priority to TW099119334A priority Critical patent/TWI419059B/en
Priority to US12/858,442 priority patent/US8488913B2/en
Publication of TW201145181A publication Critical patent/TW201145181A/en
Application granted granted Critical
Publication of TWI419059B publication Critical patent/TWI419059B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • G06V40/169Holistic features and representations, i.e. based on the facial image taken as a whole

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Description

Example-based face super-resolution method and system

The present disclosure relates to an example-based face hallucination (face super-resolution) method and system.

Face super-resolution has many applications, such as surveillance, face recognition, facial expression estimation, and facial age estimation. Face images differ from the general low-resolution-to-high-resolution problem because a face has structure: small errors in detail regions can make the whole high-resolution face implausible. For example, when an eye becomes non-elliptical or the shape of the mouth is distorted, the error measured over the entire image is very small, yet its perceptual impact is large.

Owing to the structured nature of faces, dedicated face super-resolution techniques have been proposed. For example, U.S. Patent No. 7,379,611 discloses an image super-resolution method that infers high-resolution detail for a low-resolution input image by extracting primal sketch priors from the input. U.S. Patent Publication No. 2008/0267525 discloses a method that performs super-resolution magnification by extracting edge features from an image.

In the paper "Image Hallucination Using Neighbor Embedding over Visual Primitive Manifolds," Wei Fan et al. propose a learning-based image hallucination method that extracts primitive features from an image and combines the primitive features of multiple training sets to synthesize the high-resolution primitive features of the target image. In "LPH super-resolution and neighbor reconstruction for residue compensation," Yueting Zhuang et al. propose a two-stage face super-resolution technique that exploits a property of manifold learning: every input image has a similar distribution in the manifold domain. Accordingly, the linear combination coefficients of patches in the manifold domain are computed from the low-resolution image, and the same linear combination coefficients are then used with radial basis functions to combine a high-resolution image.

In "An example-based face hallucination method for single-frame, low-resolution facial images," Jeong-Seon Park et al. propose an example-based face super-resolution method that uses Principal Component Analysis (PCA) to decompose low-resolution images into trained basis images and aligns the faces by warping. As the example flow of the first figure shows, a linear combination of the low-resolution basis images is computed, and the same combination is applied to the high-resolution basis images to obtain the high-resolution face.

In "Face hallucination using OLPP and Kernel Ridge Regression," B. G. Vijay Kumar et al. propose a face super-resolution technique that uses the Orthogonal Locality Preserving Projection (OLPP) method from manifold learning to reduce the dimensionality of small face-image patches, estimates the most likely high-resolution patches with a probabilistic model in the low-dimensional space, and then corrects the reconstructed face image with a Kernel Ridge Regression (KRR) prediction model.

Embodiments of the present disclosure provide an example-based face super-resolution method and system.

In one embodiment, the disclosure is directed to an example-based face super-resolution method. The method comprises: preparing a training database containing a plurality of training images and obtaining a low-resolution face image to be enlarged; using manifold learning to project the training images I_train in the training database and the low-resolution face image into the same manifold domain, where the projected low-resolution face image is denoted y_L and the projected training images are denoted y_train; selecting, from the N projected training images y_train, a training set best matching y_L; learning a set of basis images by applying basis decomposition to this training set and to y_L, including the high- and low-resolution prototype faces of the training set and the low-resolution prototype face of y_L; and reconstructing a high-resolution face image of the low-resolution face image with this set of basis images. The difference between the high-resolution prototype faces of the training set and the prototype face of y_L satisfies a threshold requirement.

In another embodiment, the disclosure is directed to an example-based face super-resolution system. The system comprises: a training database that collects and stores a plurality of training images; a projection module that receives the training images, derives a projection matrix through a manifold learning method, and projects the training images and an input low-resolution face image into the same manifold domain, yielding N projected training images y_train and a projected low-resolution face image y_L; a matching module that selects, from the N projected training images y_train, a training set of k images matching y_L, k ≦ N; a basis decomposition module that applies basis decomposition to the training set to extract a low-resolution prototype face and a set of high-resolution prototype faces of the training set, where the difference between the low-resolution prototype face and the set of high-resolution prototype faces satisfies a threshold requirement; and a face hallucination module that treats the threshold-satisfying low-resolution prototype face and the set of high-resolution prototype faces as a set of basis images to reconstruct a high-resolution face image of the low-resolution face image.

The above and other objects and advantages of the present invention are described in detail below with reference to the accompanying drawings, the detailed description of the embodiments, and the claims.

In the ordinary spatial domain, differences between faces often cannot be expressed clearly. In the present disclosure, a face database is therefore transformed into another space that can represent face differences, namely the manifold domain. Embodiments of the disclosure project both the input image and the face images in the training database into the manifold domain through manifold learning, select suitable face images, for example by basis decomposition, to build prototype faces, and then obtain a high-resolution face image by combining appropriate combination parameters with the prototype faces.

In general, the reconstructed face image R can be expressed as

R = Σ_{i=1}^{M} α_i P_i ≈ I

where I is the originally input face image, P denotes the prototype faces, and α holds the coefficients of reconstruction. Thus, once the combination parameters (e.g., weightings) of the low-resolution face prototypes have been computed, the reconstructed high-resolution face image is obtained by multiplying the high-resolution face prototypes by the same combination parameters. As the example of the second figure shows, the originally input low-resolution face image I equals a linear combination of M low-resolution face prototypes with combination parameters α_1 to α_M; multiplying the M high-resolution face prototypes by the same parameters α_1 to α_M yields the reconstructed high-resolution face image. The combination parameters can be computed, for example, as:

α* = ((P_L)^T P_L)^{-1} (P_L)^T I_L

where P_L denotes the low-resolution prototype faces and I_L the low-resolution input face image.
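As a hedged illustration (NumPy; not the patent's own implementation), the normal-equation formula above can be computed directly, and when the input is an exact linear combination of the prototypes it recovers the combination parameters:

```python
import numpy as np

# Illustrative sizes: d-pixel low-resolution images, M prototype faces.
d, M = 16 * 16, 5
rng = np.random.default_rng(0)

P_L = rng.standard_normal((d, M))      # columns are low-resolution prototype faces
alpha_true = rng.standard_normal(M)    # coefficients used to synthesize a test input
I_L = P_L @ alpha_true                 # low-resolution input face (exact combination)

# alpha* = ((P_L)^T P_L)^{-1} (P_L)^T I_L, the formula from the text.
alpha = np.linalg.solve(P_L.T @ P_L, P_L.T @ I_L)

# The same coefficients then weight the high-resolution prototypes.
P_H = rng.standard_normal((64 * 64, M))
R_H = P_H @ alpha                      # reconstructed high-resolution face
```

Since I_L is built here as an exact linear combination, alpha matches alpha_true to numerical precision; for a real input the same formula yields the least-squares fit.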

The third figure is a schematic example of an example-based face super-resolution method consistent with certain disclosed embodiments. In the example of the third figure, a training database 310 is first prepared and a low-resolution face image 312 to be enlarged is obtained. Training database 310 collects training face images and generally contains comprehensive training images. Manifold learning 314, for example a dimensionality-reduction method yielding a transformation matrix A, then projects the N training images in training database 310 and the low-resolution face image 312 into the same manifold domain, where the projected low-resolution face image 312 is denoted y_L and the projected training images are denoted y_train. From the set of N projected training images y_train, a training set 316 matching y_L is selected. Basis decomposition 318 extracts a set of high-resolution prototype faces P_H of this training set and the prototype face P_L of y_L, until a set of basis images is learned in which the difference between the high-resolution prototype faces and the prototype face of y_L satisfies a threshold requirement. This set of basis images is then used to reconstruct a high-resolution face image 322 of the low-resolution face image 312, i.e., to perform example-based face super-resolution 320.

Introducing manifold learning into training database 310 allows a projection matrix to be learned from it; this projection matrix projects the face images in training database 310 into a manifold domain in which differences between face images are clearly expressed. Through the same projection matrix, the input low-resolution face image 312 is also projected into the same manifold domain. Assuming a projection matrix A is obtained through a manifold learning method, such as a dimensionality reduction algorithm, the projection of the low-resolution input image into the manifold domain can be written as y_L = A^T I_L. For a projection matrix A obtained by any manifold learning algorithm, the projected dimensionality is generally far smaller than that of the original data in training database 310; for example, a high-resolution face image has dimensionality 64 × 64 = 4096, while the projected dimensionality may be chosen as only 100.
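A minimal sketch of this projection step, using PCA as the dimensionality-reduction algorithm (one possible choice; the text allows any manifold learning method) and the dimensions from the example above:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d, m = 400, 64 * 64, 100            # N training images, d pixels, m-dim manifold domain

I_train = rng.standard_normal((N, d))  # training face images, one per row

# Learn a projection matrix A from the training database (PCA here):
# A's columns are the top-m principal directions, so m << d.
X = I_train - I_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
A = Vt[:m].T                           # d x m projection matrix

# Project training images and an input into the same manifold domain: y = A^T I.
y_train = I_train @ A                  # N projected training images
I_L = rng.standard_normal(d)           # a low-resolution input face (placeholder data)
y_L = A.T @ I_L                        # projected low-resolution input y_L
```

The projected representations have only m = 100 coordinates each, versus the 4096 pixels of the original images.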

From the set of N projected training images y_train, a method such as the k-NN algorithm can select a subset {y_train} matching y_L, for example the k face images y_train most similar to y_L among the N projected training images. A set of basis images is then derived from these k most similar face images y_train.
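The k-NN screening step can be sketched as follows (Euclidean distance in the manifold domain is an assumption; the text does not fix the similarity measure):

```python
import numpy as np

rng = np.random.default_rng(2)
N, m, k = 400, 100, 30
y_train = rng.standard_normal((N, m))  # N projected training images
y_L = rng.standard_normal(m)           # projected low-resolution input

# k-NN: keep the k projected training faces closest to y_L.
dists = np.linalg.norm(y_train - y_L, axis=1)
nearest = np.argsort(dists)[:k]        # indices of the k best-matching faces
training_set = y_train[nearest]        # the selected training set {y_train}
```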

The fourth figure is an example flowchart that further illustrates how the training set is selected in the manifold domain, consistent with certain disclosed embodiments. In the example of the fourth figure, a training set of the k face images y_train best matching y_L is selected from the N projected training images, as shown in step 410, with k ≦ N. Basis decomposition, for example principal component analysis used as a basis decomposition function, is then applied to this training set to extract a low-resolution prototype face P_L and a set of high-resolution prototype faces of the training set, as shown in step 420. When the difference between the low-resolution prototype face P_L and the set of high-resolution prototype faces is smaller than a threshold, the low-resolution prototype face P_L and the set of high-resolution prototype faces are taken as a set of basis images, as shown in step 440. Otherwise, k is incremented in step 430, and steps 410 through 420 are repeated until a set of high-resolution prototype faces satisfying the threshold requirement is selected.
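The loop of steps 410 through 440 can be sketched as follows. The function name and threshold are illustrative; PCA serves as the basis-decomposition function, and the reconstruction error of the input in the extracted low-resolution prototype basis stands in for the unspecified difference measure:

```python
import numpy as np

def select_basis(y_L, y_train, I_L, train_low, k0=5, step=5, tau=1e-6):
    """Sketch of steps 410-440: grow k until the extracted low-resolution
    prototypes reconstruct the input within threshold tau."""
    order = np.argsort(np.linalg.norm(y_train - y_L, axis=1))
    k = k0
    while k <= len(order):
        idx = order[:k]                          # step 410: k best matches
        X = train_low[idx]
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        P_L = Vt.T                               # step 420: prototypes via PCA
        coeff = P_L.T @ (I_L - mean)
        err = np.linalg.norm(I_L - (mean + P_L @ coeff))
        if err < tau:                            # step 440: threshold met
            return idx, P_L
        k += step                                # step 430: increment k
    return order, P_L                            # fall back to the full set

rng = np.random.default_rng(3)
N, d, m = 50, 64, 10
train_low = rng.standard_normal((N, d))          # low-resolution training faces
A = rng.standard_normal((d, m))                  # placeholder projection matrix
y_train = train_low @ A                          # projected training faces
I_L = train_low[0]                               # input that is itself a training face
idx, P_L = select_basis(I_L @ A, y_train, I_L, train_low)
```

Because the test input lies in the span of its nearest training faces, the loop terminates at the first k; a dissimilar input would drive k upward.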

This set of basis images varies with the input low-resolution image and serves as the basis for reconstructing low-resolution face image 312. Because the basis images are retrained in this way before the input low-resolution face image is synthesized, the method avoids reconstructing a dissimilar face when the user's input face image differs greatly from the face images in the training database, and likewise avoids artifacts in the reconstructed high-resolution image caused by excessive face-image differences.

In other words, once a suitable set of training face images has been selected from the face-image database in the manifold domain, a set of prototype faces satisfying the threshold requirement can be extracted by basis decomposition; this set of prototype faces then serves as the basis images for the subsequent high-resolution reconstruction. In step 410, a cost function can be used to determine the value of k: the k determined by this cost function minimizes the difference between a linear combination of the training set's prototype faces and low-resolution face image 312.

As the curve example of the fifth figure shows, when k equals 330, for a tested low-resolution face image, a training set of the 330 face images y_train best matching y_L can be selected from the N (e.g., N = 400) projected training images, and the difference between a linear combination of this training set's prototype faces and the tested low-resolution face image is minimal. In the fifth figure, the horizontal axis represents the number of selected face images y_train, i.e., the value of k, and the vertical axis represents the above difference value.

After a low-resolution image is obtained, preprocessing such as alignment and brightness averaging may first be performed; the set of projected training images is then used to select face images matching the low-resolution image. Through the combination-parameter computation described above, and by combining the computed parameters with the extracted set of basis images, a high-resolution version of the face image close to the originally input low-resolution face image is reconstructed. This is the example-based super-resolution design principle of the present disclosure. This design principle also ensures that low-resolution images projected onto the manifold domain have a distribution similar to that of high-resolution images. The examples of the sixth A and sixth B figures are, respectively, the distribution of low-resolution images and the distribution of high-resolution images projected onto the manifold domain, where only the first two projected dimensions are selected for observation. The sixth A and sixth B figures can be seen to have similar distributions; that is, after the training images in training database 310 are projected into the manifold domain, the low-resolution images are distributed similarly to the high-resolution images.

Following the above, the seventh figure is a schematic example of an example-based face super-resolution system consistent with certain disclosed embodiments. In the example of the seventh figure, face super-resolution system 700 can comprise training database 310, a projection module 720, a matching module 730, a basis decomposition module 740, and a face super-resolution module 750.

Training database 310 collects and stores a plurality of training images 712. Projection module 720 receives the training images 712 from training database 310, derives a projection matrix A through manifold learning, such as a dimensionality reduction algorithm, and projects the training images 712 and the input low-resolution face image 312 into the same manifold domain, yielding the projected training images y_train and the projected low-resolution face image y_L. Matching module 730 selects, from the N projected training images y_train, a training set 732 of k training images matching y_L, k ≦ N, for example by the k-NN algorithm. Basis decomposition module 740 uses basis decomposition to extract a low-resolution prototype face P_L of low-resolution face image 312 and a set of high-resolution prototype faces P_H of the training set.

When the difference between prototype face P_L and the set of high-resolution prototype faces does not yet satisfy a threshold requirement, basis decomposition module 740 increments k, matching module 730 selects another training set from the N projected training images y_train according to the incremented k, and basis decomposition module 740 extracts a set of high-resolution prototype faces of this new training set, until the difference between prototype face P_L and the final set of high-resolution prototype faces satisfies the threshold requirement. Face super-resolution module 750 takes prototype face P_L and the final, threshold-satisfying set of high-resolution prototype faces as a set of basis images to reconstruct a high-resolution face image 322 of low-resolution face image 312, for example by combining appropriate weights with the set of basis images to obtain a high-resolution face image.
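The four modules can be sketched end to end as one pipeline. PCA projection, Euclidean k-NN, and a least-squares solve are illustrative stand-ins for the choices the text leaves open, and the class and method names are hypothetical:

```python
import numpy as np

class FaceHallucinationSystem:
    """Sketch of system 700: projection, matching, basis decomposition,
    and hallucination modules, under the assumptions named above."""

    def __init__(self, train_low, train_high, m=8):
        self.train_low = train_low              # N x d_low training faces
        self.train_high = train_high            # N x d_high counterparts
        # Projection module 720: learn projection matrix A (PCA).
        X = train_low - train_low.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        self.A = Vt[:m].T
        self.y_train = train_low @ self.A       # projected training images

    def hallucinate(self, I_L, k=5):
        # Matching module 730: k nearest projected training faces.
        y_L = I_L @ self.A
        idx = np.argsort(np.linalg.norm(self.y_train - y_L, axis=1))[:k]
        # Basis decomposition module 740: prototype faces from the matched
        # set (here simply the matched faces themselves, as columns).
        P_L = self.train_low[idx].T
        P_H = self.train_high[idx].T
        # Hallucination module 750: coefficients from the low-resolution
        # prototypes, reused on the high-resolution prototypes.
        alpha, *_ = np.linalg.lstsq(P_L, I_L, rcond=None)
        return P_H @ alpha

rng = np.random.default_rng(4)
N, d_high = 30, 64
train_high = rng.standard_normal((N, d_high))   # high-resolution training faces
D = rng.standard_normal((d_high, 16))           # hypothetical downsampling operator
train_low = train_high @ D                      # corresponding low-resolution faces
system = FaceHallucinationSystem(train_low, train_high)
R_H = system.hallucinate(train_low[0])          # input equal to a training face
```

With an input equal to one of the training faces, the pipeline returns that face's high-resolution counterpart exactly; a real input yields the closest least-squares blend of the matched prototypes.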

As the example of the eighth figure shows, face super-resolution system 700 can execute in a computer system 800. Computer system 800 comprises at least a memory device 810 and a processor 820. Memory device 810 can implement training database 310. Processor 820 is provided with projection module 720, matching module 730, basis decomposition module 740, and face super-resolution module 750. Processor 820 can receive the input low-resolution face image 312 and read the training images from memory device 810 to execute the functions of projection module 720, matching module 730, basis decomposition module 740, and face super-resolution module 750, producing, through appropriate parameter combinations, a high-resolution face image 322 of low-resolution face image 312.

Using an embodiment of the present disclosure, an experimental example of face hallucination enhancement is provided. In this experiment, the training set contains 483 face images; the high resolution of the face images and the low resolution of the test images (test images 1–4) are 64 × 64 and 16 × 16, respectively. Using principal component analysis as the basis decomposition function, 100 prototype faces are extracted, where the k value of the k-NN algorithm can be incremented between 100 and 483 and is determined automatically. The example table of the ninth figure provides a comparison of objective reconstruction-quality data. The objective data of the ninth figure clearly show that, compared with three conventional techniques, the disclosed embodiments effectively improve the reconstruction quality of face images.

The tenth figure is an example table providing a detection rate comparison of five different techniques, consistent with certain disclosed embodiments. Face recognition is performed with the Multilinear PCA (MPCA) technique; the training stage has 62 training images covering 14 subjects, and the recognition stage has 12 images covering 6 subjects. In the example of the tenth figure, the detection rates of the techniques are compared under two conditions: additive Gaussian noise only, and additive Gaussian noise removed with averaging filtering. The data of the tenth figure clearly show that the disclosed embodiment achieves a higher detection rate than the four conventional techniques. The embodiment can select a useful training set from the training database, so that when the input image differs greatly from the images in the database, it neither reconstructs a high-resolution version unlike the original face nor severely changes the distribution of local face components in the manifold domain.

In summary, the embodiments of the present disclosure provide an example-based face super-resolution method and system. The embodiments use manifold learning and iteratively improve the basis images used in reconstruction by selecting a training set that better matches the input low-resolution face image. The reconstruction quality of a low-resolution face image can therefore be effectively enhanced both objectively and subjectively. Compared with conventional techniques, the disclosed embodiments can reconstruct a high-resolution version closer to the original face image while avoiding artifacts in the reconstructed high-resolution image caused by excessive face-image differences.

The above are merely embodiments of the present invention and do not limit the scope of its implementation. All equivalent changes and modifications made within the scope of the claims of the present invention shall remain within the scope covered by this patent.

310 ... training database

312 ... low-resolution face image

y_L ... projected low-resolution face image

y_train ... projected training images

314 ... manifold learning

316 ... matching training set

318 ... basis decomposition

320 ... face super-resolution

322 ... high-resolution face image

410 ... select, from the N projected training images, a new training set of the k face images y_train best matching y_L

420 ... use basis decomposition on this new training set to extract a low-resolution prototype face P_L and a set of high-resolution prototype faces

430 ... increment k

440 ... take the low-resolution prototype face P_L and the set of high-resolution prototype faces as a set of basis images

700 ... face super-resolution system

712 ... training images

720 ... projection module

732 ... training set of k training images

730 ... matching module

740 ... basis decomposition module

750 ... face super-resolution module

P_L ... low-resolution prototype face

y_L ... projected low-resolution face image

P_H ... high-resolution prototype face

y_train ... projected training images

800 ... computer system

810 ... memory device

820 ... processor

The first figure is an example flowchart illustrating a conventional example-based face hallucination method.

The second figure is an example schematic diagram illustrating how the reconstructed high-resolution face image is obtained from the computed combination parameters of the low-resolution face prototypes.

The third figure is an example schematic diagram illustrating an example-based face hallucination method, consistent with certain disclosed embodiments.

The fourth figure is an example flowchart illustrating how the training set is filtered in the manifold domain, consistent with certain disclosed embodiments.

The fifth figure is an example graph illustrating how the difference value varies with the number of selected face images y_train, that is, with the value of k, consistent with certain disclosed embodiments.

Figures 6A and 6B are example schematic diagrams of, respectively, the distribution of the low-resolution images and the distribution of the high-resolution images projected onto the manifold domain, consistent with certain disclosed embodiments.

The seventh figure is an example schematic diagram illustrating an example-based face hallucination system, consistent with certain disclosed embodiments.

The eighth figure is an example schematic diagram illustrating that the face hallucination system of the seventh figure can be executed in a computer system, consistent with certain disclosed embodiments.

The ninth figure is an example table providing a comparison of objective measurements of face image reconstruction quality, consistent with certain disclosed embodiments.

The tenth figure is an example table providing a comparison of the detection rates of five different techniques, consistent with certain disclosed embodiments.
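The relationship plotted in the fifth figure — difference value as a function of k — suggests how the value of k can be chosen automatically, as in claims 7 and 11: evaluate a cost for each candidate k and keep the minimizer. A hedged sketch with toy data; using the projected neighbors directly in place of prototype faces, and Euclidean distance as the similarity measure, are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy projected data; the sizes are illustrative assumptions.
N, d = 30, 16
y_train = rng.normal(size=(N, d))      # projected training images
y_L = rng.normal(size=d)               # projected input face

# Training images ordered by similarity to y_L in the manifold domain.
order = np.argsort(np.linalg.norm(y_train - y_L, axis=1))

def difference(k):
    """Cost: residual of the best linear combination of the k nearest faces."""
    P = y_train[order[:k]].T           # d x k matrix of selected faces
    c, *_ = np.linalg.lstsq(P, y_L, rcond=None)
    return np.linalg.norm(P @ c - y_L)

# Sweep k, as in the fifth figure's curve, and pick the minimizing value.
costs = {k: difference(k) for k in range(1, N + 1)}
best_k = min(costs, key=costs.get)
```

Because the candidate sets are nested, the cost is non-increasing in k; in practice the curve flattens once the training set already spans the input well, which is where the iteration can stop.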


Claims (13)

1. An example-based face hallucination method, comprising: preparing a training database having a plurality of training images, and obtaining a low-resolution face image to be enlarged; using a manifold learning technique to determine a projection matrix for projecting the plurality of training images of the training database onto a manifold domain, so that the projected training images are clearly distinguishable; using the projection matrix to project the plurality of training images of the training database and the low-resolution face image onto the manifold domain, wherein y_L denotes the projected image of the low-resolution face image and y_train denotes the projected training images of the training database; selecting, from N projected training images y_train, a training set that matches y_L, N ≤ the number of the plurality of training images; performing basis decomposition on the training set and y_L to learn a set of basis images, including a set of high-resolution prototype faces of the training set and a low-resolution prototype face of y_L; and reconstructing a high-resolution face image of the low-resolution face image by using the set of basis images.

2. The face hallucination method as claimed in claim 1, wherein the training set consists of the k face images found among the N projected training images y_train that are most similar to the projected image y_L, k ≤ N.

3. The face hallucination method as claimed in claim 1, wherein the manifold learning technique uses a dimensionality-reduction algorithm to obtain the projection matrix.

4. The face hallucination method as claimed in claim 1, wherein the difference between the set of high-resolution prototype faces of the training set and the low-resolution prototype face of y_L meets a threshold requirement.

5. The face hallucination method as claimed in claim 1, wherein the basis decomposition uses principal component analysis as a basis decomposition function to extract the set of basis images.

6. The face hallucination method as claimed in claim 1, wherein selecting, from the N projected training images y_train, the training set that matches y_L, and learning the set of basis images by performing basis decomposition on the training set and y_L, further comprise: selecting, from the N projected training images y_train, the k face images y_train that best match the projected image y_L of the low-resolution face image, k being a positive integer, k ≤ N; using basis decomposition on the k face images y_train to extract the low-resolution prototype face of y_L and the set of high-resolution prototype faces; and, when the difference between the low-resolution prototype face and the set of high-resolution prototype faces meets a threshold requirement, taking the low-resolution prototype face of y_L and the set of high-resolution prototype faces as the set of basis images; otherwise, incrementing the value of k and repeating the above two steps until the set of basis images is obtained.

7. The face hallucination method as claimed in claim 6, wherein a cost function is used to determine the value of k, the value of k determined by the cost function minimizing the difference between a linear combination of a set of prototype faces of the training set and the low-resolution face image.

8. An example-based face hallucination system, comprising: a training database that collects and stores a plurality of training images; a projection module that receives the plurality of training images and, after obtaining a projection matrix through a manifold learning technique, projects the plurality of training images and an input low-resolution face image onto the same manifold domain to obtain N projected training images y_train and a projected low-resolution face image y_L, the projection matrix being determined by the manifold learning technique so that the N projected training images y_train are clearly distinguishable; a matching module that selects, from the N projected training images y_train, a training set of the k face images that best match y_L, k ≤ N; a basis decomposition module that uses basis decomposition on the training set and y_L to extract a low-resolution prototype face of y_L and a set of high-resolution prototype faces of the training set, such that the difference between the low-resolution prototype face of y_L and the set of high-resolution prototype faces meets a threshold requirement; and a face super-resolution module that takes the low-resolution prototype face and the set of high-resolution prototype faces meeting the threshold requirement as a set of basis images, to reconstruct a high-resolution face image of the low-resolution face image.

9. The face hallucination system as claimed in claim 8, wherein, when the difference between the low-resolution prototype face of y_L and the set of high-resolution prototype faces does not yet meet the threshold requirement, the basis decomposition module increments the value of k, the matching module re-selects another training set from the N projected training images y_train according to the incremented value of k, and another set of high-resolution prototype faces is then extracted through the basis decomposition module, until the difference between the low-resolution prototype face of y_L and a final set of high-resolution prototype faces meets the threshold requirement.

10. The face hallucination system as claimed in claim 8, wherein the face hallucination system is executed in a computer system, the computer system comprising at least: a memory device for implementing the training database; and a processor provided with the projection module, the matching module, the basis decomposition module, and the face super-resolution module, the processor receiving the input low-resolution face image, executing the functions of the projection module, the matching module, the basis decomposition module, and the face super-resolution module, and generating the high-resolution face image of the low-resolution face image by combining a plurality of parameters.

11. The face hallucination system as claimed in claim 8, wherein the value of k is determined automatically.

12. The face hallucination system as claimed in claim 8, wherein, after the plurality of training images are projected onto the manifold domain, the distribution of the low-resolution images is similar to the distribution of the high-resolution images.

13. The face hallucination system as claimed in claim 8, wherein the face super-resolution module obtains the high-resolution face image by combining a combination of a plurality of appropriate parameters with the set of basis images.
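Claim 5 recites principal component analysis as the basis decomposition function for extracting the basis images. A minimal sketch of that single step under toy assumptions (the data, the number of faces, and the number of retained components are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy training set of k face vectors.
k, d = 10, 36
faces = rng.normal(size=(k, d))

def pca_prototypes(X, n):
    """Basis decomposition by PCA: return the mean face and the first n
    principal components, which serve as the prototype faces."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n]                # n orthonormal basis images

mean_face, prototypes = pca_prototypes(faces, n=5)

# A face is approximated by the mean face plus a weighted sum of prototypes;
# with fewer than k-1 components the reconstruction is only approximate.
coeffs = (faces[0] - mean_face) @ prototypes.T
approx = mean_face + coeffs @ prototypes
```

The same decomposition applied to low- and high-resolution training faces yields the paired prototype faces that the reconstruction step combines.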
TW099119334A 2010-06-14 2010-06-14 Method and system for example-based face hallucination TWI419059B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
TW099119334A TWI419059B (en) 2010-06-14 2010-06-14 Method and system for example-based face hallucination
US12/858,442 US8488913B2 (en) 2010-06-14 2010-08-18 Method and system for example-based face hallucination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW099119334A TWI419059B (en) 2010-06-14 2010-06-14 Method and system for example-based face hallucination

Publications (2)

Publication Number Publication Date
TW201145181A TW201145181A (en) 2011-12-16
TWI419059B true TWI419059B (en) 2013-12-11

Family

ID=45096270

Family Applications (1)

Application Number Title Priority Date Filing Date
TW099119334A TWI419059B (en) 2010-06-14 2010-06-14 Method and system for example-based face hallucination

Country Status (2)

Country Link
US (1) US8488913B2 (en)
TW (1) TWI419059B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI771250B (en) * 2021-12-16 2022-07-11 國立陽明交通大學 Device and method for reducing data dimension, and operating method of device for converting data dimension

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8655027B1 (en) * 2011-03-25 2014-02-18 The United States of America, as represented by the Director, National Security Agency Method of image-based user authentication
US8743119B2 (en) * 2011-05-24 2014-06-03 Seiko Epson Corporation Model-based face image super-resolution
US8917910B2 (en) * 2012-01-16 2014-12-23 Xerox Corporation Image segmentation based on approximation of segmentation similarity
US8938118B1 (en) 2012-12-12 2015-01-20 Rajiv Jain Method of neighbor embedding for OCR enhancement
US8837861B2 (en) 2012-12-13 2014-09-16 Microsoft Corporation Bayesian approach to alignment-based image hallucination
CN103020940B (en) * 2012-12-26 2015-07-15 武汉大学 Local feature transformation based face super-resolution reconstruction method
CN103042436B (en) * 2013-01-21 2014-12-24 北京信息科技大学 Spindle turning error source tracing method based on shaft center orbit manifold learning
CN103049897B (en) * 2013-01-24 2015-11-18 武汉大学 A kind of block territory face super-resolution reconstruction method based on adaptive training storehouse
NL2013417A (en) 2013-10-02 2015-04-07 Asml Netherlands Bv Methods & apparatus for obtaining diagnostic information relating to an industrial process.
CN103489174B (en) * 2013-10-08 2016-06-29 武汉大学 A kind of face super-resolution method kept based on residual error
TW201531104A (en) * 2014-01-24 2015-08-01 Sintai Optical Shenzhen Co Ltd Electronic device
WO2015195827A1 (en) * 2014-06-17 2015-12-23 Carnegie Mellon University Methods and software for hallucinating facial features by prioritizing reconstruction errors
CN104091320B (en) * 2014-07-16 2017-03-29 武汉大学 Based on the noise face super-resolution reconstruction method that data-driven local feature is changed
CN104400560B (en) * 2014-11-07 2016-11-23 西安交通大学 A kind of numerical control machine tool cutting operating mode lower main axis orbit of shaft center On-line Measuring Method
JP5937661B2 (en) * 2014-11-13 2016-06-22 みずほ情報総研株式会社 Information prediction system, information prediction method, and information prediction program
CN104933692B (en) * 2015-07-02 2019-03-08 中国地质大学(武汉) A kind of method for reconstructing and device of human face super-resolution
WO2017177363A1 (en) * 2016-04-11 2017-10-19 Sensetime Group Limited Methods and apparatuses for face hallucination
US10297059B2 (en) 2016-12-21 2019-05-21 Motorola Solutions, Inc. Method and image processor for sending a combined image to human versus machine consumers
CN107392865B (en) * 2017-07-01 2020-08-07 广州深域信息科技有限公司 Restoration method of face image
WO2019147693A1 (en) * 2018-01-23 2019-08-01 Insurance Services Office, Inc. Computer vision systems and methods for machine learning using image hallucinations
TWI772627B (en) 2019-03-19 2022-08-01 財團法人工業技術研究院 Person re-identification method, person re-identification system and image screening method
CN110650295B (en) * 2019-11-26 2020-03-06 展讯通信(上海)有限公司 Image processing method and device
CN112990123B (en) * 2021-04-26 2021-08-13 北京世纪好未来教育科技有限公司 Image processing method, apparatus, computer device and medium
CN114004742A (en) * 2021-09-30 2022-02-01 浙江大华技术股份有限公司 Image reconstruction method, training method, detection method, device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7379611B2 (en) * 2004-04-01 2008-05-27 Microsoft Corporation Generic image hallucination
TW201013545A (en) * 2008-09-24 2010-04-01 Univ Nat Cheng Kung Robust curvature estimation method through line integrals

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW420939B (en) 1999-06-29 2001-02-01 Ind Tech Res Inst Human face detection method
JP3798637B2 (en) 2001-02-21 2006-07-19 インターナショナル・ビジネス・マシーンズ・コーポレーション Touch panel type entry medium device, control method thereof, and program
US8335403B2 (en) 2006-11-27 2012-12-18 Nec Laboratories America, Inc. Soft edge smoothness prior and application on alpha channel super resolution
CN101216889A (en) 2008-01-14 2008-07-09 浙江大学 A face image super-resolution method with the amalgamation of global characteristics and local details information
CN101477684B (en) 2008-12-11 2010-11-10 西安交通大学 Process for reconstructing human face image super-resolution by position image block
CN101615290B (en) 2009-07-29 2012-09-05 西安交通大学 Face image super-resolution reconstructing method based on canonical correlation analysis


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
J.-S. Park and S.-W. Lee "An example-based face hallucination method for single-frame, low-resolution facial images", IEEE Trans. Image Process., vol. 17, no. 10, pp.1806 -1816 2008 *


Also Published As

Publication number Publication date
TW201145181A (en) 2011-12-16
US20110305404A1 (en) 2011-12-15
US8488913B2 (en) 2013-07-16

Similar Documents

Publication Publication Date Title
TWI419059B (en) Method and system for example-based face hallucination
Chen et al. Denoising hyperspectral image with non-iid noise structure
Gao et al. Face sketch–photo synthesis and retrieval using sparse representation
Vageeswaran et al. Blur and illumination robust face recognition via set-theoretic characterization
Xie et al. Normalization of face illumination based on large-and small-scale features
WO2017080196A1 (en) Video classification method and device based on human face image
CN110069978B (en) Discriminating non-convex low-rank decomposition and superposition linear sparse representation face recognition method
Singh et al. Identity aware synthesis for cross resolution face recognition
Bao et al. General subspace learning with corrupted training data via graph embedding
Dong et al. Low-rank laplacian-uniform mixed model for robust face recognition
Hao et al. Face super-resolution reconstruction and recognition using non-local similarity dictionary learning based algorithm
kumar Shukla et al. A novel method for identification and performance improvement of Blurred and Noisy Images using modified facial deblur inference (FADEIN) algorithms
Mitra Gaussian mixture models for human face recognition under illumination variations
Chen et al. A novel face super resolution approach for noisy images using contour feature and standard deviation prior
Mundra et al. Exposing gan-generated profile photos from compact embeddings
Tariang et al. Synthetic Image Verification in the Era of Generative Artificial Intelligence: What Works and What Isn’t There yet
Mi Face image recognition via collaborative representation on selected training samples
Lu et al. Cross-resolution Face Recognition via Identity-Preserving Network and Knowledge Distillation
Jiang et al. Robust projective dictionary learning by joint label embedding and classification
Lan et al. Face hallucination with shape parameters projection constraint
Maureira et al. Synthetic periocular iris pai from a small set of near-infrared-images
Xie et al. Restoration of a frontal illuminated face image based on KPCA
CN105740885A (en) Classification method based on multi-kernel authentication linear representation
Ye Feature learning and active learning for image quality assessment
Zheng et al. Heterogeneous iris recognition using heterogeneous eigeniris and sparse representation