TWI419058B - Image recognition model and the image recognition method using the image recognition model - Google Patents


Info

Publication number: TWI419058B
Authority: TW (Taiwan)
Prior art keywords: group, image recognition, matrix, recognition model, establishing
Application number: TW98135980A
Other languages: Chinese (zh)
Other versions: TW201115482A (en)
Original assignee: Univ Nat Chiao Tung
Application filed by Univ Nat Chiao Tung; priority to TW98135980A
Published as TW201115482A; application granted and published as TWI419058B

Landscapes

  • Image Analysis (AREA)

Description

Method for establishing an image recognition model and image recognition method using the image recognition model

The present invention relates to image recognition, and in particular to the establishment of an image recognition model and its application.

Face recognition is a biometric identification technology whose techniques can use statistical methods to construct a facial feature model against which a face image under test is compared. The overall face recognition technology is divided into two stages, a training stage and a testing stage. In the training stage, a facial feature model is used to distinguish and characterize different face images in the feature space of the model; the facial feature model used in the training stage therefore directly affects the accuracy of face recognition.

Considerable prior-art effort has gone into finding the ideal face recognition model. He et al., in the article "Face recognition using Laplacianfaces" (IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 27, pp. 328-340, 2005), proposed a face recognition system based on a Laplacian matrix built with a feature-space method to preserve the local structure of the training samples: the locality of the manifold structure in the face feature space is preserved in a nearest-neighbor graph through locality preserving projection (LPP). However, as shown in Fig. 1, which plots the feature-space distribution of training samples transformed by the method of He et al., this prior art is still susceptible to external interference factors such as expression, lighting, and viewing angle, which lead to poor separation of face images. Nevertheless, the local manifold structure produced by the method of He et al. is more effective than other prior-art techniques based on a Euclidean structure, such as principal component analysis (PCA) or linear discriminant analysis (LDA).

In the prior art, many algorithms use class labels in a discriminant-analysis procedure, applying the Fisher criterion to optimize within-class compactness and between-class separability; the modified within-class and between-class scatters are computed on the basis of locality preserving projection so as to retain local structure, and similarity matrices describing the neighborhood relations between samples are built with methods such as orthogonal neighborhood preserving discriminant analysis (ONPDA) or marginal Fisher analysis (MFA). Three representative papers are cited here. 1. Cai et al., "Orthogonal Laplacianfaces for Face Recognition" (same journal, 2006), continues the spirit of the Laplacian matrix; its key difference, and the most important point, is that Cai converts the feature space into an orthogonal base. 2. Yan et al., "Graph embedding and extensions: A general framework for dimensionality reduction" (same journal, 2007), also extends the Laplacian approach; its difference is that the concept of class information is added, and when computing information for different classes only the nearest neighbors, rather than all points, are considered. 3. Yan et al. propose a general graph-embedding framework that uses two graphs, an intrinsic graph and a penalty graph, to describe the adjacency of points within a class and of border points between classes. All of these approximation methods can therefore preserve the local within-class structure and the neighboring between-class structure.

In the other stage of face recognition, the testing stage, the image under test is transformed into the constructed feature transformation space and compared with the training samples that built that space. Related prior-art methods focus on distance measurement and the design of matching algorithms. Among them, the nearest feature line (NFL) method was first proposed by S. Z. Li and J. Lu in IEEE Transactions on Pattern Analysis and Machine Intelligence; their two papers, "Face recognition using the nearest feature line method" and "Performance evaluation of the nearest feature line method in image classification and retrieval," explain that a feature line is generated from two feature points and can linearly interpolate or extrapolate each pair of feature points in the same class. Each class thus has an infinite number of pseudo prototypes generated by linear interpolation, and classification is completed by the shortest distance between the input image and a feature line, the feature line linearly approximating the variation between two face images.

Furthermore, Chien et al., in "Discriminant waveletfaces and nearest feature classifiers for face recognition" (IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 1644-1649, 2002), extended the technique of the NFL-based classifier to perform the test-stage comparison of face images in the manner of nearest feature spaces.

Compared with the traditional nearest-neighbor-based classifier (NN-based classifier), the nearest feature line classifier performs far better than traditional point-to-point matching. However, because the nearest feature line classifier must select the sample pairs that form the lines during comparison, a large sample size requires a large amount of matching time, which is its drawback.

In addition, face recognition technology can design classifiers around characteristics of the image itself; such features are usually simple, for example color information. However, features of this kind are strongly affected by noise such as lighting changes, so they have no advantage in classification ability and are accordingly limited in practice.

In view of the above, the present invention proposes a method for establishing an image recognition model and an image recognition method using that model, in which the approach of the nearest feature line classifier is merged directly into the Laplacian matrix of the preceding training stage, so as to improve recognition accuracy and speed up recognition.

The main object of the present invention is to provide a method for establishing an image recognition model and an image recognition method using that model, in which the concept of nearest feature space embedding is merged into the Laplacian matrix to build a new feature-space transformation matrix. This not only preserves the topology of the training samples in the original space but also preserves the linear variation between training samples, and allows face image recognition in the testing stage by a fast comparison method.

A second object of the present invention is to provide a method for establishing an image recognition model and an image recognition method using that model, in which the mathematical model established by the present invention is not limited to construction by the nearest feature line method; it is also compatible with classifiers for various subspaces, such as nearest feature point, nearest feature plane, and nearest high-dimensional feature space classifiers, so that more discriminative mathematical models can be built and the accuracy of face recognition improved.

A further object of the present invention is to provide a method for establishing an image recognition model and an image recognition method using that model, in which a nearest-feature-point classifier can be selected in the training stage for face recognition, so as to improve recognition efficiency.

To achieve the above objects, in the method for establishing an image recognition model and the image recognition method using that model disclosed by the present invention, first a plurality of training samples of different classes are collected and the nearest point-to-feature-subspace distance to be used for the spatial transformation is decided. Next, the principal transformation matrix of the training samples is computed by principal component analysis, and the training samples are projected into the reduced-dimension space. Then, using the decided point-to-feature-subspace nearest-distance form, the within-class and between-class projection points of all samples are computed, together with the within-class and between-class vectors formed by those projection points, and these vectors are arranged in order. Next, the within-class scatter matrix and the between-class scatter matrix are computed from the within-class and between-class vectors; a variation transformation matrix is obtained from the within-class and between-class scatter matrices; and finally the feature-space transformation matrix is obtained as the product of the principal transformation matrix and the variation transformation matrix.

Therefore, the comparison between a test image and the training samples can be carried out in the feature space of the feature-space transformation matrix to identify the class to which the test image belongs.
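As a rough illustration only, the training flow summarized above (PCA, scatter matrices built from difference vectors, Fisher-style eigen-solve, final product of the two transforms) might be sketched in Python as follows. All function and parameter names here are our own, and the simple point-to-point (p=1) case stands in for the general point-to-subspace projections described later:

```python
import numpy as np

def build_recognition_model(X, labels, n_pca=10, n_out=4):
    """Illustrative sketch of the training flow: PCA, within/between
    scatter matrices, Fisher-style eigen-solve, w = w_PCA @ w_star.
    Uses the p=1 (point-to-point) case for brevity."""
    mu = X.mean(axis=0)
    Xc = X - mu

    # Principal transformation matrix w_PCA via SVD (cf. step S14).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    w_pca = Vt[:n_pca].T                      # d x n_pca
    Y = Xc @ w_pca                            # samples in reduced space

    # Within-class and between-class scatter matrices from the
    # difference vectors of every sample pair (cf. steps S16-S18).
    Sw = np.zeros((n_pca, n_pca))
    Sb = np.zeros((n_pca, n_pca))
    n = len(Y)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = (Y[i] - Y[j])[:, None]
            if labels[i] == labels[j]:
                Sw += d @ d.T                 # within-class scatter S_W
            else:
                Sb += d @ d.T                 # between-class scatter S_B

    # Fisher-style criterion (cf. step S20): eigenvectors of
    # pinv(S_W) S_B with the largest eigenvalues form w*.
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    w_star = evecs[:, order[:n_out]].real     # n_pca x n_out

    # Feature-space transformation matrix w = w_PCA w* (cf. step S22).
    return mu, w_pca @ w_star
```

A test image would then be projected with the returned mean and transform and compared against the projected training samples.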

The objects, technical content, features, and effects of the present invention will be more readily understood from the following detailed description of specific embodiments taken together with the accompanying drawings.

Common face recognition methods currently emphasize three categories: feature-space transformation, image processing, and algorithms. These methods, however, often cannot preserve the characteristics that the image had in the original space after dimensionality reduction, or are easily disturbed by external factors such as lighting changes, which greatly reduces classification efficiency. The present invention therefore first preprocesses the images with principal component analysis in the training stage and then establishes a feature-space transformation matrix by nearest feature space embedding; that is, the nearest-feature-space characteristics are merged into the Laplacian matrix, improving on feature-space transformation methods. The techniques of feature-space transformation and the nearest-feature-space classifier are thus integrated, and the feature-space transformation matrix is determined in a point-to-line, point-to-plane, or point-to-high-dimensional-space manner.

Referring to Fig. 2, as shown in step S10, a plurality of face images from a plurality of different classes are first collected as training samples for establishing the image recognition model; each training sample is normalized to a window image of the same size, and each face image is aligned. At the same time, the value p is decided; p represents the form of the feature subspace, e.g. p=1 means the feature subspace is a point, p=2 means it is a line, and so on. According to p, the strategy for the feature transformation space is the shortest point-to-point, point-to-line, point-to-plane, or point-to-high-dimensional-space distance, the feature subspace representing a point, line, plane, or high-dimensional space respectively.

Next, as shown in step S12, the two values K1 and K2 are decided; they represent, for each training sample, the numbers of within-class vectors and between-class vectors to be selected in the Fisher's criterion computation. Step S14 follows: the principal transformation matrix wPCA of the collected training samples is built by principal component analysis, and the training samples are projected into the reduced-dimension space. The projection points of each sample point within its own class and between classes can then be computed: as shown in step S16, according to the value p of step S10, the key vectors for point-to-point, point-to-line, point-to-plane, or point-to-high-dimensional-space projection are obtained, where the key vectors comprise the within-class vectors formed by within-class projection points and the between-class vectors formed by between-class projection points, and the within-class and between-class vectors are each sorted in order of increasing distance.

As shown in step S18 of Fig. 2, from the sorted within-class vectors, the first K1 within-class vectors are selected (the value K1 having been decided in step S12), and the two corresponding weight matrices W and M are set, so as to compute the within-class scatter matrix SW, as shown in Fig. 3(a). Likewise, from the between-class vectors, the first K2 between-class vectors are selected and the two corresponding weight matrices W and M are set, so as to compute the between-class scatter matrix SB, as shown in Fig. 3(b).

Proceeding to the next step S20, the variation transformation matrix w* is obtained from the within-class scatter matrix SW and the between-class scatter matrix SB by means of the Fisher's criterion transformation, the relation being w* = arg max_w (w^T SB w)/(w^T SW w); that is, the first several eigenvectors of SW^{-1} SB with the largest eigenvalues are selected to form the variation transformation matrix. Finally, taking the inner product of the principal transformation matrix wPCA with the variation transformation matrix w* yields the feature-space transformation matrix w, as shown in step S22, the relation being w = wPCA w*.

Alternatively, step S18 may screen the within-class and between-class vectors in another way: a first feature threshold is set, the within-class vectors in each class smaller than the first feature threshold are selected, and the two corresponding weight matrices W and M are set to compute the within-class scatter matrix SW; similarly, a second feature threshold is set, the between-class vectors among different classes smaller than the second feature threshold are selected, and the two corresponding weight matrices W and M are set to obtain the between-class scatter matrix SB. If step S18 is computed in the manner of this paragraph, the numbers K1 and K2 need not be considered in advance; step S12 can therefore be omitted and step S14 performed directly after step S10, the rest of the flow being unchanged.

The feature-space transformation matrix established by the present invention adopts the strategy of embedding the nearest feature space into the Laplacian matrix, and the feature space obtained from the constructed feature-space transformation matrix serves as the image recognition model of the present invention: it not only retains the original topological information of the training samples but also expresses the linear variation between training samples as well as the class information. Fig. 4 shows the distribution of training samples in the feature space obtained by the method of the present invention; comparing Fig. 4 with Fig. 1, it can be seen that the image recognition model established by the present invention effectively separates training samples of different classes, and its discriminative ability is better than the prior art.

The present invention also provides an image recognition device using the image recognition model. As shown in Fig. 5, the device obtains a test image 12 with an image capturer 10, and an image simulation processor 14 projects this test image 12 into the image recognition model 16 established by the present invention, that is, into the feature space of the feature-space transformation matrix. Since this feature space is constructed from a set of training samples 18 of different classes, the image simulation processor 14 compares the test image 12 with the training samples 18 by the nearest point-to-point, point-to-line, point-to-plane, or point-to-high-dimensional-space distance; the calculator 10 thus computes the class to which the test image 12 should belong, and finally the recognition result of the test image is output through a display 20. If the time of the testing stage, i.e. the face recognition time, is to be shortened, the test image can be compared with the training samples by the nearest point-to-point distance in the feature space, saving face recognition time.
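A minimal sketch of the fast point-to-point comparison used in the testing stage might look as follows; the function name and arguments are ours, not the patent's reference numerals:

```python
import numpy as np

def classify_nearest(test_image, mu, w, train_images, train_labels):
    """Project the probe and all training samples through the learned
    transform w (mean mu subtracted first) and return the label of the
    nearest training projection -- the fast point-to-point comparison."""
    probe = (test_image - mu) @ w
    gallery = (train_images - mu) @ w
    d2 = np.sum((gallery - probe) ** 2, axis=1)
    return train_labels[int(np.argmin(d2))]

# Toy usage: identity transform, two well-separated classes.
mu = np.zeros(2)
w = np.eye(2)
train = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
print(classify_nearest(np.array([4.9, 5.2]), mu, w, train, labels))  # 1
```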

To further explain the construction of the feature-space transformation matrix of the present invention, first consider a particular training sample point y_i in the feature space. The distance from the point to a feature subspace can be defined as ∥ y_i − f^(P)(y_i) ∥, where f^(P) is a feature subspace formed by p samples; for example, when p = 2, f^(2) is a line, as shown in Fig. 6, and when p = 3, f^(3) is a plane, as shown in Fig. 7. Here f^(P)(y_i) denotes the projection point of y_i on that feature subspace, so for y_i there is a combinatorial number of p-sample subsets onto which it can be projected. The vectors formed by projecting points onto the feature subspaces can be used to compute scatter matrices; this approach is called NFS embedding (nearest feature space embedding). Two objective functions can then be established, as shown in equations (1) and (2):

F1 = Σ_i ∥ Σ w^(P)(y_i)(y_i − f^(P)(y_i)) ∥²  (1)

F2 = Σ_i Σ w^(P)(y_i) ∥ y_i − f^(P)(y_i) ∥²  (2)

the inner sums running over the candidate feature subspaces of y_i.

Here the weight value w^(P)(y_i) indicates whether a point is connected to its projection on a given feature subspace: w^(P)(y_i) is 1 or 0, where 1 means the projection vector is included in the computation and 0 means it is excluded. Equation (1) sums, for each point, the vectors formed with its corresponding projection points and then takes the square, whereas equation (2) squares the vector formed by each point and its corresponding projection point and then sums. For example, when p = 1, equation (1) is the objective function of locally linear embedding (LLE), while equation (2) is the objective function of the Laplacian-matrix locality preserving projection (LPP). To simplify the representation, the weight values w^(P)(y_i) can further be expressed as an N×N_p matrix; when p = 2, a combinatorial number of point-to-line projection vectors can be found for y_i, their weights are in general not equal, and weights for lines passing through y_i itself do not exist.

Next, the algorithm for building the Laplacian matrix from point-to-feature-subspace relations is described:

1. Nearest feature point embedding, p = 1

Under this condition, f^(1) degenerates to a single point. If the K nearest neighbors are selected, F1 in equation (1) becomes the standard LLE function, F1 = Σ_i ∥ y_i − Σ_j w_{i,j} y_j ∥²; and since the LLE function takes the same quadratic form as the LPP function, equation (1) can be expressed as w^T X L X^T w. The weight w^(P)(y_i) in equation (2) can be regarded as the LPP similarity function, e.g. the heat kernel exp(−∥x_i − x_j∥²/t), and equation (2) can likewise be expressed as w^T X L X^T w.
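The equivalence invoked here, that a sum of squared reconstruction residuals can be rewritten in the quadratic Laplacian form, can be checked numerically. The matrix M below is a random row-stochastic stand-in for the neighbor weights, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 6, 3
Y = rng.normal(size=(N, d))                 # projected samples, one per row

# Random row-stochastic weight matrix M with zero diagonal (sum_j M_ij = 1).
M = rng.random((N, N))
np.fill_diagonal(M, 0.0)
M /= M.sum(axis=1, keepdims=True)

# LLE-style objective: sum_i || y_i - sum_j M_ij y_j ||^2 ...
direct = sum(np.sum((Y[i] - M[i] @ Y) ** 2) for i in range(N))

# ... equals tr(Y^T L Y) with the Laplacian L = (I - M)^T (I - M),
# i.e. L = I - W with W = M + M^T - M^T M.
L = (np.eye(N) - M).T @ (np.eye(N) - M)
via_laplacian = float(np.trace(Y.T @ L @ Y))
assert np.isclose(direct, via_laplacian)
```

With Y = X^T w this is exactly the w^T X L X^T w form used in the text.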

2. Nearest feature line embedding, p = 2

As shown in Fig. 6, based on the point-to-line distance, the distance from y_i to the feature line passing through y_m and y_n can be expressed as ∥ y_i − f_{m,n}(y_i) ∥, where the projection point f_{m,n}(y_i) can be expressed as a linear combination of y_m and y_n, f_{m,n}(y_i) = t_{n,m} y_m + t_{m,n} y_n, with the parameter t_{m,n} = (y_i − y_m)^T (y_n − y_m)/(y_n − y_m)^T (y_n − y_m). The vector formed from y_i to f_{m,n}(y_i) can then be expressed by equation (3):

y_i − f_{m,n}(y_i) = y_i − t_{n,m} y_m − t_{m,n} y_n  (3)

Since 1 − t_{m,n} = t_{n,m}, the objective function F1 can be expressed as shown in equation (4):

F1 = Σ_i ∥ y_i − Σ_j M_{i,j} y_j ∥²  (4)

in which the matrices M and W of equation (4) are given by equations (5) and (6):

M_{i,m} = t_{n,m}, M_{i,n} = t_{m,n}, all other entries of row i being 0  (5)

W_{i,j} = (M + M^T − M^T M)_{i,j}  (6)

where t_{m,n} = (x_i − x_m)^T (x_n − x_m)/(x_n − x_m)^T (x_n − x_m) and Σ_j M_{i,j} = 1, so that equation (1) can be expressed in the form of a Laplacian matrix, w^T X L X^T w. For equation (2), K matrices can first be decomposed, each M(k), k = 1, ..., K, representing the point x_i and its k-th nearest feature line, where i, m, n are natural numbers from 1 to N with i ≠ m ≠ n; the two non-zero entries M_{i,n}(k) = t_{m,n} and M_{i,m}(k) = t_{n,m} are the values in M(k). Equation (2) can therefore be expressed as equation (7):

F2 = Σ_{k=1}^{K} w^T X L(k) X^T w  (7)

where W_{i,j}(k) = (M(k) + M(k)^T + M(k)^T M(k))_{i,j} and L(k) is the corresponding Laplacian. Equation (2) can thus also be expressed in the form of a Laplacian matrix, and it suffices to fill the entries of the corresponding matrix M with the correct values.

3. Nearest feature plane / high-dimensional space embedding, p = 3

Under this condition, y_i − f_{q,m,n}(y_i) denotes the vector formed by the point y_i and its projection point f_{q,m,n}(y_i) on the feature plane E_{q,m,n}, where the plane E_{q,m,n} is spanned by the three feature points y_q, y_m, and y_n as its basis, as shown in Fig. 7; the projection point on the plane can be obtained from equation (8):

f_{q,m,n}(y_i) = y_q + Y_{q,m,n} (Y_{q,m,n}^T Y_{q,m,n})^{-1} Y_{q,m,n}^T (y_i − y_q)  (8)

where Y_{q,m,n} = [(y_m − y_q) (y_n − y_q)] is a matrix of size d×2; since the three feature points y_q, y_m, and y_n are known, the combination coefficients [t_m t_n]^T = (Y_{q,m,n}^T Y_{q,m,n})^{-1} Y_{q,m,n}^T (y_i − y_q) can be obtained.

The projection point f_{q,m,n}(y_i) can then be expressed as a linear combination of the three feature points y_q, y_m, and y_n:

f_{q,m,n}(y_i) = t_q y_q + t_m y_m + t_n y_n  (9)

where t_q + t_m + t_n = 1. Similarly to the case p = 2, the vectors from a point y_i to its K neighboring feature planes can be summed and written in matrix form: the weight values in the M matrix are the [t_q t_m t_n]^T above, with t_q = 1 − t_m − t_n and X_{q,m,n} = [(x_m − x_q) (x_n − x_q)]. Once K has been selected and the weight values in the M matrix are given, F1 in equation (1) can be expressed in the Laplacian-matrix form w^T X L X^T w, and F2 in equation (2) can likewise be expressed as w^T X L X^T w.

Furthermore, when P > 3, the projection point of the feature point y_i onto the subspace basis can be expressed as:

f^(P)(y_i) = y_1 + Y (Y^T Y)^{-1} Y^T (y_i − y_1)  (10)

where Y = [y_2 − y_1, y_3 − y_1, ..., y_P − y_1] is a matrix of size d×(P−1) and the combination coefficients are (Y^T Y)^{-1} Y^T (y_i − y_1). The vector formed by the feature point y_i and the projection point can be expressed as y_i − f^(P)(y_i) = y_i − Σ_j M_{i,j} y_j; F1 and F2 can therefore finally be derived in the form of a Laplacian matrix.
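Both the plane (p = 3) and higher-dimensional (P > 3) cases are ordinary least-squares projections onto an affine subspace, which can be sketched generically; the helper below is our own illustration, not the patent's notation:

```python
import numpy as np

def feature_subspace_projection(yi, pts):
    """Project yi onto the affine subspace through the feature points
    in pts (one point per row). Three rows give the feature-plane
    case; more rows give the higher-dimensional case. The combination
    coefficients are obtained by least squares."""
    y1 = pts[0]
    B = (pts[1:] - y1).T                      # d x (P-1) direction matrix
    t, *_ = np.linalg.lstsq(B, yi - y1, rcond=None)
    return y1 + B @ t

# Plane through three points in 3-D; the residual is orthogonal to it.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
yi = np.array([0.3, 0.4, 2.0])
proj = feature_subspace_projection(yi, pts)
assert np.allclose(proj, [0.3, 0.4, 0.0])
```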

Once the nearest-feature-space conditions have been incorporated into the Laplacian matrix as described above, the scatter matrices that measure discriminative information can be obtained, and finally the feature-space transformation matrix, which separates images of different classes, as shown in Fig. 4, while the samples of each class still retain the topological information and linear variation that the training samples had in the original space. Therefore, simply comparing by the nearest-neighbor method in the feature space corresponding to the feature-space transformation matrix already yields very good discriminative ability, greatly saving the time conventionally required for comparison.

Referring to Figs. 8-11, which show the face recognition accuracy of the image recognition model of the present invention in low-dimensional spaces in comparison with four prior-art techniques: PCA+MFA denotes principal component analysis with marginal Fisher analysis; PCA+ONPDA denotes principal component analysis with orthogonal neighborhood preserving discriminant analysis; PCA+LPPface denotes principal component analysis with locality preserving projection; and PCA+OLPPface denotes principal component analysis with orthogonal locality preserving projection.

Moreover, the image recognition model of the present invention sets p to 2 (i.e., the point-to-line nearest distance) and sets both K_1 and K_2 to 10 to build the feature-space transformation matrix. Figs. 8-9 show the results obtained with training samples from the CMU database (Fig. 8: 6 training samples per group; Fig. 9: 9 training samples per group), and Figs. 10-11 show the results obtained with training samples from the IIS database (Fig. 10: 6 training samples per group; Fig. 11: 7 training samples per group).
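All four baselines above, like step (b) of the present method, begin by reducing dimensionality with principal component analysis to obtain the main transformation matrix. A minimal sketch of that step is shown below; the function name and toy data are illustrative assumptions, not the patent's code:

```python
import numpy as np

def pca_transform(X, k):
    """Return the top-k principal directions (the 'main transformation matrix').

    X: (n, d) training matrix, one flattened face image per row.
    """
    Xc = X - X.mean(axis=0)
    # SVD avoids forming the d x d covariance matrix explicitly.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T                         # d x k projection matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))               # 20 samples in a 50-dim space
W = pca_transform(X, 5)
Z = (X - X.mean(axis=0)) @ W                # samples in the reduced space
print(Z.shape)                              # (20, 5)
```

The nearest-feature-space embedding described above is then learned in this reduced space rather than on the raw pixels.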

From the above test results, the method of establishing the image recognition model of the present invention clearly offers excellent recognition accuracy. The present invention moves the nearest-feature-space embedding concept, originally used only in the test stage of face recognition, into the feature-space transformation matrix constructed in the training stage. Moreover, the invention can construct the feature-space conversion using the distance from a point to different feature subspaces (e.g., point, line, plane, or higher-dimensional subspace), expressed in Laplacian matrix form; thus not only is the topology of the original space retained, but the linear-variation relationships of the training samples in the original space are preserved as well, improving recognition accuracy. Furthermore, in the test stage only a simple point-to-point classifier is needed to identify the face under test, reducing the time normally spent on recognition in that stage.
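The simple point-to-point classifier used in the test stage can be illustrated as follows. This is a minimal sketch, assuming a pre-computed transformation matrix W and toy two-group data; the identity-based stand-in for W and the cluster parameters are assumptions:

```python
import numpy as np

def classify_nearest(test_vec, train_feats, train_labels, W):
    """Nearest-neighbor (point-to-point) classification in the transformed space.

    W: the learned feature-space transformation matrix (d x k).
    """
    z = test_vec @ W
    dists = np.linalg.norm(train_feats @ W - z, axis=1)
    return train_labels[int(np.argmin(dists))]

rng = np.random.default_rng(2)
W = np.eye(4)[:, :2]                        # stand-in for the learned matrix
a = rng.normal(0.0, 0.1, size=(5, 4))       # group 0, clustered near the origin
b = rng.normal(5.0, 0.1, size=(5, 4))       # group 1, clustered near (5, ..., 5)
train = np.vstack([a, b])
labels = np.array([0] * 5 + [1] * 5)
print(classify_nearest(np.full(4, 5.0), train, labels, W))   # 1
```

In the actual method, W is the feature-space transformation matrix obtained in training, so this single nearest-distance comparison suffices at test time.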

The embodiments described above merely illustrate the technical ideas and features of the present invention; their purpose is to enable those skilled in the art to understand and practice the invention, and they shall not limit the patent scope of the present invention. All equivalent changes or modifications made in accordance with the spirit disclosed by the present invention shall still be covered by the patent scope of the present invention.

10 ... Image capture device
12 ... Test image
14 ... Image simulation processor
16 ... Image recognition model
18 ... Training samples
20 ... Display

Fig. 1 is a schematic diagram of the feature-space conversion distribution of training samples in the prior art.
Fig. 2 is a flow chart of the steps of the method of establishing the image recognition model of the present invention.
Fig. 3(a) is a schematic diagram of intra-group variation.
Fig. 3(b) is a schematic diagram of inter-group variation.
Fig. 4 is a schematic diagram of the feature-space conversion distribution of training samples of the present invention.
Fig. 5 shows the image recognition device of the present invention.
Fig. 6 is a schematic diagram of the point-to-feature-subspace case with p = 2.
Fig. 7 is a schematic diagram of the point-to-feature-subspace case with p = 3.
Figs. 8-11 are schematic diagrams of the face-recognition accuracy results of the present invention and the prior art, respectively.

Claims (18)

1. A method of establishing an image recognition model, comprising: (a) collecting a plurality of training samples of a plurality of groups, and deciding a manner of shortest point-to-feature-subspace distance by which feature-space conversion is performed; (b) establishing a main transformation matrix of the training samples, and projecting the training samples into a reduced-dimension space; (c) computing, according to the decided manner of shortest distance from the point to the feature subspace, a plurality of intra-group projection points and a plurality of inter-group projection points of the training samples, to obtain a plurality of intra-group vectors and a plurality of inter-group vectors; (d) computing an intra-group variation matrix and an inter-group variation matrix from the intra-group vectors and the inter-group vectors by an equation for computing the intra-group and inter-group variation matrices, the intra-group variation matrix and the inter-group variation matrix being expressible in the form of Laplacian matrices, wherein the equation is of the form S = Σ_i w^(P)(y_i) (y_i − f^(P)(y_i)) (y_i − f^(P)(y_i))^T, where y_i is a training sample point, f^(P)(y_i) denotes the projection point of y_i on the feature subspace, and w^(P)(y_i) is 1 or 0, 1 indicating that the projection vector is included in the computation; (e) obtaining a variation transformation matrix from the intra-group variation matrix and the inter-group variation matrix; and (f) obtaining a feature-space transformation matrix from the main transformation matrix and the variation transformation matrix, for use as the image recognition model.
2. The method of establishing an image recognition model according to claim 1, wherein the training samples or the test image are face images.
3. The method of establishing an image recognition model according to claim 1, wherein in step (c), according to the manner of shortest distance from the point to the feature subspace, the projection points of the training samples within the same group and the projection points in different groups are computed, to obtain the plurality of intra-group vectors formed with the intra-group projection points and the plurality of inter-group vectors formed with the inter-group projection points.
4. The method of establishing an image recognition model according to claim 1, further comprising, after step (a), a step of deciding the number K_1 of intra-group vectors and the number K_2 of inter-group vectors.
5. The method of establishing an image recognition model according to claim 4, wherein step (c) further comprises sorting the intra-group vectors and the inter-group vectors by distance in descending order.
6. The method of establishing an image recognition model according to claim 5, wherein in step (d), the first K_1 intra-group vectors are selected from the intra-group vectors sorted by distance in descending order, and at least one corresponding weight matrix is set to compute the intra-group variation matrix.
7. The method of establishing an image recognition model according to claim 5, wherein in step (d), the first K_2 inter-group vectors are selected from the inter-group vectors sorted by distance in descending order, and at least one corresponding weight matrix is set to compute the inter-group variation matrix.
8. The method of establishing an image recognition model according to claim 1, wherein in step (d), the intra-group vectors whose vector distances within the same group are smaller than a first feature threshold are selected, and at least one corresponding weight matrix is set to compute the intra-group variation matrix.
9. The method of establishing an image recognition model according to claim 1, wherein in step (d), the inter-group vectors whose vector distances between different groups are smaller than a second feature threshold are selected, and at least one corresponding weight matrix is set to compute the inter-group variation matrix.
10. The method of establishing an image recognition model according to claim 1, wherein the feature subspace in step (a) comprises a point, a line, a plane, or a higher-dimensional subspace.
11. The method of establishing an image recognition model according to claim 1, wherein step (e) computes the variation transformation matrix by applying Fisher's Criterion to the obtained intra-group variation matrix and inter-group variation matrix.
12. The method of establishing an image recognition model according to claim 1, wherein in step (e) the variation transformation matrix w* is obtained from the inter-group variation matrix S_B and the intra-group variation matrix S_w by the relation w* = arg max_w (w^T S_B w) / (w^T S_w w).
13. The method of establishing an image recognition model according to claim 1, wherein in step (f) the feature-space transformation matrix is the inner product of the main transformation matrix and the variation transformation matrix.
14. The method of establishing an image recognition model according to claim 1, wherein the main transformation matrix is obtained by principal component analysis.
15. The method of establishing an image recognition model according to claim 1, wherein the image recognition model accepts at least one projected test image, and the group to which the test image belongs is identified according to the closest distance from the test image to the training samples.
16. An image recognition method using the image recognition model of claim 1, wherein the training samples and at least one test image are projected into the image recognition model, and the test image is compared with the training samples by nearest distance, so as to correctly identify the group to which the test image belongs.
17. The image recognition method according to claim 16, wherein the shortest point-to-feature-subspace distance is used to correctly identify the group of the test image.
18. The image recognition method according to claim 16, wherein the feature subspace is a point, a line, a plane, or a higher-dimensional subspace.
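Claims 11 and 12 describe deriving the variation transformation matrix from the two variation matrices via Fisher's Criterion, which reduces to a generalized eigenvalue problem. A minimal numerical sketch follows; the NumPy-based routine and the toy scatter matrices are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def fisher_directions(S_b, S_w, k):
    """Directions maximising the Fisher criterion (w^T S_b w) / (w^T S_w w),

    via the generalized eigenproblem S_b w = lambda S_w w, rewritten as a
    standard eigenproblem of inv(S_w) @ S_b (S_w assumed invertible).
    """
    evals, evecs = np.linalg.eig(np.linalg.solve(S_w, S_b))
    order = np.argsort(evals.real)[::-1]    # largest Fisher ratios first
    return evecs[:, order[:k]].real

# Toy scatter matrices: inter-group spread dominates along the first axis.
S_b = np.diag([4.0, 0.1])                   # inter-group variation matrix
S_w = np.eye(2)                             # intra-group variation matrix
W = fisher_directions(S_b, S_w, 1)
# The top direction aligns with the axis of largest between/within ratio.
print(np.allclose(np.abs(W.ravel()), [1.0, 0.0]))   # True
```

Composing these directions with the PCA main transformation matrix, as in step (f), yields the final feature-space transformation matrix.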
TW98135980A 2009-10-23 2009-10-23 Image recognition model and the image recognition method using the image recognition model TWI419058B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98135980A TWI419058B (en) 2009-10-23 2009-10-23 Image recognition model and the image recognition method using the image recognition model


Publications (2)

Publication Number Publication Date
TW201115482A TW201115482A (en) 2011-05-01
TWI419058B true TWI419058B (en) 2013-12-11

Family

ID=44934484

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98135980A TWI419058B (en) 2009-10-23 2009-10-23 Image recognition model and the image recognition method using the image recognition model

Country Status (1)

Country Link
TW (1) TWI419058B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI712002B (en) * 2018-11-27 2020-12-01 國立交通大學 A 3d human face reconstruction method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI222031B (en) * 2001-12-03 2004-10-11 Microsoft Corp Automatic detection and tracking of multiple individuals using multiple cues
TW200707313A (en) * 2005-03-18 2007-02-16 Koninkl Philips Electronics Nv Method of performing face recognition
TW200739432A (en) * 2006-04-12 2007-10-16 Univ Nat Cheng Kung A method for face verification
TW200813858A (en) * 2006-03-31 2008-03-16 Toshiba Kk Face image read apparatus and method, and entrance/exit management system
TW200832237A (en) * 2007-01-19 2008-08-01 Univ Nat Chiao Tung Human activity recognition method by combining temple posture matching and fuzzy rule reasoning


Also Published As

Publication number Publication date
TW201115482A (en) 2011-05-01

Similar Documents

Publication Publication Date Title
Vieira et al. Detecting siblings in image pairs
WO2016150240A1 (en) Identity authentication method and apparatus
CN103996052B (en) Three-dimensional face gender classification method based on three-dimensional point cloud
JP6270182B2 (en) Attribute factor analysis method, apparatus, and program
WO2010035659A1 (en) Information processing apparatus for selecting characteristic feature used for classifying input data
JP4098021B2 (en) Scene identification method, apparatus, and program
CN113408605A (en) Hyperspectral image semi-supervised classification method based on small sample learning
CN110188763B (en) Image significance detection method based on improved graph model
CN106971158B (en) A kind of pedestrian detection method based on CoLBP symbiosis feature Yu GSS feature
CN110543906B (en) Automatic skin recognition method based on Mask R-CNN model
JPWO2019026104A1 (en) Information processing apparatus, information processing program, and information processing method
Wang et al. An unequal deep learning approach for 3-D point cloud segmentation
US8488873B2 (en) Method of computing global-to-local metrics for recognition
Chang et al. Intensity rank estimation of facial expressions based on a single image
CN111339960A (en) Face recognition method based on discrimination low-rank regression model
CN111091129B (en) Image salient region extraction method based on manifold ordering of multiple color features
CN110188864B (en) Small sample learning method based on distribution representation and distribution measurement
CN108491883B (en) Saliency detection optimization method based on conditional random field
CN112967296B (en) Point cloud dynamic region graph convolution method, classification method and segmentation method
Li et al. Unsupervised domain adaptation via discriminative feature learning and classifier adaptation from center-based distances
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
CN110472495B (en) Deep learning face recognition method based on graphic reasoning global features
TWI419058B (en) Image recognition model and the image recognition method using the image recognition model
Hettiarachchi et al. Multi-manifold-based skin classifier on feature space Voronoï regions for skin segmentation
Jena et al. Elitist TLBO for identification and verification of plant diseases