WO2008147039A1 - System and method for recognizing images using t-test - Google Patents
System and method for recognizing images using t-test
- Publication number
- WO2008147039A1 (PCT/KR2008/001665)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- situations
- module
- recognizers
- face images
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/254—Fusion techniques of classification results, e.g. of results related to same input data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
Abstract
Disclosed herein is a system and method for recognizing images using t-tests. The system includes an image recognition module, an image normalization module, an image pre-processing module, an image situation classification module, an image situation recognition module, and a recognizer fusion module. The image recognition module loads face images. The image normalization module normalizes the loaded face images. The image pre-processing module removes noise from the normalized face images and generates vector data. The image situation classification module classifies image situations into a user-specified number of types through clustering using a K-average clustering method. The image situation recognition module calculates the average values of the classified image situations based on the image situations and center points using the Euclidean distance formula, and recognizes the image situations. The recognizer fusion module selects recognizers suitable for the individual image situations through classification of the recognizers using t-tests based on the recognized image situations.
Description
SYSTEM AND METHOD FOR RECOGNIZING IMAGES USING T-TEST
Technical Field
The present invention relates to a system for recognizing images, and more particularly to a system for recognizing images using t-tests, which classifies recognizers using t-tests based on recognized image situations, so that recognizers capable of obtaining optimum recognition results suitable for individual image situations can be selected, and so that face images having different image situations can be efficiently recognized through the classification of the image situations of the face images.
Background Art
Pattern recognition refers to a process of performing mapping to class-membership space. That is, it is a process of extracting important features from data acquired from the outside, classifying the closest templates based on the extracted features, and obtaining a final result.
Accordingly, it is very important in such pattern recognition to collect data suitable for a desired application field and to select recognizers for performing recognition in consideration of the characteristics of the collected individual data. Basically, most recognition methods using recognizers employ a single recognizer structure, in which input data 'x' is classified into an output 'y' through a single recognizer, as shown in FIG. 1.
However, since such a single recognizer has a problem in that the output result thereof varies with the type of data or the environment, a lot of research related to a multiple recognizer system capable of obtaining desired recognition results has been carried out recently.
However, as shown in the attached FIG. 2, the main trend of the research is to increase the recognition rate by fusing multiple recognizers into a single system and combining their output results. In the case in which a plurality of recognizers is selectively used, the research does not present solutions as to how to fuse the individual results into a final result, but is limited to determining which recognizers are suitable for individual data regions. Accordingly, research into methods of efficiently classifying data, and of classifying data so that the classified data has optimum similarities, is not sufficient.
Furthermore, because the situation under which the individual data regions were classified is not considered, the accuracy of recognition can be guaranteed only for data regions classified under uniform conditions, so that the range of application of the research is limited.
Disclosure of the Invention
Accordingly, the present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to provide a system for recognizing images using t-tests, which can classify recognizers using t-tests based on image situations recognized through the image situation recognition module, so that recognizers capable of obtaining optimal recognition results suitable for individual image situations can be selected, and so that face images having different image situations can be efficiently recognized through the classification of the image situations of the face images.
In order to accomplish the above object, the present invention provides a system for recognizing images using t-tests, the system including an image recognition module for loading face images; an image normalization module for normalizing the loaded face images; an image pre-processing module for removing noise from the normalized face images and generating vector data; an image situation classification module for classifying image situations, included in the vector data, into a user-specified number of types through clustering using a K-average clustering method; an image situation recognition module for calculating the average values of the classified image situations based on the image situations and center points using a Euclidean distance formula, and recognizing the image situations based on the calculated average values; and a recognizer fusion module for selecting recognizers suitable for the individual image situations through classification of the recognizers using t-tests based on the recognized image situations.
Additionally, the present invention provides a method of recognizing face images using t-tests in an image classification method using recognizers, including a first step of loading face images; a second step of normalizing the loaded face images; a third step of removing noise from the normalized face images and generating vector data; a fourth step of classifying image situations, included in the vector data, into a user-specified number of types through clustering using a K-average clustering method; a fifth step of calculating the average values of the classified image situations based on the image situations and center points using a Euclidean distance formula, and recognizing the image situations based on the calculated average values; and a sixth step of selecting recognizers suitable for the individual image situations through classification of the recognizers using t-tests based on the recognized image situations.
Brief Description of the Drawings
The above and other objects, features and other advantages of the present invention will be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a diagram showing the structure of a conventional single recognizer;
FIG. 2 is a flow diagram showing the fusion of multiple recognizers in which single recognizers are fused together;
FIG. 3 is a diagram showing the construction of a system for recognizing images using t-tests according to an embodiment of the present invention;
FIG. 4 is a diagram showing the relationships among clusters according to an embodiment of the present invention;
FIG. 5 is a diagram showing the similarity tables of individual clusters according to an embodiment of the present invention;
FIG. 6 is a diagram showing the classification of image situations through an image situation classification module according to an embodiment of the present invention;
FIG. 7 is a diagram showing the recognition rates of individual recognizers according to an embodiment of the present invention;
FIG. 8 is a graph showing the results of the combination of two recognizers CLSA and CLSB according to an embodiment of the present invention;
FIG. 9 is graphs showing the results of (PA-SA) , (PB~SB) , and (PF-SF) , obtained through Equation 4 according to an embodiment of the present invention; and
FIG. 10 is a flowchart showing a method of recognizing images using t-tests according to an embodiment of the present invention.
Best Mode for Carrying Out the Invention
The features and advantages of the present invention will become more apparent from the following detailed description with reference to the attached drawings. It is to be noted that, if it is determined that detailed descriptions of well-known functions related to the present invention would obscure the gist of the present invention, those descriptions will be omitted below.
Referring to FIG. 3, an image recognition system 100 using t-tests according to the present invention includes an image recognition module 110, an image normalization module 120, an image pre-processing module 130, an image situation classification module 140, an image situation recognition module 150, and a recognizer fusion module 160.
In detail, the image recognition module 110 loads face images of a predetermined size from an image capture device such as a digital camcorder. The image normalization module 120 normalizes the loaded face images. Here, "normalization" refers to resizing the loaded images to a 128 x 128 size, but the present invention is not limited thereto.
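As a rough illustration, the normalization step above might be sketched as follows; the nearest-neighbor resizing used here is an assumption, since the patent does not specify the interpolation method:

```python
def normalize_image(img, size=128):
    """Nearest-neighbor resize of a grayscale image (list of rows) to
    size x size. The patent fixes 128 x 128 but notes the invention is
    not limited to that size."""
    h, w = len(img), len(img[0])
    return [[img[r * h // size][c * w // size] for c in range(size)]
            for r in range(size)]

small = [[0, 64], [128, 255]]          # a 2 x 2 toy "face image"
norm = normalize_image(small, size=4)  # scale up to 4 x 4 for illustration
print(len(norm), len(norm[0]))         # 4 4
```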
The image pre-processing module 130 removes noise from the normalized images received from the image normalization module 120 through histogram equalization, and then generates vector data.
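A minimal sketch of the histogram-equalization pre-processing, operating on a flattened brightness vector; the exact vector-data format produced by module 130 is not given in the source, so the flat list of 0-255 values here is an assumption:

```python
def equalize(img, levels=256):
    """Histogram equalization of a grayscale image given as a flat list
    of 0..255 brightness values, mirroring the pre-processing module."""
    hist = [0] * levels
    for p in img:
        hist[p] += 1
    # cumulative histogram (CDF)
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(img)
    # map each pixel through the normalized cumulative histogram
    return [round((levels - 1) * cdf[p] / n) for p in img]

vec = equalize([0, 0, 128, 255])
print(vec)
```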
Referring to the attached FIG. 4, the image situation classification module 140 receives the noise-free vector data from the image pre-processing module 130, and classifies image situations into a user-specified number of types through clustering using a K-average clustering method. Here, the "image situations" carry information that enables individual images to be distinguished from each other based on differences in brightness (light and shade), and may be understood as the illumination situations of the images, obtained by assigning each image brightness (light and shade) values ranging from 0 to 255. The purpose of this is to prevent a reduction in face recognition performance attributable to differences between the image situations of the face images. Referring to the attached FIG. 5, the clustering converts the similarities output from the individual classifiers into probabilities of matching the right person using the following Equation 1, and then determines whether the results output from the individual classifiers are true or false based on these probabilities. Image situations are classified through clustering, as shown in the attached FIG. 6.
D(x) = {ω_m, d_m(x)}, where d_m(x) = max_j d_j(x)    (1)
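The K-average clustering step described above can be sketched as plain K-means over brightness vectors; this is a generic sketch, not the patent's exact procedure, and the toy data and the fixed seed are assumptions:

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Plain K-means ("K-average clustering") over brightness vectors,
    grouping image situations into k user-specified types."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    for _ in range(iters):
        # assign each vector to its nearest center (squared Euclidean distance)
        groups = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(v, centers[i])))
            groups[j].append(v)
        # recompute each center as the mean of its assigned vectors
        for i, g in enumerate(groups):
            if g:
                centers[i] = [sum(col) / len(g) for col in zip(*g)]
    return centers, groups

# two obvious brightness "situations": dark images vs. bright images
data = [[10, 12], [11, 9], [200, 210], [205, 198]]
centers, groups = kmeans(data, k=2)
print(sorted(len(g) for g in groups))  # [2, 2]
```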
The image situation recognition module 150 calculates average values based on image situations and center points using the Euclidean distance formula based on the image situations classified through the image situation classification module 140, and recognizes the image situations based on the calculated average values .
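The recognition step above amounts to a nearest-centroid assignment by Euclidean distance; a minimal sketch, with the situation centers assumed to come from the clustering stage:

```python
import math

def recognize_situation(vector, centers):
    """Assign an input image vector to the closest situation cluster
    using plain Euclidean distance, as the recognition module does."""
    dists = [math.dist(vector, c) for c in centers]
    return dists.index(min(dists))

centers = [[10.0, 10.0], [200.0, 200.0]]           # dark vs. bright centers
print(recognize_situation([15.0, 12.0], centers))  # 0  (dark situation)
```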
As shown in the attached FIG. 7, an image recognizer performs differently depending on the type of data used, so the results of recognition by the individual recognizers differ; it can be seen that the best performance is obtained by averaging the post-probabilities output from the individual classifiers.
The recognizer fusion module 160 selects recognizers suitable for the image situations recognized through the image situation recognition module 150, through the classification of recognizers using t-tests. This selection of recognizers is performed as follows. First, it is assumed that there exist two image recognizers CLS_A and CLS_B. CLS_A is defined as a recognition-rate distribution P_A ~ N(P̄_A, σ_A), and its face situation distribution is defined as F_A ~ N(F̄_A, σ_A). CLS_B is defined as an image situation distribution P_B ~ N(P̄_B, σ_B). CLS_A and CLS_B are combined, and the standard deviation between the recognition rates of the two recognizers is obtained using the following Equation 2:
δ_F = √(a²δ_A² + b²δ_B²)    (2)
The attached FIG. 8 shows the results of the combination of the two recognizers, and the results are obtained through the following Equation 3:
S_F = √(a²S_A² + b²S_B²)    (3)
The attached FIG. 9 shows the recognition rates (P_A − S_A), (P_B − S_B), and (P_F − S_F) of the individual recognizers, which are obtained through the following Equation 4. It can be seen that the recognition rate (P_F − S_F) of a recognizer CLS_F, into which the two recognizers CLS_A and CLS_B are combined, is higher than the best recognition rate (P_A − S_A) of the recognizer CLS_A, as well as the second best recognition rate (P_B − S_B) of the recognizer CLS_B.
aP_A + bP_B − √(a²S_A² + b²S_B²) > P_A − S_A
P_F − S_F > P_A − S_A, where P_F = aP_A + bP_B    (4)
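Equations 2-4 can be checked numerically: fusion helps when the combined deviation shrinks faster than the mean recognition rate drops. A small sketch, where the mixing weights a and b and the sample numbers are assumptions for illustration:

```python
import math

def fused_worst_case(p_a, s_a, p_b, s_b, a=0.5, b=0.5):
    """Worst-case recognition rate P_F - S_F of the fused recognizer
    CLS_F, per Equations 2-4; a and b are hypothetical mixing weights."""
    p_f = a * p_a + b * p_b                             # P_F = aP_A + bP_B
    s_f = math.sqrt(a ** 2 * s_a ** 2 + b ** 2 * s_b ** 2)  # Equation 3
    return p_f - s_f

best = 0.90 - 0.04  # P_A - S_A of the stronger recognizer CLS_A
print(fused_worst_case(0.90, 0.04, 0.88, 0.04) > best)  # True
```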
The distance between the recognizer CLS_A, exhibiting the best recognition rate (P_A − S_A), and the recognizer CLS_B, exhibiting the second best recognition rate (P_B − S_B), is obtained using the following Equation 5, and the standard deviation of the distance is defined as Δ < 0.6S_A.
The recognizer fusion module 160 selects a recognizer CLS_F, into which the two recognizers CLS_A and CLS_B are combined, based on the results of calculation using the following Equation 6:
Here, t(0.05, N−1) corresponds to a confidence level of 0.95, and t(0.05, N−1) = 1.96 is used if N > 100.
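Since the body of Equation 6 is not legible in the source, the following is only a hedged sketch of a one-sample t-test decision using the critical value named above; the interface and the decision rule (accept fusion when the difference between the two recognizers' rates is not significant at the 0.05 level) are assumptions:

```python
import math

def select_fused(diff_mean, diff_std, n, t_crit=1.96):
    """Hypothetical fusion decision: compute the t statistic of the mean
    distance between the two recognizers' recognition rates and compare
    it against t(0.05, N-1) ~ 1.96 for N > 100, as stated in the text."""
    t = diff_mean / (diff_std / math.sqrt(n))
    return abs(t) < t_crit  # True -> select the fused recognizer CLS_F

print(select_fused(diff_mean=0.001, diff_std=0.02, n=120))  # True
```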
Recognizers that are suitable for the face situations recognized through the image situation recognition module 150 can be selected based on the recognition rates obtained by the individual recognizers through the classification of the recognizers using t-tests, as described above.
Next, a method of recognizing images using t-tests according to an embodiment of the present invention will be described below. Referring to FIG. 10, the image recognition module 110 loads face images from an image-capture device at step S110, and the image normalization module 120 normalizes the loaded face images at step S120. The image pre-processing module 130 removes noise from the normalized face images through histogram equalization, and generates vector data at step S130.
The image situation classification module 140 classifies image situations into a user-specified number of types through clustering using a K-average clustering method based on the noise-free vector data at step S140.
Thereafter, the image situation recognition module 150 calculates the average values of the classified image situations based on the image situations and center points using the Euclidean distance formula, and recognizes the image situations based on the calculated average values at step S150.
The recognizer fusion module 160 selects recognizers suitable for individual image situations through the classification of recognizers using t-tests based on the recognized image situations at step S160.
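The six steps S110-S160 above can be tied together as a minimal orchestration sketch; the module interfaces (plain functions passed in a dict) are hypothetical and stand in for the modules 110-160:

```python
def recognize(face_image, modules):
    """Minimal orchestration of steps S110-S160; each stage is a plain
    function supplied by the caller (hypothetical interfaces)."""
    img = modules["normalize"](face_image)   # S120: normalize loaded image
    vec = modules["preprocess"](img)         # S130: denoise, make vector data
    situation = modules["classify"](vec)     # S140-S150: cluster + recognize
    return modules["fuse"](situation)        # S160: pick suitable recognizer

modules = {
    "normalize": lambda img: img,
    "preprocess": lambda img: [p for row in img for p in row],
    "classify": lambda vec: 0,
    "fuse": lambda s: f"recognizer-for-situation-{s}",
}
print(recognize([[0, 255]], modules))  # recognizer-for-situation-0
```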
Industrial Applicability
According to the above-described present invention, recognizers can be classified using t-tests based on image situations recognized through the image situation recognition module, so that recognizers capable of obtaining optimal recognition results suitable for the individual image situations can be selected, and so that face images having different image situations can be efficiently recognized through the classification of the image situations of the face images.
Although the preferred embodiments of the present invention have been disclosed for illustrative purposes, those skilled in the art will appreciate that various modifications, additions and substitutions are possible, without departing from the scope and spirit of the invention as disclosed in the accompanying claims .
Claims
1. A system for recognizing images using t-tests, the system comprising: an image recognition module for loading face images; an image normalization module for normalizing the loaded face images; an image pre-processing module for removing noise from the normalized face images and generating vector data; an image situation classification module for classifying image situations, included in the vector data, into a user-specified number of types through clustering using a K-average clustering method; an image situation recognition module for calculating average values of the classified image situations based on the image situations and center points using a Euclidean distance formula, and recognizing the image situations based on the calculated average values; and a recognizer fusion module for selecting recognizers suitable for the individual image situations through classification of the recognizers using t-tests based on the recognized image situations.
2. The system as set forth in claim 1, wherein the image situations are obtained by setting the individual images to brightness values ranging from 0 to 255.
3. A method of recognizing face images using t-tests in an image classification method using recognizers, comprising: a first step for loading face images; a second step for normalizing the loaded face images; a third step for removing noise from the normalized face images, and generating vector data; a fourth step for classifying image situations, included in the vector data, into a user-specified number of types through clustering using a K-average clustering method; a fifth step for calculating average values of the classified image situations based on the image situations and center points using a Euclidean distance formula, and recognizing image situations based on the calculated average values ; and a sixth step for selecting recognizers suitable for the individual image situations through classification of the recognizers using t-tests based on the recognized image situations .
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2007-0050779 | 2007-05-25 | ||
KR1020070050779A KR100870724B1 (en) | 2007-05-25 | 2007-05-25 | System for detecting image using t-test and method therefor |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2008147039A1 true WO2008147039A1 (en) | 2008-12-04 |
Family
ID=40075235
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2008/001665 WO2008147039A1 (en) | 2007-05-25 | 2008-03-25 | System and method for recognizing images using t-test |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR100870724B1 (en) |
WO (1) | WO2008147039A1 (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101984576A (en) * | 2010-10-22 | 2011-03-09 | 北京工业大学 | Method and system for authenticating anonymous identity based on face encryption |
CN102156871A (en) * | 2010-02-12 | 2011-08-17 | 中国科学院自动化研究所 | Image classification method based on category correlated codebook and classifier voting strategy |
ES2432479R1 (en) * | 2012-06-01 | 2014-05-28 | Universidad De Las Palmas De Gran Canaria | Method for the identification and automatic classification of arachnid species through their spider webs |
CN104036254A (en) * | 2014-06-20 | 2014-09-10 | 成都凯智科技有限公司 | Face recognition method |
DE102015200433A1 (en) * | 2015-01-14 | 2016-07-14 | Bayerische Motoren Werke Aktiengesellschaft | Method and apparatus for reducing the testing effort in the evaluation of an object recognition system |
DE102015200434A1 (en) * | 2015-01-14 | 2016-07-14 | Bayerische Motoren Werke Aktiengesellschaft | Method and apparatus for improving object recognition in different lighting situations |
DE102015200437A1 (en) * | 2015-01-14 | 2016-07-14 | Bayerische Motoren Werke Aktiengesellschaft | Method and device for determining the confidence of an object recognition |
CN109977803A (en) * | 2019-03-07 | 2019-07-05 | 北京超维度计算科技有限公司 | A kind of face identification method based on Kmeans supervised learning |
Families Citing this family (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9635285B2 (en) | 2009-03-02 | 2017-04-25 | Flir Systems, Inc. | Infrared imaging enhancement with fusion |
US10244190B2 (en) | 2009-03-02 | 2019-03-26 | Flir Systems, Inc. | Compact multi-spectrum imaging with fusion |
US9473681B2 (en) | 2011-06-10 | 2016-10-18 | Flir Systems, Inc. | Infrared camera system housing with metalized surface |
US10757308B2 (en) | 2009-03-02 | 2020-08-25 | Flir Systems, Inc. | Techniques for device attachment with dual band imaging sensor |
US9208542B2 (en) | 2009-03-02 | 2015-12-08 | Flir Systems, Inc. | Pixel-wise noise reduction in thermal images |
US9674458B2 (en) | 2009-06-03 | 2017-06-06 | Flir Systems, Inc. | Smart surveillance camera systems and methods |
US9235876B2 (en) | 2009-03-02 | 2016-01-12 | Flir Systems, Inc. | Row and column noise reduction in thermal images |
US9517679B2 (en) | 2009-03-02 | 2016-12-13 | Flir Systems, Inc. | Systems and methods for monitoring vehicle occupants |
US9451183B2 (en) | 2009-03-02 | 2016-09-20 | Flir Systems, Inc. | Time spaced infrared image enhancement |
US9756264B2 (en) | 2009-03-02 | 2017-09-05 | Flir Systems, Inc. | Anomalous pixel detection |
US9843742B2 (en) | 2009-03-02 | 2017-12-12 | Flir Systems, Inc. | Thermal image frame capture using de-aligned sensor array |
US9986175B2 (en) | 2009-03-02 | 2018-05-29 | Flir Systems, Inc. | Device attachment with infrared imaging sensor |
USD765081S1 (en) | 2012-05-25 | 2016-08-30 | Flir Systems, Inc. | Mobile communications device attachment with camera |
US9998697B2 (en) | 2009-03-02 | 2018-06-12 | Flir Systems, Inc. | Systems and methods for monitoring vehicle occupants |
US9948872B2 (en) | 2009-03-02 | 2018-04-17 | Flir Systems, Inc. | Monitor and control systems and methods for occupant safety and energy efficiency of structures |
US9819880B2 (en) | 2009-06-03 | 2017-11-14 | Flir Systems, Inc. | Systems and methods of suppressing sky regions in images |
US9716843B2 (en) | 2009-06-03 | 2017-07-25 | Flir Systems, Inc. | Measurement device for electrical installations and related methods |
US10091439B2 (en) | 2009-06-03 | 2018-10-02 | Flir Systems, Inc. | Imager with array of multiple infrared imaging modules |
US9843743B2 (en) | 2009-06-03 | 2017-12-12 | Flir Systems, Inc. | Infant monitoring systems and methods using thermal imaging |
US9756262B2 (en) | 2009-06-03 | 2017-09-05 | Flir Systems, Inc. | Systems and methods for monitoring power systems |
US9292909B2 (en) | 2009-06-03 | 2016-03-22 | Flir Systems, Inc. | Selective image correction for infrared imaging devices |
US9848134B2 (en) | 2010-04-23 | 2017-12-19 | Flir Systems, Inc. | Infrared imager with integrated metal layers |
US9207708B2 (en) | 2010-04-23 | 2015-12-08 | Flir Systems, Inc. | Abnormal clock rate detection in imaging sensor arrays |
US9706138B2 (en) | 2010-04-23 | 2017-07-11 | Flir Systems, Inc. | Hybrid infrared sensor array having heterogeneous infrared sensors |
CN103748867B (en) | 2011-06-10 | 2019-01-18 | 菲力尔系统公司 | Low-power consumption and small form factor infrared imaging |
US9706137B2 (en) | 2011-06-10 | 2017-07-11 | Flir Systems, Inc. | Electrical cabinet infrared monitor |
CN103828343B (en) | 2011-06-10 | 2017-07-11 | 菲力尔系统公司 | Based on capable image procossing and flexible storage system |
US10079982B2 (en) | 2011-06-10 | 2018-09-18 | Flir Systems, Inc. | Determination of an absolute radiometric value using blocked infrared sensors |
US9143703B2 (en) | 2011-06-10 | 2015-09-22 | Flir Systems, Inc. | Infrared camera calibration techniques |
US9235023B2 (en) | 2011-06-10 | 2016-01-12 | Flir Systems, Inc. | Variable lens sleeve spacer |
US10169666B2 (en) | 2011-06-10 | 2019-01-01 | Flir Systems, Inc. | Image-assisted remote control vehicle systems and methods |
CN103875235B (en) | 2011-06-10 | 2018-10-12 | 菲力尔系统公司 | Nonuniformity Correction for infreared imaging device |
US9961277B2 (en) | 2011-06-10 | 2018-05-01 | Flir Systems, Inc. | Infrared focal plane array heat spreaders |
US9058653B1 (en) | 2011-06-10 | 2015-06-16 | Flir Systems, Inc. | Alignment of visible light sources based on thermal images |
US9900526B2 (en) | 2011-06-10 | 2018-02-20 | Flir Systems, Inc. | Techniques to compensate for calibration drifts in infrared imaging devices |
US9509924B2 (en) | 2011-06-10 | 2016-11-29 | Flir Systems, Inc. | Wearable apparatus with integrated infrared imaging module |
US10841508B2 (en) | 2011-06-10 | 2020-11-17 | Flir Systems, Inc. | Electrical cabinet infrared monitor systems and methods |
US10051210B2 (en) | 2011-06-10 | 2018-08-14 | Flir Systems, Inc. | Infrared detector array with selectable pixel binning systems and methods |
US10389953B2 (en) | 2011-06-10 | 2019-08-20 | Flir Systems, Inc. | Infrared imaging device having a shutter |
US9811884B2 (en) | 2012-07-16 | 2017-11-07 | Flir Systems, Inc. | Methods and systems for suppressing atmospheric turbulence in images |
US9973692B2 (en) | 2013-10-03 | 2018-05-15 | Flir Systems, Inc. | Situational awareness by compressed display of panoramic views |
US11297264B2 (en) | 2014-01-05 | 2022-04-05 | Teledyne FLIR, LLC | Device attachment with dual band imaging sensor |
CN109376693A (en) * | 2018-11-22 | 2019-02-22 | 四川长虹电器股份有限公司 | Method for detecting human face and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010031073A1 (en) * | 2000-03-31 | 2001-10-18 | Johji Tajima | Face recognition method, recording medium thereof and face recognition device |
US6539352B1 (en) * | 1996-11-22 | 2003-03-25 | Manish Sharma | Subword-based speaker verification with multiple-classifier score fusion weight and threshold adaptation |
US20030063780A1 (en) * | 2001-09-28 | 2003-04-03 | Koninklijke Philips Electronics N.V. | System and method of face recognition using proportions of learned model |
KR20060063599A (en) * | 2004-12-07 | 2006-06-12 | 한국전자통신연구원 | User recognition system and method therefor |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3530363B2 (en) | 1997-11-26 | 2004-05-24 | 日本電信電話株式会社 | Recognition model generation method and image recognition method |
JP4121808B2 (en) | 2002-08-29 | 2008-07-23 | 三菱電機株式会社 | Dictionary compression device |
- 2007-05-25: KR application KR1020070050779A filed (patent KR100870724B1), not active, IP right cessation
- 2008-03-25: WO application PCT/KR2008/001665 filed (publication WO2008147039A1), active, application filing
Also Published As
Publication number | Publication date |
---|---|
KR100870724B1 (en) | 2008-11-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2008147039A1 (en) | System and method for recognizing images using t-test | |
US8787629B2 (en) | Image processing based on line-of-sight of a person | |
JP5174445B2 (en) | Computer-implemented video scene boundary detection method | |
US8880566B2 (en) | Assembler and method thereof for generating a complex signature of an input multimedia data element | |
US10706549B2 (en) | Iterative method for salient foreground detection and multi-object segmentation | |
US20200151434A1 (en) | Face image retrieval methods and systems, photographing apparatuses, and computer storage media | |
US9773189B2 (en) | Recognition apparatus and recognition method | |
US8150169B2 (en) | System and method for object clustering and identification in video | |
JP5385759B2 (en) | Image processing apparatus and image processing method | |
JP6448325B2 (en) | Image processing apparatus, image processing method, and program | |
US9336433B1 (en) | Video face recognition | |
EP3149611A1 (en) | Learning deep face representation | |
JP4098021B2 (en) | Scene identification method, apparatus, and program | |
US20140133743A1 (en) | Method, Apparatus and Computer Readable Recording Medium for Detecting a Location of a Face Feature Point Using an Adaboost Learning Algorithm | |
US9349193B2 (en) | Method and apparatus for moving object detection using principal component analysis based radial basis function network | |
Willamowski et al. | Probabilistic automatic red eye detection and correction | |
US9299008B2 (en) | Unsupervised adaptation method and automatic image classification method applying the same | |
CN115311262A (en) | Printed circuit board defect identification method | |
CN102722732B (en) | Image set matching method based on data second order static modeling | |
JP5755046B2 (en) | Image recognition apparatus, image recognition method, and program | |
Yi et al. | Long-range hand gesture recognition with joint ssd network | |
US7734096B2 (en) | Method and device for discriminating obscene video using time-based feature value | |
US20120201470A1 (en) | Recognition of objects | |
JP4447602B2 (en) | Signal detection method, signal detection system, signal detection processing program, and recording medium recording the program | |
JP2006133941A (en) | Image processing device, image processing method, image processing program, and portable terminal |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 08723701; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: PCT application non-entry in European phase | Ref document number: 08723701; Country of ref document: EP; Kind code of ref document: A1 |