WO2011088520A1 - Identifying matching images - Google Patents
- Publication number
- WO2011088520A1 (PCT/AU2011/000071)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- images
- resolution
- image
- underlying
- probe image
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/285—Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
Definitions
- the disclosure concerns identifying candidate matching images to a probe image, such as in face recognition systems.
- the disclosure concerns matching images having different resolutions.
- Aspects include methods, computer systems and software.
- underlying resolution is taken to mean the inherent resolution or quality of the image, which is the amount of specific detail/optical information in the image. It represents the finest detail discernable in the image. It is not to be understood as a measure of the file size of the image or dots per inch (DPI) of the image.
- the size of the image file is not by itself a suitable measure of underlying resolution, for example, the image may have poor optics or features in the image may not be discernable.
- Fig. 1 The process performed by a typical face recognition system is shown in Fig. 1.
- the aim is to identify the person whose face is represented in the probe image 8 by comparing the probe image to a set of gallery images 14.
- Each image in the set of gallery images 14 includes the face of a person whose identity is known.
- the size of the probe image 8 and gallery images 14 must be the same prior to feature extraction [3].
- the images are normally resized during pre-processing 10 to a common intermediate format (IF) size (e.g. small sized images are up-scaled to this IF size while large sized images are downscaled to this IF size).
- IF intermediate format
- the face matching method is previously tuned to work with that particular IF image size. This face matching method 12 is then applied by comparing each probe image 8 to each of the gallery images 14 to identify candidate matching images in the set of gallery images 14. Face matching methods can be placed into two general families: holistic and local-feature based. In typical holistic methods, a single feature vector describes the entire face and the spatial relations between face characteristics (e.g. eyes) are rigidly kept.
- Examples of such systems include PCA and Fisherfaces [2].
- local-feature based methods describe each face as a set of feature vectors (with each vector describing a small part of the face), with relaxed constraints on the spatial relations between face parts [4].
- Examples include systems based on elastic graph matching, hidden Markov models (HMMs) and Gaussian mixture models (GMMs) [4].
- HMMs hidden Markov models
- GMMs Gaussian mixture models
- Local-feature based methods have the advantage of being considerably more robust against misalignment as well as variations in illumination and pose [4, 11].
- face recognition systems using local-feature based approaches are more suitable for dealing with faces obtained in surveillance contexts.
- Post processing 16 is then performed on the results of the face matching method 12 such as referencing details of the people that were identified as candidate matches from an external database (not shown).
- the likely identity information 18 of the candidate match(es) from the set of gallery images 14 is presented to the user.
- upscaling does not introduce any new information, and can potentially introduce artifacts or noise. Also, upscaled images are blurry, which causes the extracted features to be very different than those obtained from the downscaled faces with high underlying resolution, resulting in a significant drop in recognition accuracy.
- Downscaling reduces the amount of information available, thereby reducing the performance of the face matching method.
- a method for identifying candidate matching images to a probe image comprising:
- the method advantageously dynamically selects the most appropriate matching method for a probe image.
- This allows the overall method to be tuned to work well for both high and low resolution images and new matching methods can be incorporated to enhance the accuracy of the overall method.
- the method of dynamically selecting the most appropriate matching method does not add much computational overhead meaning the overall method can remain fast and scalable.
- the probe and set of gallery images may be images representative of people's faces.
- the method of matching images may be a local-feature based or a holistic based method.
- the method may comprise determining the underlying/actual resolution of the probe image. Determining the underlying resolution of the image may comprise accessing a previously determined underlying resolution or calculating the underlying resolution as required. Determining the underlying resolution of the probe image may comprise selecting one of multiple resolution bands that the probe image belongs to, such as a high or low resolution band.
- Determining the underlying resolution of the probe image may comprise comparing the probe image to two or more sets of reference images, where for each set of reference images the underlying resolution of all the images of that set are substantially the same. Determining the underlying resolution of the probe image may be based on which set of reference images the probe image is most similar to, and determining (i.e. classifying) the underlying resolution of the probe image may be similar to the resolution of the images in that set of reference images.
- Determining the underlying resolution of the probe image may comprise determining an energy of the probe image based on a discrete cosine transformation of the probe image.
- the method may further comprise determining the underlying resolution of images in the set of gallery images.
- Selecting the method of matching images may be further based on the underlying resolution of the images in the set of gallery images.
- Selecting the method of matching images may be performed for each gallery image in the set of gallery images, and selecting the method of matching images for a gallery image is based on the underlying resolution of that gallery image.
- Performing the selected method may comprise performing the method of matching images selected for that gallery image on the probe image and that gallery image to determine whether that gallery image is a candidate matching image to the probe image.
- the method of matching images selected for a first and second gallery image may be different if the underlying resolutions of the first and second gallery images are different.
- Selecting a method of matching images may include tuning a matching method to be most suited to the underlying resolution of the probe image and/or the gallery image.
- the selected method may comprise selecting from the set of gallery images the image of that item having the optimal underlying resolution for the selected method and performing the selected method on only that image of the item of the two or more images of the item.
- the method of matching images may be a Multi Region Histogram (MRH) analysis [12].
- Determining the underlying resolution may comprise classifying the underlying resolution as one of two or more resolution bands, each band having a corresponding method of matching images, and selecting a method of matching images comprises selecting the method of matching images corresponding to the resolution band of the probe image.
- MRH Multi Region Histogram
- the determined underlying resolution may be either a high resolution or a low resolution, wherein the boundary between the high and low resolution is dependent on a predetermined underlying resolution or underlying resolution of the image in the set of gallery images that the selected method of matching images is to be performed on.
- Two intermediate formats may be used, one being higher than the other, such that if the underlying resolution of the probe image is determined as being a high resolution, the selected method comprises converting the probe image to the higher intermediate format. Alternatively, if the resolution of the probe image is determined as being a low resolution, the selected method comprises converting the probe image to the lower intermediate format.
- a computer system for identifying candidate matching images to a probe image comprising:
- the computer system of the second aspect may further comprise an underlying resolution detector to determine the underlying resolution of the probe image.
- a method for determining an underlying resolution of an image comprising:
- An image may be most similar to the set if the average distance to the images in that set is the shortest.
- the invention provides a resolution detector that is operable to perform the method of the fourth aspect, such as a resolution detector of the third aspect.
- a method for determining an underlying resolution of an image comprising:
- comparing the proportion to a predetermined threshold, the threshold being indicative of the underlying resolution of the image.
- software is provided, that when installed on a computer system causes it to perform the method of the seventh aspect.
- a resolution detector is provided that is operable to perform the method of the seventh aspect, such as a resolution detector of the third aspect.
- Fig. 1 is a flow chart of the steps performed by a typical face recognition system.
- Fig. 2 is a flow chart of the steps performed by a face recognition system of a first example
- Fig. 3 is a schematic diagram of a computer system to perform the method shown in Fig. 2;
- Figs. 4 and 5 are tables showing comparative results of an implementation of the first example.
- Fig. 6 is a flow chart of the steps performed by a face recognition system of a second example.
- Mismatched underlying resolutions between probe and gallery images can cause significant performance degradation for face recognition systems, particularly those which use high-resolution faces (e.g. mugshots or passport photos) as gallery images.
- Another source of underlying resolution mismatches is the fact that the size (in terms of pixels) of a given face image may not be a reliable indicator of the underlying optical resolution. For example, poor quality optics in low-cost cameras can act as low-pass filters. Also, poor focus and over-exposure can result in blur and loss of detail.
- typical local feature based recognition approaches pre-suppose that the original sizes of the given images are an indicator of the underlying resolutions. Situations can arise where the given probe face image has an underlying resolution larger than the resolution that can be captured in the IF image size (e.g. such as probe images obtained through a telephoto lens).
- the face recognition system of this first example is able to classify those situations in which a method using a high-to-high resolution comparison is possible (i.e. using a larger IF) and those in which a method using a low-to-high resolution face comparison (i.e. using a smaller IF size) must be used.
- the face recognition system can handle resolution mismatches for the recently proposed Multi-Region Histograms (MRH) local-feature face matching method.
- MRH Multi-Region Histograms
- a probe image 8 is received and an underlying resolution detector operates to determine 20 the resolution of this probe image 8.
- all the possible probe images 8 are the same size (64x64) but the underlying (e.g. actual) resolutions are not the same.
- This method has two sets of cohort images (reference face images). One set, S_A, has high resolution images and the second set, S_B, has low resolution images. The resolution detector measures whether the probe image Q is more similar to the low resolution cohort images or the high resolution cohort images.
- d_raw is the match distance between the probe and the individual images in a set. That match distance (also known as similarity distance) is dependent on the matching algorithm.
- MRH is an example of one such matching algorithm [12]. The smaller of the average distances d_avg(Q, S_A) and d_avg(Q, S_B) is determined. If the distance to S_B is shorter than the distance to S_A, it is determined that the probe image is low resolution; otherwise it is determined that it is a high resolution probe image.
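The cohort-based decision rule above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: the feature vectors are hypothetical, and a plain L1 distance stands in for the MRH match distance d_raw.

```python
import numpy as np

def avg_distance(probe_feat, cohort_feats):
    """Average match distance d_avg between a probe feature vector and a
    cohort set, using L1 distance as a stand-in for the MRH distance d_raw."""
    return float(np.mean([np.abs(probe_feat - c).sum() for c in cohort_feats]))

def classify_resolution(probe_feat, cohort_high, cohort_low):
    """Label the probe 'low' if its average distance to the low resolution
    cohort S_B is shorter than to the high resolution cohort S_A."""
    d_a = avg_distance(probe_feat, cohort_high)   # d_avg(Q, S_A)
    d_b = avg_distance(probe_feat, cohort_low)    # d_avg(Q, S_B)
    return "low" if d_b < d_a else "high"
```

The same rule works unchanged with any other match distance; only `avg_distance` would be swapped for the distance produced by the chosen matching algorithm.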
- This energy-based method analyses the amount of energy within a subset of frequency domain.
- a 2 Dimensional (2D) Discrete Cosine Transform (DCT) analysis on the whole probe image (i.e. holistic face) is performed.
- the 2D DCT analysis extracts a set of coefficients, or weights, of cosine functions oscillating at different frequencies.
- the absolute values of the coefficients are summed to get a total "energy" normaliser.
- the coefficients are also summed cumulatively from low frequency to high frequency and divided by the total "energy" to get the cumulative percentage of total energy up to a particular frequency.
- This cumulative percentage of total energy level is compared to a predetermined threshold. For example, summing the first 25% of the low frequency domain can give an indication of the underlying resolution of a given image.
- the image can be classified as containing low underlying resolution
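The energy-based detector above can be sketched as follows. This is an illustrative reading, not the patent's implementation: the low-frequency region (top-left 25% of the DCT matrix) and the 0.9 threshold are assumed values, and the 2D DCT is built from explicit cosine basis matrices so the sketch needs only NumPy.

```python
import numpy as np

def dct2(img):
    """Orthonormal 2D DCT-II via explicit cosine basis matrices."""
    def dct_mat(N):
        k = np.arange(N)[:, None]
        x = np.arange(N)[None, :]
        C = np.cos(np.pi * (2 * x + 1) * k / (2 * N)) * np.sqrt(2.0 / N)
        C[0] /= np.sqrt(2.0)          # DC row scaling for orthonormality
        return C
    return dct_mat(img.shape[0]) @ img @ dct_mat(img.shape[1]).T

def low_freq_energy_fraction(img, fraction=0.25):
    """Cumulative share of total spectral energy held by the lowest
    `fraction` of frequencies (top-left corner of the DCT matrix)."""
    coeffs = np.abs(dct2(img.astype(float)))
    n = max(1, int(round(img.shape[0] * fraction)))
    m = max(1, int(round(img.shape[1] * fraction)))
    return float(coeffs[:n, :m].sum() / coeffs.sum())

def is_low_resolution(img, threshold=0.9, fraction=0.25):
    """A sharp image spreads energy into high frequencies, so a large
    low-frequency share suggests low underlying resolution.
    The 0.9 threshold is illustrative, not taken from the patent."""
    return low_freq_energy_fraction(img, fraction) > threshold
```

A blurred or featureless image concentrates nearly all of its energy in the low-frequency corner, while a sharply textured image does not, which is exactly the cue the detector exploits.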
- the method used to determine the underlying resolution of the probe image is the cohort-based method (method 1 listed above), with the value of d_raw being obtained through the MRH face matching method.
- the MRH-based face matching method is now briefly described.
- the MRH local-feature face matching method can be thought of as a hybrid between the HMM and GMM based systems [12].
- the MRH approach is motivated by the 'visual words' technique originally used in image categorisation [10].
- Each face is divided into several fixed and adjacent regions, with each region comprising a relatively large part of the face.
- Each block has a size of 8x8 pixels, which is the typical size used for DCT analysis.
- each block is normalised to have zero mean and unit variance.
- coefficients from the top-left 4x4 sub-matrix of the 8x8 DCT coefficient matrix are used, excluding the 0-th coefficient (which has no information due to the normalisation).
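The per-block feature extraction described above can be sketched as follows. This is a simplified illustration: blocks are taken as non-overlapping for brevity (MRH-style systems may use overlapping blocks), and the 2D DCT is implemented directly so the sketch is self-contained.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2D DCT-II built from explicit cosine basis matrices."""
    def dct_mat(N):
        k = np.arange(N)[:, None]
        x = np.arange(N)[None, :]
        C = np.cos(np.pi * (2 * x + 1) * k / (2 * N)) * np.sqrt(2.0 / N)
        C[0] /= np.sqrt(2.0)
        return C
    return dct_mat(block.shape[0]) @ block @ dct_mat(block.shape[1]).T

def block_features(face, block=8, keep=4):
    """Normalise each 8x8 block to zero mean and unit variance, then keep
    the top-left 4x4 DCT coefficients minus the 0-th (DC) term, which
    carries no information after normalisation: 15 values per block."""
    feats = []
    h, w = face.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            b = face[r:r + block, c:c + block].astype(float)
            std = b.std()
            b = (b - b.mean()) / (std if std > 0 else 1.0)
            coef = dct2(b)[:keep, :keep].flatten()
            feats.append(coef[1:])   # drop the DC coefficient
    return np.array(feats)
```

For a 64x64 IF image this yields 64 blocks of 15-dimensional descriptors; for a 32x32 IF image, 16 blocks.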
- a probabilistic histogram is computed, where the g-th element of the histogram for a region is the posterior probability of that region's feature vector according to the g-th component of a visual dictionary model.
- the mean of each Gaussian can be thought of as a particular 'visual word'.
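The posterior histogram over 'visual words' can be sketched as follows. This is an illustrative stand-in, not the patent's trained dictionary: the Gaussian components use hypothetical diagonal covariances, and the dictionary parameters would in practice be learned from training data.

```python
import numpy as np

def gaussian_pdf(x, mean, var):
    """Density of a diagonal-covariance Gaussian at descriptor x."""
    d = x - mean
    return np.exp(-0.5 * np.sum(d * d / var)) / np.sqrt(np.prod(2 * np.pi * var))

def posterior_histogram(x, means, variances, weights):
    """Probabilistic histogram: the g-th entry is the posterior probability
    of descriptor x under the g-th Gaussian 'visual word' of the dictionary."""
    lik = np.array([w * gaussian_pdf(x, m, v)
                    for w, m, v in zip(weights, means, variances)])
    return lik / lik.sum()
```

Because the entries are posteriors, each histogram sums to one, and histograms from different regions can be averaged or compared directly.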
- the DCT decomposition acts like a low-pass filter, with the information retained from each block being robust to small alterations (e.g. due to minor in-plane rotations).
- the best matching method is performed on the probe image 8. If the resolution of the probe image is classified as high 22 then the method of matching images 24 that has superior performance on such high resolution probe images is selected.
- the method 24 is MRH tuned for high resolution images, that is, it is trained on a set of high resolution images with a similarly high IF (i.e. an IF size that is sufficiently large to capture the detail of probe images classified as high resolution) to learn a model.
- the probe image 8 is first converted to the size of a high IF 24(a), being 64x64 in this example, and then MRH tuned to high resolution images is performed 24(b).
- the method 26 is MRH tuned for low resolution images, that is, it is trained on a set of low resolution images with a lower IF to learn a model.
- the probe image 8 is first converted to a low IF (i.e. an IF size that is sufficiently large to capture the detail of probe images classified as low resolution but is smaller than the large IF size) 28(a), being 32x32 in this example, and then MRH tuned to low resolution images is performed 28(b).
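The band-to-IF dispatch described above can be sketched as follows. This is a minimal illustration: the matcher objects are hypothetical stand-ins for models tuned to each IF size, and a nearest-neighbour subsampling stands in for proper image rescaling.

```python
import numpy as np

# IF sizes for the two resolution bands, as in the example (64x64 and 32x32).
IF_SIZES = {"high": 64, "low": 32}

def resize(img, size):
    """Nearest-neighbour resize to size x size (stand-in for real scaling)."""
    rows = np.arange(size) * img.shape[0] // size
    cols = np.arange(size) * img.shape[1] // size
    return img[np.ix_(rows, cols)]

def match(probe, gallery, band, matchers):
    """Convert both images to the IF of the probe's resolution band, then
    apply the matcher assumed to be tuned (trained) for that band."""
    size = IF_SIZES[band]
    return matchers[band](resize(probe, size), resize(gallery, size))
```

In use, `matchers["high"]` and `matchers["low"]` would be the MRH models trained at the high and low IF respectively; the detector's band decision selects which one runs.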
- the boundary that defines a high and low resolution is predetermined and remains the same for all probe images 8 that are assessed.
- the boundary between the high and low images may be dependent on the resolution of all the images in the gallery or may be adjusted based on the gallery image that the selected method will be performed on next.
- the selected method may also comprise selecting from the multiple faces of the same person the image of that person having the best resolution for the comparison. For example, it may select the image of the person in the gallery that has a resolution most similar to the probe resolution. After which the face recognition method most suitable (i.e. using the smallest IF size that is able to capture this resolution) will be applied for the comparison.
- MRH-based recognition tuned for high resolution (where all given images are resized to a high IF of 64x64) is able to handle images which have a high underlying resolution of 32x32 or higher, while MRH-based recognition tuned for low underlying resolution (where all images are resized to a low IF of 32x32) is more suited to lower resolutions. The sensitivity of local DCT features to resolution mismatches is thereby exploited.
- Post-processing 16 and identity steps 18 are then performed. Additional pre-processing steps (not shown) may be performed before or after the resolution is detected as appropriate. For example, cropping the probe image 8.
- Fig. 3 shows a computer face recognition system 30 that is able to perform the method of Fig. 2.
- the computer system 30 comprises an input port 32, an output port 34, internal memory 36 and a processor 38.
- the internal memory stores the gallery of images 14 and the associated resolution and identity information of the person represented in each image.
- the processor 38 is comprised of a resolution detector 42, a matching method selector 44 and matching module 46.
- the probe image 8 is received at the input port 32 and the processor 38 operates according to software installed on the computer 30 to cause the resolution detector 42 to determine the resolution of the probe image (and in example two below the resolution of each gallery image).
- the method selector 44 uses the determined resolution to select the most appropriate method of matching images 24 or 28.
- the processor 38 then provides the result of the matching method to the output port 34.
- the output port may be connected to a monitor (not shown) and the processor 38 may also drive a user interface to display the candidate matches from the set of gallery images to the user.
- the set of gallery images is the Labeled Faces in the Wild (LFW) dataset which contains 13,233 face images (from 5749 unique persons) collected from the Internet [8]. The faces exhibit several compound problems such as misalignment and variations in pose, expression and illumination. Initially a pre-processing step is performed where closely cropped faces (to exclude the background) were extracted from each image using a fixed bounding box placed in the same location in each LFW image.
- the first image in each pair was rescaled to 64x64 while the second image was first rescaled to a size equal to or smaller than 64x64, followed by up-scaling to the same size as the first image (i.e. deliberate loss of information, causing the image size to be uninformative as to the underlying resolution).
- the underlying resolution of the second image varied from 8x8 to 64x64.
- In implementation 2, we evaluated the performance of three MRH-based systems for classifying LFW image pairs subject to resolution mismatches. Matching methods A and B were tuned for size A and B, respectively, while the dynamic system 44 applies the proposed compensation framework to switch between methods A and B according to the classification result of the resolution detector 42.
- the proposed dynamic system is able to retain the best aspect of system A (i.e. good accuracy at the highest resolution) with performance similar to system B at lower resolutions. Consequently, the dynamic system of the example obtains the best overall performance.
- the two systems were tuned to different underlying resolutions.
- System A tuned for underlying resolutions of 32x32 and higher sizes, was shown to outperform System B when being compared to images of similar underlying resolution, while underperforming when comparing images of very different underlying resolution (16x16 and 8x8).
- the reverse was true for System B, tuned for lower resolutions.
- the dynamic face recognition system of this example is able to maximise performance by applying the face matching method best tuned for any given pair of images based on their underlying resolutions. This example shows higher overall face discrimination accuracy (across several resolutions) compared to the individual baseline face recognition systems. It is an advantage of this example that the face recognition system can handle both high-to-high and low-to-high resolution comparisons.
- the face recognition system of this example is able to retain the best aspect of system A (i.e. good accuracy at the highest resolution) with performance similar to system B at lower resolutions. Consequently, the dynamic system obtains the best overall performance.
- the underlying resolution of the images in the set of gallery images is not yet known.
- the gallery includes images having different resolutions, such as mug shots of high resolution as well as low resolution CCTV images.
- the underlying resolution of each of the gallery images is determined and is stored in memory.
- the underlying resolution of the probe image 8 is determined 20.
- the resolution of the current gallery image is determined 80. Initially, this will be the first image in the gallery. In one example, the resolution of the current image in the gallery is obtained from memory. Alternatively, the resolution of the first gallery image could be determined by analysing 20 the current gallery image in the same way as the resolution of the probe image 8 was determined.
- the resolution of that gallery image and the probe image is assessed to select 82 the optimal face matching method to be used to compare the current gallery image and the probe image to determine whether the gallery image is a candidate match.
- the matching method of Fisherfaces (LDA), Eigenfaces (PCA), MRH with IF tuned to that resolution (i.e. an image size that can capture that resolution), or a number of other methods can be selected.
- LDA Fisherfaces
- PCA Eigenfaces
- MRH with IF tuned to that resolution i.e. an image size that can capture that resolution
- if the resolution of the probe image is not similar to the resolution of the current gallery image, the method of MRH with downscaling and IF tuned to the lower resolution image, or simultaneous super-resolution image reconstruction and recognition, is selected.
- the number of different methods of identifying candidate matching images may be more than two and may be specific to the particular combination of probe and gallery image resolutions that are to be compared. The aim is that the method for any combination of resolutions will be optimal for that combination.
- the probe image and current gallery image are compared 84 using the selected matching method to determine whether they are a candidate match.
- Steps 80, 82 and 84 are repeated for each image of the gallery; on each repeat the next gallery image is used, until there are no more images in the gallery. That is, for each repeat the current image becomes the next image in the gallery that has not yet been analysed.
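The per-gallery-image loop of steps 80, 82 and 84 can be sketched as follows. This is a structural illustration only: the resolution detector, method selector and matchers are injected as hypothetical callables, and the "images" in the test are simple scalars standing in for real image data.

```python
def find_candidates(probe, gallery, detect_resolution, select_method, is_match):
    """For each gallery image: determine its resolution (step 80), select
    the matching method suited to the (probe, gallery) resolution pair
    (step 82), and test for a candidate match (step 84)."""
    probe_band = detect_resolution(probe)
    candidates = []
    for img in gallery:
        method = select_method(probe_band, detect_resolution(img))
        if is_match(method, probe, img):
            candidates.append(img)
    return candidates
```

Because the method is re-selected inside the loop, two gallery images with different underlying resolutions can be compared to the same probe using different matching methods, as the description requires.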
- the resolution of the probe image was classified as either high or low.
- the resolution of the probe image can be classified into one of three or more resolution bands, with each resolution band having an associated matching method that can be optimally deployed for that resolution.
- three or more IFs may be used by the face recognition system.
- the examples described here relate to face recognition; however, the method may be applied to different types of images where candidate matches between a probe image and a set of gallery images are required to be identified, such as images representing materials or animals.
- Suitable computer readable media may include volatile (e.g. RAM) and/or non-volatile (e.g. ROM, disk) memory, carrier waves and transmission media (e.g. copper wire, coaxial cable, fibre optic media).
- exemplary carrier waves may take the form of electrical, electromagnetic or optical signals conveying digital data streams along a local network or a publicly accessible network such as the internet.
Abstract
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2011207120A AU2011207120B8 (en) | 2010-01-25 | 2011-01-24 | Identifying matching images |
US13/574,555 US9165184B2 (en) | 2010-01-25 | 2011-01-24 | Identifying matching images |
AU2017201281A AU2017201281B2 (en) | 2010-01-25 | 2017-02-24 | Identifying matching images |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2010900281 | 2010-01-25 | ||
AU2010900281A AU2010900281A0 (en) | 2010-01-25 | Identifying matching images |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2011088520A1 true WO2011088520A1 (en) | 2011-07-28 |
WO2011088520A8 WO2011088520A8 (en) | 2011-10-06 |
Family
ID=44306311
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/AU2011/000071 WO2011088520A1 (en) | 2010-01-25 | 2011-01-24 | Identifying matching images |
Country Status (3)
Country | Link |
---|---|
US (1) | US9165184B2 (en) |
AU (2) | AU2011207120B8 (en) |
WO (1) | WO2011088520A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2514132A (en) | 2013-05-14 | 2014-11-19 | Ibm | Two-directional biometric matching |
US9864758B2 (en) | 2013-12-12 | 2018-01-09 | Nant Holdings Ip, Llc | Image recognition verification |
US10318576B2 (en) | 2013-12-12 | 2019-06-11 | Nant Holdings Ip, Llc | Image recognition verification |
JP6438209B2 (en) * | 2014-04-07 | 2018-12-12 | 株式会社荏原製作所 | Control device for generating timing signal for imaging device in inspection device, and method for transmitting timing signal to imaging device |
US9384386B2 (en) | 2014-08-29 | 2016-07-05 | Motorola Solutions, Inc. | Methods and systems for increasing facial recognition working rang through adaptive super-resolution |
US11538257B2 (en) * | 2017-12-08 | 2022-12-27 | Gatekeeper Inc. | Detection, counting and identification of occupants in vehicles |
US11068741B2 (en) | 2017-12-28 | 2021-07-20 | Qualcomm Incorporated | Multi-resolution feature description for object recognition |
US10867193B1 (en) | 2019-07-10 | 2020-12-15 | Gatekeeper Security, Inc. | Imaging systems for facial detection, license plate reading, vehicle overview and vehicle make, model, and color detection |
US11196965B2 (en) | 2019-10-25 | 2021-12-07 | Gatekeeper Security, Inc. | Image artifact mitigation in scanners for entry control systems |
WO2021115483A1 (en) * | 2019-12-13 | 2021-06-17 | 华为技术有限公司 | Image processing method and related apparatus |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5842194A (en) | 1995-07-28 | 1998-11-24 | Mitsubishi Denki Kabushiki Kaisha | Method of recognizing images of faces or general images using fuzzy combination of multiple resolutions |
US6393137B1 (en) | 1999-06-17 | 2002-05-21 | Raytheon Company | Multi-resolution object classification method employing kinematic features and system therefor |
US7233699B2 (en) * | 2002-03-18 | 2007-06-19 | National Instruments Corporation | Pattern matching using multiple techniques |
US20040228504A1 (en) * | 2003-05-13 | 2004-11-18 | Viswis, Inc. | Method and apparatus for processing image |
US7929774B2 (en) | 2006-06-28 | 2011-04-19 | Intel Corporation | Method of inferential analysis of low resolution images |
JP5012092B2 (en) * | 2007-03-02 | 2012-08-29 | 富士通株式会社 | Biometric authentication device, biometric authentication program, and combined biometric authentication method |
US8351688B2 (en) * | 2009-12-17 | 2013-01-08 | Xerox Corp | Categorization quality through the combination of multiple categorizers |
JP5668587B2 (en) * | 2011-04-19 | 2015-02-12 | ソニー株式会社 | Image processing apparatus, image processing method, and program |
US8441548B1 (en) * | 2012-06-15 | 2013-05-14 | Google Inc. | Facial image quality assessment |
-
2011
- 2011-01-24 WO PCT/AU2011/000071 patent/WO2011088520A1/en active Application Filing
- 2011-01-24 US US13/574,555 patent/US9165184B2/en active Active
- 2011-01-24 AU AU2011207120A patent/AU2011207120B8/en active Active
-
2017
- 2017-02-24 AU AU2017201281A patent/AU2017201281B2/en active Active
Non-Patent Citations (3)
Title |
---|
AHMED, N., DISCRETE COSINE TRANSFORM, COMPUTERS IEEE TRANSACTIONS ON, January 1974 (1974-01-01), pages 90 - 93 * |
LU, X., IMAGE ANALYSIS FOR FACE RECOGNITION, May 2003 (2003-05-01), Retrieved from the Internet <URL:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.101.4026&rep=rep1&type=p> [retrieved on 20100830] * |
PARK, U., FACE RECOGNITION: FACES IN VIDEO, AGE INVARIANCE, AND FACIAL MARKS, 2009, Retrieved from the Internet <URL:http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.153.7777&rep=rep1&type=p> [retrieved on 20100830] * |
Also Published As
Publication number | Publication date |
---|---|
AU2011207120A1 (en) | 2012-08-09 |
AU2017201281B2 (en) | 2019-03-07 |
US9165184B2 (en) | 2015-10-20 |
US20120328197A1 (en) | 2012-12-27 |
AU2011207120B8 (en) | 2016-12-15 |
AU2017201281A1 (en) | 2017-03-16 |
WO2011088520A8 (en) | 2011-10-06 |
AU2011207120B2 (en) | 2016-11-24 |
AU2011207120A8 (en) | 2016-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
AU2017201281B2 (en) | Identifying matching images | |
Qureshi et al. | A bibliography of pixel-based blind image forgery detection techniques | |
Nishiyama et al. | Facial deblur inference using subspace analysis for recognition of blurred faces | |
JP6204199B2 (en) | Image quality assessment | |
Kandaswamy et al. | Efficient texture analysis of SAR imagery | |
Benzaoui et al. | Ear biometric recognition using local texture descriptors | |
CN109902618A (en) | A kind of sea ship recognition methods and device | |
KR20080021181A (en) | Video data processing method and system thereof | |
CN110309810B (en) | Pedestrian re-identification method based on batch center similarity | |
Bianco et al. | Robust smile detection using convolutional neural networks | |
Aneesh et al. | Optimal feature selection based on image pre-processing using accelerated binary particle swarm optimization for enhanced face recognition | |
Wong et al. | Dynamic amelioration of resolution mismatches for local feature based identity inference | |
Ruchay et al. | Removal of impulse noise clusters from color images with local order statistics | |
KR20090065099A (en) | System for managing digital image features and its method | |
Raghavendra et al. | A novel image fusion scheme for robust multiple face recognition with light-field camera | |
Groeneweg et al. | A fast offline building recognition application on a mobile telephone | |
EP3137895A1 (en) | Method and apparatus for processing block to be processed of urine sediment image | |
Ajitha et al. | Face recognition system using Combined Gabor Wavelet and DCT approach | |
CN114445916A (en) | Living body detection method, terminal device and storage medium | |
Jyothy et al. | Texture-based multiresolution steganalytic features for spatial image steganography | |
Dawood et al. | Combining the contrast information with LPQ for texture classification | |
Shri et al. | Video Analysis for Crowd and Traffic Management | |
Jayanthi et al. | Efficient fuzzy color and texture feature extraction technique for content based image retrieval system | |
Huang et al. | Blind super-resolution image reconstruction based on novel blur type identification | |
Bellman et al. | A classification approach to finding buildings in large scale aerial photographs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11734262 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011207120 Country of ref document: AU |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13574555 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2011207120 Country of ref document: AU Date of ref document: 20110124 Kind code of ref document: A |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11734262 Country of ref document: EP Kind code of ref document: A1 |