US20070140551A1 - Banknote validation - Google Patents

Banknote validation

Info

Publication number
US20070140551A1
US20070140551A1 (application US11/366,147; also published as US 2007/0140551 A1)
Authority
US
United States
Prior art keywords
images
classifier
banknote
information
training set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/366,147
Inventor
Chao He
Gary Ross
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NCR Voyix Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed. “Global patent litigation dataset” by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License. https://patents.darts-ip.com/?family=37529297&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=US20070140551(A1)
Application filed by Individual filed Critical Individual
Priority to US11/366,147 priority Critical patent/US20070140551A1/en
Assigned to NCR CORPORATION reassignment NCR CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HE, CHAO, ROSS, GARY
Priority to BRPI0619845-7A priority patent/BRPI0619845A2/en
Priority to PCT/GB2006/003565 priority patent/WO2007068867A1/en
Priority to EP06779545A priority patent/EP1964073A1/en
Priority to JP2008545069A priority patent/JP5219211B2/en
Priority to CN2006800473583A priority patent/CN101331526B/en
Priority to CN2006800472788A priority patent/CN101366060B/en
Priority to BRPI0620625-5A priority patent/BRPI0620625A2/en
Priority to PCT/GB2006/004663 priority patent/WO2007068923A1/en
Priority to EP06831386A priority patent/EP1964076A1/en
Priority to JP2008545086A priority patent/JP5177817B2/en
Priority to JP2008545088A priority patent/JP5175210B2/en
Priority to CN2006800473687A priority patent/CN101366061B/en
Priority to PCT/GB2006/004676 priority patent/WO2007068930A1/en
Priority to BRPI0619926-7A priority patent/BRPI0619926A2/en
Priority to CN2006800475165A priority patent/CN101331527B/en
Priority to JP2008545085A priority patent/JP5044567B2/en
Priority to EP06820517A priority patent/EP1964075A1/en
Priority to BRPI0620308-6A priority patent/BRPI0620308A2/en
Priority to PCT/GB2006/004670 priority patent/WO2007068928A1/en
Priority to EP06820512A priority patent/EP1964074A1/en
Priority to US11/639,593 priority patent/US20070154078A1/en
Priority to US11/639,597 priority patent/US20070154079A1/en
Priority to US11/639,576 priority patent/US8086017B2/en
Publication of US20070140551A1 publication Critical patent/US20070140551A1/en
Assigned to JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT reassignment JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT SECURITY AGREEMENT Assignors: NCR CORPORATION, NCR INTERNATIONAL, INC.
Assigned to NCR VOYIX CORPORATION reassignment NCR VOYIX CORPORATION RELEASE OF PATENT SECURITY INTEREST Assignors: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT

Classifications

    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07D HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D7/00 Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D7/20 Testing patterns thereon
    • G07D7/202 Testing patterns thereon using pattern matching
    • G07D7/206 Matching template patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition

Abstract

A method of creating a classifier for banknote validation is described. Information from all of a set of training images from genuine banknotes is used to form a segmentation template which is then used to segment each of the training set images. Features are extracted from the segments and used to form a classifier which is preferably a one-class statistical classifier. Classifiers can be quickly and simply formed for different currencies and denominations in this way and without the need for examples of counterfeit banknotes. A banknote validator using such a classifier is described as well as a method of validating a banknote using such a classifier. In a preferred embodiment the banknote validator is incorporated in a self-service apparatus such as an automated teller machine.

Description

  • This application is a continuation-in-part application of co-pending application Ser. No. 11/305,537 filed Dec. 16, 2005.
  • TECHNICAL FIELD
  • The present invention relates to a method and apparatus for banknote validation.
  • BACKGROUND
  • There is a growing need for automatic verification and validation of banknotes of different currencies and denominations in a simple, reliable, and cost effective manner. This is required, for example, in self-service apparatus which receives banknotes, such as self-service kiosks, ticket vending machines, automated teller machines arranged to take deposits, self-service currency exchange machines and the like.
  • Previously, manual methods of currency validation have involved image examination, transmission effects such as watermarks and thread registration marks, feel and even smell of banknotes. Other known methods have relied on semi-overt features requiring semi-manual interrogation, for example using magnetic means, ultraviolet sensors, fluorescence, infrared detectors, capacitance, metal strips, image patterns and similar. However, by their very nature these methods are manual or semi-manual and are not suitable for many applications where manual intervention is unavailable for long periods of time, for example in self-service apparatus.
  • There are significant problems to be overcome in order to create an automatic currency validator. For example, many different types of currency exist with different security features and even substrate types. Within those currencies, different denominations also commonly exist with different levels of security features. There is therefore a need to provide a generic method of easily and simply performing currency validation for those different currencies and denominations.
  • Put simply, the task of a currency validator is to determine whether a given banknote is genuine or counterfeit. Previous automatic validation methods typically require a relatively large number of examples of counterfeit banknotes to be known in order to train the classifier. In addition, those previous classifiers are trained to detect known counterfeits only. This is problematic because often little or no information is available about possible counterfeits. For example, this is particularly problematic for newly introduced denominations or newly introduced currency.
  • In an earlier paper entitled, “Employing optimized combinations of one-class classifiers for automated currency validation”, published in Pattern Recognition 37, (2004) pages 1085-1096, by Chao He, Mark Girolami and Gary Ross (two of whom are inventors of the present application) an automated currency validation method is described (Pat. No. EP1484719, US2004247169). This involves segmenting an image of a whole banknote into regions using a grid structure. Individual “one-class” classifiers are built for each region and a small subset of the region specific classifiers are combined to provide an overall decision. (The term, “one-class” is explained in more detail below.) The segmentation and combination of region specific classifiers to achieve good performance is achieved by employing a genetic algorithm. This method requires a small number of counterfeit samples at the genetic algorithm stage and as such is not suitable when counterfeit data is unavailable.
  • There is also a need to perform automatic currency validation in a computationally inexpensive manner which can be performed in real time.
  • The invention seeks to provide an improved method and apparatus for banknote validation which overcomes or at least mitigates one or more of the problems mentioned above.
  • SUMMARY
  • A method of creating a classifier for banknote validation is described. Information from all of a set of training images from genuine banknotes is used to form a segmentation template which is then used to segment each of the training set images. Features are extracted from the segments and used to form a classifier which is preferably a one-class statistical classifier. Classifiers can be quickly and simply formed for different currencies and denominations in this way and without the need for examples of counterfeit banknotes. A banknote validator using such a classifier is described as well as a method of validating a banknote using such a classifier. In a preferred embodiment the banknote validator is incorporated in a self-service apparatus such as an automated teller machine.
  • We describe a method of creating a classifier for banknote validation. The method comprising the steps of:
      • accessing a training set of banknote images;
      • creating a segmentation template using the training set images;
      • segmenting each of the training set images using the segmentation template;
      • extracting one or more features from each segment in each of the training set images; and
      • forming the classifier using the feature information;
        wherein the segmentation template is created on the basis of information from all images in the training set.
  • By creating the segmentation template on the basis of information from all images in the training set we have found improved performance in banknote validation. In contrast, previous methods have used rigid grid structures for segmentation which do not require information from all the training set images to perform segmentation.
  • For example, the information from all images in the training set comprises morphological information. This can be pattern, color, texture and the like in the training set. We have found empirically that use of this type of information leads to improved banknote validation performance.
  • In an example, the information from all images in the training set comprises information about a pixel at the same location in each of the training set images. This can comprise pixel intensity profiles as explained in more detail below.
  • Preferably the segmentation template is created by using a clustering algorithm to cluster pixel locations in an image plane on the basis of the information from all the images in the training set. Any suitable clustering algorithm can be used as known in the art.
  • In a preferred embodiment the classifier is a one-class classifier. This is advantageous because by using a one-class classifier and the method of forming the segmentation template described above, we can remove the need for examples of counterfeit banknotes in the training set. Thus, preferably the training set images are of genuine banknotes only.
  • Preferably, the classifier is a statistical one-class classifier. These are typically less computationally intensive and perform better than neural network based approaches.
  • Preferably the step of forming the classifier comprises estimating a distribution of a statistic relating to banknotes in a target class, said target class comprising genuine currency.
  • In a particularly preferred embodiment the training set images are selected from any of reflection images, transmission images, visible information, non-visible information and other images such as magnetic, thermal and x-ray images.
  • It is also possible to use a feature selection algorithm to select one or more types of feature to use in the step of extracting features.
  • In addition the classifier can be formed on the basis of specified information about a particular denomination and currency of banknotes. For example, information about particularly data rich regions in terms of color or other information, spatial frequency or shapes in a given currency and denomination.
  • The invention also encompasses an apparatus for creating a banknote classifier comprising:
      • an input arranged to access a training set of banknote images;
      • a processor arranged to create a segmentation template using the training set images;
      • a segmentor arranged to segment each of the training set images using the segmentation template;
      • a feature extractor arranged to extract one or more features from each segment in each of the training set images; and
      • classification forming means arranged to form the classifier using the feature information;
        wherein the processor is arranged to create the segmentation template on the basis of information from all images in the training set.
  • The invention also encompasses a banknote validator comprising:
      • an input arranged to receive at least one image of a banknote to be validated;
      • a segmentation template;
      • a processor arranged to segment the image of the banknote using the segmentation template;
      • a feature extractor arranged to extract one or more features from each segment of the banknote image;
      • a classifier arranged to classify the banknote as being either valid or not on the basis of the extracted features;
        wherein the segmentation template has been formed on the basis of information about each of a set of training images of banknotes.
  • In one example the banknote validator further comprises a plurality of classifiers and a combiner arranged to combine the results of each of the classifiers.
  • The invention also encompasses a method of validating a banknote comprising:
      • accessing at least one image of a banknote to be validated;
      • accessing a segmentation template;
      • segmenting the image of the banknote using the segmentation template;
      • extracting features from each segment of the banknote image;
      • classifying the banknote as being either valid or not on the basis of the extracted features using a classifier;
        wherein the segmentation template has been formed on the basis of information about each of a set of training images of banknotes.
  • The invention also encompasses a computer program comprising computer program code means adapted to perform all the steps of any of the methods described above when said program is run on a computer.
  • The computer program can be embodied on a computer readable medium.
  • The invention also encompasses a self-service apparatus comprising:
      • a means for accepting banknotes,
      • imaging means for obtaining digital images of the banknotes; and
      • a banknote validator as described above.
  • The method may be performed by software in machine readable form on a storage medium. The method steps may be carried out in any suitable order and/or in parallel as is apparent to the skilled person in the art.
  • This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions, (and therefore the software essentially defines the functions of the register, and can therefore be termed a register, even before it is combined with its standard hardware). For similar reasons, it is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
  • The preferred features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the invention.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
  • FIG. 1 is a flow diagram of a method of creating a classifier for banknote validation;
  • FIG. 2 is a schematic diagram of an apparatus for creating a classifier for banknote validation;
  • FIG. 3 is a schematic diagram of a banknote validator;
  • FIG. 4 is a flow diagram of a method of validating a banknote;
  • FIG. 5 is a schematic diagram of a self-service apparatus with a banknote validator.
  • DETAILED DESCRIPTION
  • Embodiments of the present invention are described below by way of example only. These examples represent the best ways of putting the invention into practice that are currently known to the Applicant although they are not the only ways in which this could be achieved.
  • The term “one-class classifier” is used to refer to a classifier that is formed or built using information about examples only from a single class but which is used to allocate newly presented examples either to that single class or not. This differs from a conventional binary classifier which is created using information about examples from two classes and which is used to allocate new examples to one or other of those two classes. A one-class classifier can be thought of as defining a boundary around a known class such that examples falling outside that boundary are deemed not to belong to the known class.
  • FIG. 1 is a high level flow diagram of a method of creating a classifier for banknote validation.
  • First we obtain a training set of images of genuine banknotes (see box 10 of FIG. 1). These are images of the same type taken of banknotes of the same currency and denomination. The type of image relates to how the images are obtained, and this may be in any manner known in the art. For example, reflection images, transmission images, images on any of a red, blue or green channel, thermal images, infrared images, ultraviolet images, x-ray images or other image types. The images in the training set are in registration and are the same size. Pre-processing can be carried out to align the images and scale them to size if necessary, as known in the art.
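  • As a concrete illustration of this pre-processing step (not part of the patent text), the following minimal sketch loads a training set as equal-sized grayscale arrays; the function name load_training_set and the use of the Pillow library are our own assumptions, and registration of the notes is assumed to have been done already.

```python
import numpy as np
from PIL import Image

def load_training_set(paths, size=(512, 256)):
    """Load banknote images as equal-sized grayscale arrays.

    Registration (alignment) of the notes is assumed to have been done
    elsewhere; only grayscale conversion and scaling to a common size
    are shown here.
    """
    images = []
    for path in paths:
        img = Image.open(path).convert("L").resize(size)  # grayscale, common size
        images.append(np.asarray(img, dtype=float))
    return images
```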
  • We next create a segmentation template using information from the training set images (see box 12 of FIG. 1). The segmentation template comprises information about how to divide an image into a plurality of segments. The segments may be non-continuous, that is, a given segment can comprise more than one patch in different regions of the image. Preferably, but not essentially, the segmentation template also comprises a specified number of segments to be used.
  • Using the segmentation template we segment each of the images in the training set (see box 14 of FIG. 1). We then extract one or more features from each segment in each of the training set images (see box 16 of FIG. 1). By the term “feature” we mean any statistic or other characteristic of a segment. For example, the mean pixel intensity, median pixel intensity, mode of the pixel intensities, texture, histogram, Fourier transform descriptors, wavelet transform descriptors and/or any other statistics in a segment.
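  • As an illustration of this feature-extraction step, the sketch below (our own, not part of the patent; the function name extract_segment_features is hypothetical) computes a few of the segment statistics mentioned above, assuming the segmentation template is stored as an integer label map of the same size as the images.

```python
import numpy as np

def extract_segment_features(image, template, n_segments, n_bins=16):
    """Build a feature vector from per-segment statistics.

    image    : 2-D array of pixel intensities (registered, scaled)
    template : 2-D integer array of the same shape; template[r, c] is the
               segment index (0 .. n_segments-1) of that pixel location
    """
    features = []
    for s in range(n_segments):
        pixels = image[template == s]            # all pixels assigned to segment s
        hist, _ = np.histogram(pixels, bins=n_bins, range=(0, 255), density=True)
        features.extend([pixels.mean(), np.median(pixels), pixels.std()])
        features.extend(hist)                    # coarse intensity histogram
    return np.asarray(features)
```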
  • A classifier is then formed using the feature information (see box 18 of FIG. 1). Any suitable type of classifier can be used as known in the art. In a particularly preferred embodiment of the invention the classifier is a one-class classifier and no information about counterfeit banknotes is needed. However, it is also possible to use a binary classifier or other type of classifier of any suitable type as known in the art.
  • The method in FIG. 1 enables a classifier for validation of banknotes of a particular currency and denomination to be formed simply, quickly and effectively. To create classifiers for other currencies or denominations the method is repeated with appropriate training set images.
  • Previously, (as mentioned in the background section) we used a segmentation technique that involved using a grid structure over the image plane and a genetic algorithm method to form the segmentation template. This necessitated using some information about counterfeit notes.
  • The present invention uses a different method of forming the segmentation template which removes the need for using a genetic algorithm or equivalent method to search for a good segmentation template within a large number of possible segmentation templates. This reduces computational cost and improves performance. In addition the need for information about counterfeit banknotes is removed.
  • We believe that generally it is difficult in the counterfeiting process to provide a uniform quality of imitation across the whole note, and therefore certain regions of a note are more difficult to copy successfully than others. We therefore recognized that rather than using a rigidly uniform grid segmentation we could improve banknote validation by using a more sophisticated segmentation. Empirical testing that we carried out indicated that this is indeed the case. Segmentation based on morphological characteristics such as pattern, color and texture led to better performance in detecting counterfeits. However, traditional image segmentation methods, such as using edge detectors, were difficult to use when applied to each image in the training set. This is because varying results are obtained for each training set member and it is difficult to align corresponding features in different training set images. In order to avoid this problem of aligning segments we used, in one preferred embodiment, a so-called “spatio-temporal image decomposition”.
  • Details about the method of forming the segmentation template are now given. At a high level this method can be thought of as specifying how to divide the image plane into a plurality of segments, each comprising a plurality of specified pixels. The segments can be non-continuous as mentioned above. In the present invention, this specification is made on the basis of information from all images in the training set. In contrast, segmentation using a rigid grid structure does not require information from images in the training set.
  • Consider the images in the training set as being stacked and in registration with one another in the same orientation. Taking a given pixel position in the note image plane, this pixel is thought of as having a “pixel intensity profile” comprising information about the pixel intensity at that particular pixel position in each of the training set images. Using any suitable clustering algorithm, pixel positions in the image plane are clustered into segments, where pixel positions in those segments have similar or correlated pixel intensity profiles.
  • In a preferred example we use these pixel intensity profiles. However, it is not essential to use pixel intensity profiles. It is also possible to use other information from all images in the training set. For example, intensity profiles for blocks of 4 neighboring pixels or mean values of pixel intensities for pixels at the same location in each of the training set images.
  • A particularly preferred embodiment of our method of forming the segmentation template is now described in detail. This is based on the method taught in the following publication “EigenSegments: A spatio-temporal decomposition of an ensemble of images” by Avidan, S. Lecture Notes in Computer Science, 2352: 747-758, 2002.
  • Given an ensemble of images {I_i}, i = 1, 2, . . . , N, which have been registered and scaled to the same size r×c, each image I_i can be represented by its pixels as [a_{1i}, a_{2i}, . . . , a_{Mi}]^T in vector form, where a_{ji} (j = 1, 2, . . . , M) is the intensity of the jth pixel in the ith image and M = r·c is the total number of pixels in the image. A design matrix A ∈ ℝ^{M×N} can then be generated by stacking the vectors I_i (zeroed using the mean value) of all images in the ensemble, thus A = [I_1, I_2, . . . , I_N]. A row vector [a_{j1}, a_{j2}, . . . , a_{jN}] in A can be seen as an intensity profile for a particular pixel (the jth) across the N images. If two pixels come from the same pattern region of the image they are likely to have similar intensity values and hence have a strong temporal correlation. Note the term “temporal” here need not exactly correspond to the time axis but is borrowed to indicate the axis across different images in the ensemble. Our algorithm tries to find these correlations and segments the image plane spatially into regions of pixels that have similar temporal behavior. We measure this correlation by defining a metric between intensity profiles. A simple way is to use the Euclidean distance, i.e. the temporal correlation between two pixels j and k can be denoted as

    $$d(j,k) = \sqrt{\sum_{i=1}^{N} (a_{ji} - a_{ki})^2}$$

    The smaller d(j,k), the stronger the correlation between the two pixels.
  • In order to decompose the image plane spatially using the temporal correlations between pixels, we run a clustering algorithm on the pixel intensity profiles (the rows of the design matrix A). This produces clusters of temporally correlated pixels. The most straightforward choice is to employ the K-means algorithm, but any other clustering algorithm could be used. As a result the image plane is segmented into several segments of temporally correlated pixels. This can then be used as a template to segment all images in the training set, and a classifier can be built on features extracted from those segments of all images in the training set.
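  • The following sketch shows one possible implementation of the decomposition just described, assuming the training images are registered, equal-sized grayscale arrays; scikit-learn's KMeans is used purely as an example of a suitable clustering algorithm, and the function name build_segmentation_template is our own. The returned label map plays the role of the segmentation template: applied to any registered note image of the same size, it groups the pixels into the temporally correlated regions found from the training set.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_segmentation_template(images, n_segments, random_state=0):
    """Cluster pixel intensity profiles into a segmentation template.

    images : list of N registered grayscale images, each an (r, c) array
    Returns an (r, c) integer array assigning each pixel location to a segment.
    """
    stack = np.stack([img.astype(float) for img in images])  # shape (N, r, c)
    n, r, c = stack.shape
    # Design matrix A: one row per pixel location, one column per image
    A = stack.reshape(n, r * c).T                             # shape (M, N), M = r*c
    A = A - A.mean(axis=0)                                    # zero each image (column) mean
    labels = KMeans(n_clusters=n_segments, n_init=10,
                    random_state=random_state).fit_predict(A)
    return labels.reshape(r, c)
```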
  • In order to achieve the training without utilizing counterfeit notes, a one-class classifier is preferable. Any suitable type of one-class classifier can be used as known in the art, for example neural network based one-class classifiers and statistical based one-class classifiers.
  • Suitable statistical methods for one-class classification are in general based on maximization of the log-likelihood ratio under the null-hypothesis that the observation under consideration is drawn from the target class, and these include the D² test (described in Morrison, DF: Multivariate Statistical Methods (third edition), McGraw-Hill Publishing Company, New York, 1990) which assumes a multivariate Gaussian distribution for the target class (genuine currency). In the case of an arbitrary non-Gaussian distribution the density of the target class can be estimated using, for example, a semi-parametric Mixture of Gaussians (described in Bishop, CM: Neural Networks for Pattern Recognition, Oxford University Press, New York, 1995) or a non-parametric Parzen window (described in Duda, R. O., Hart, P. E., Stork, D. G.: Pattern Classification (second edition), John Wiley & Sons, Inc., New York, 2001), and the distribution of the log-likelihood ratio under the null-hypothesis can be obtained by sampling techniques such as the bootstrap (described in Wang, S., Woodward, W. A., Gray, H. L. et al: A new test for outlier detection from a multivariate mixture distribution, Journal of Computational and Graphical Statistics, 6(3): 285-299, 1997).
  • Other methods which can be employed for one-class classification are Support Vector Data Domain Description (SVDD) (described in Tax, DMJ, Duin, RPW: Support vector domain description, Pattern Recognition Letters, 20(11-12): 1191-1199, 1999), also known as ‘support estimation’ (described in Hayton, P, Schölkopf, B, Tarassenko, L, Anuzis, P: Support Vector Novelty Detection Applied to Jet Engine Vibration Spectra, Advances in Neural Information Processing Systems 13, eds Leen, Todd K., Dietterich, Thomas G. and Tresp, Volker, MIT Press, 946-952, 2001), and Extreme Value Theory (EVT) (described in Roberts, S. J.: Novelty detection using extreme value statistics, IEE Proceedings on Vision, Image & Signal Processing, 146(3): 124-129, 1999). In SVDD the support of the data distribution is estimated, whilst EVT estimates the distribution of extreme values. For this particular application, large numbers of examples of genuine notes are available, so in this case it is possible to obtain reliable estimates of the target class distribution. We therefore choose one-class classification methods that can estimate the density distribution explicitly in a preferred embodiment, although this is not essential. In a preferred embodiment we use one-class classification methods based on the parametric D² test.
  • In a preferred embodiment, the statistical hypothesis tests used for our one-class classifier are detailed as follows:
  • Consider N independent and identically distributed p-dimensional vector samples (the feature set for each banknote) x_1, . . . , x_N ∈ C with an underlying density function with parameters θ given as p(x|θ). The following hypothesis test is given for a new point x_{N+1} such that H_0: x_{N+1} ∈ C vs. H_1: x_{N+1} ∉ C, where C denotes the region where the null hypothesis is true and is defined by p(x|θ). Assuming that the distribution under the alternate hypothesis is uniform, the standard log-likelihood ratio for the null and alternate hypotheses

    $$\lambda = \frac{\sup_{\theta \in \Theta} L_0(\theta)}{\sup_{\theta \in \Theta} L_1(\theta)} = \frac{\sup_{\theta} \prod_{n=1}^{N+1} p(x_n \mid \theta)}{\sup_{\theta} \prod_{n=1}^{N} p(x_n \mid \theta)} \qquad (1)$$

    can be employed as a test statistic for the null-hypothesis. In this preferred embodiment we can use the log-likelihood ratio as the test statistic for the validation of a newly presented note.
  • 1) Feature vectors with multivariate Gaussian density: Under the assumption that the feature vectors describing individual points in a sample are multivariate Gaussian, a test emerges from the above likelihood ratio (1) to assess whether each point in a sample shares a common mean. Consider N independent and identically distributed p-dimensional vector samples x_1, . . . , x_N from a multivariate normal distribution with mean μ and covariance C, whose sample estimates are μ̂_N and Ĉ_N. From the sample consider a random selection denoted as x_0; the associated squared Mahalanobis distance

    $$D^2 = (x_0 - \hat{\mu}_N)^T \hat{C}_N^{-1} (x_0 - \hat{\mu}_N) \qquad (2)$$

    can be shown to be distributed as a central F-distribution with p and N-p-1 degrees of freedom by

    $$F = \frac{(N-p-1)\, N D^2}{p(N-1)^2 - N p D^2}. \qquad (3)$$

    Then, the null hypothesis of a common population mean vector for x_0 and the remaining x_i will be rejected if

    $$F > F_{\alpha;\, p,\, N-p-1}, \qquad (4)$$

    where F_{α; p, N-p-1} is the upper α·100% point of the F-distribution with (p, N-p-1) degrees of freedom. We can make use of the following incremental estimates of the mean and covariance in devising a test for new examples which do not form part of the original sample when an additional datum x_{N+1} is made available, i.e. the mean

    $$\hat{\mu}_{N+1} = \frac{1}{N+1}\left\{ N \hat{\mu}_N + x_{N+1} \right\} \qquad (5)$$

    and the covariance

    $$\hat{C}_{N+1} = \frac{N}{N+1}\hat{C}_N + \frac{N}{(N+1)^2}(x_{N+1} - \hat{\mu}_N)(x_{N+1} - \hat{\mu}_N)^T. \qquad (6)$$

    By using the expressions (5) and (6) and the matrix inversion lemma, Equation (2) for an N-sample reference set and an N+1'th test point becomes

    $$D^2 = \sigma_{N+1}^T \hat{C}_{N+1}^{-1} \sigma_{N+1}, \qquad (7)$$

    where

    $$\sigma_{N+1} = \frac{N}{N+1}(x_{N+1} - \hat{\mu}_N) \qquad (8)$$

    and

    $$\hat{C}_{N+1}^{-1} = \frac{N+1}{N}\hat{C}_N^{-1} - \frac{\hat{C}_N^{-1} \sigma_{N+1} \sigma_{N+1}^T \hat{C}_N^{-1}}{N + \frac{N}{N+1} \sigma_{N+1}^T \hat{C}_N^{-1} \sigma_{N+1}}. \qquad (9)$$

    Denoting σ_{N+1}^T Ĉ_N^{-1} σ_{N+1} by D_{N+1,N}^2, then

    $$D^2 = \frac{N+1}{N} D_{N+1,N}^2 \left\{ 1 - \frac{D_{N+1,N}^2}{D_{N+1,N}^2 + N + 1} \right\}. \qquad (10)$$

    So a new point x_{N+1} can be tested against an estimated and assumed normal distribution for a common estimated mean μ̂_N and covariance Ĉ_N. Employing the log-likelihood ratio (1) for the one-class hypothesis test we derive the test statistic (10) directly. The assumption of multivariate Gaussian feature vectors often does not hold in practice, though we have found it an appropriate pragmatic choice in many applications. We relax this assumption and consider arbitrary densities in the following section.
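  • A minimal sketch of the parametric D² test described above follows; it is not part of the patent, it uses the batch form of equations (2)-(4) rather than the incremental update (5)-(10), the class name D2Classifier is hypothetical, and the use of SciPy for the F-distribution quantile is our own choice.

```python
import numpy as np
from scipy.stats import f as f_dist

class D2Classifier:
    """One-class test based on the squared Mahalanobis distance D^2."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha  # significance level of the hypothesis test

    def fit(self, X):
        """X : (N, p) array of feature vectors from genuine notes only."""
        self.N, self.p = X.shape
        self.mu = X.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        # Upper-alpha point of the F-distribution with (p, N-p-1) d.o.f., equation (4)
        f_crit = f_dist.ppf(1.0 - self.alpha, self.p, self.N - self.p - 1)
        # Critical D^2 obtained by inverting equation (3)
        self.d2_crit = (f_crit * self.p * (self.N - 1) ** 2
                        / (self.N * (self.N - self.p - 1 + f_crit * self.p)))
        return self

    def is_genuine(self, x):
        d = x - self.mu
        d2 = float(d @ self.cov_inv @ d)  # equation (2)
        return d2 <= self.d2_crit         # accept the null hypothesis -> genuine
```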
  • 2) Feature vectors with arbitrary density: A probability density estimate p̂(x; θ) can be obtained from the finite data sample S = {x_1, . . . , x_N} ∈ ℝ^d drawn from an arbitrary density p(x), by using any suitable semi-parametric (e.g. Gaussian Mixture Model) or non-parametric (e.g. Parzen window method) density estimation method as known in the art. This density can then be employed in computing the log-likelihood ratio (1). Unlike the case of the multivariate Gaussian distribution there is no analytic distribution for the test statistic λ under the null hypothesis, so numerical bootstrap methods can be employed to obtain the otherwise non-analytic null distribution under the estimated density, and the various critical values λ_crit can be established from the empirical distribution obtained. It can be shown that in the limit as N→∞, the likelihood ratio can be estimated by

    $$\lambda = \frac{\sup_{\theta \in \Theta} L_0(\theta)}{\sup_{\theta \in \Theta} L_1(\theta)} \approx \hat{p}(x_{N+1}; \hat{\theta}_N) \qquad (13)$$

    where p̂(x_{N+1}; θ̂_N) denotes the probability density of x_{N+1} under the model estimated from the original N samples.
  • After generating B bootstrap sets of N samples from the reference data set and using each of these to estimate the parameters of the density distribution θ̂_N^i, B bootstrap replicates of the test statistic λ_crit^i, i = 1, . . . , B, can be obtained by randomly selecting an N+1'th sample and computing p̂(x_{N+1}; θ̂_N^i) ≈ λ_crit^i. By ordering the λ_crit^i in ascending order, the critical value λ_α can be defined so as to reject the null-hypothesis at the desired significance level if λ ≤ λ_α, where λ_α is the jth smallest value of λ_crit^i and α = j/(B+1).
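  • The bootstrap procedure can be sketched as follows, here with a Parzen-window (Gaussian kernel) density estimate standing in for the density estimator; the function name and the use of SciPy's gaussian_kde are assumptions on our part, not part of the patent.

```python
import numpy as np
from scipy.stats import gaussian_kde

def bootstrap_lambda_crit(X, alpha=0.05, n_boot=500, seed=0):
    """Estimate the critical value of the test statistic lambda by bootstrap.

    X : (N, p) array of feature vectors from genuine notes (the reference set).
    Returns lambda_alpha; a note whose estimated density falls at or below
    this value is rejected as not belonging to the target class.
    """
    rng = np.random.default_rng(seed)
    N = len(X)
    lam = []
    for _ in range(n_boot):
        idx = rng.integers(0, N, size=N)   # bootstrap replicate of the reference set
        kde = gaussian_kde(X[idx].T)       # Parzen-window density estimate (theta_hat_N^i)
        x_new = X[rng.integers(0, N)]      # randomly selected "N+1'th" sample
        lam.append(float(kde(x_new)[0]))   # lambda_crit^i ~ p_hat(x_N+1; theta_hat_N^i)
    lam = np.sort(lam)
    j = max(int(alpha * (n_boot + 1)), 1)  # alpha = j / (B + 1)
    return lam[j - 1]                      # j-th smallest value
```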
  • Preferably the method of forming the classifier is repeated for different numbers of segments and tested using images of banknotes known to be either counterfeit or not. The number of segments giving the best performance is then selected and the classifier using that number of segments is used. We found the best number of segments to be from about 2 to 12, although any suitable number of segments can be used.
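  • A sketch of this model-selection loop is given below; it relies on the hypothetical helpers introduced in the earlier sketches (build_segmentation_template, extract_segment_features, D2Classifier) and on a small labelled test set, none of which are prescribed by the patent.

```python
import numpy as np

def select_n_segments(train_images, test_images, test_labels, candidates=range(2, 13)):
    """Pick the segment count whose classifier best separates a labelled test set.

    test_labels : 1 for genuine, 0 for counterfeit (test notes only; the
                  training images are genuine notes exclusively).
    """
    best_k, best_acc = None, -1.0
    for k in candidates:
        template = build_segmentation_template(train_images, n_segments=k)
        train_X = np.array([extract_segment_features(img, template, k)
                            for img in train_images])
        clf = D2Classifier().fit(train_X)
        preds = [clf.is_genuine(extract_segment_features(img, template, k))
                 for img in test_images]
        acc = np.mean(np.array(preds, dtype=int) == np.asarray(test_labels))
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k
```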
  • FIG. 2 is a schematic diagram of an apparatus 20 for creating a classifier 22 for banknote validation. It comprises:
      • an input 21 arranged to access a training set of banknote images;
      • a processor 23 arranged to create a segmentation template using the training set images;
      • a segmentor 24 arranged to segment each of the training set images using the segmentation template;
      • a feature extractor 25 arranged to extract one or more features from each segment in each of the training set images; and
      • classification forming means 26 arranged to form the classifier using the feature information;
        wherein the processor is arranged to create the segmentation template on the basis of information from all images in the training set. For example, by using spatio-temporal image decomposition described above.
  • FIG. 3 is a schematic diagram of a banknote validator 31. It comprises:
      • an input arranged to receive at least one image 30 of a banknote to be validated;
      • a segmentation template 32;
      • a processor 33 arranged to segment the image of the banknote using the segmentation template;
      • a feature extractor 34 arranged to extract one or more features from each segment of the banknote image;
      • a classifier 35 arranged to classify the banknote as being either valid or not on the basis of the extracted features;
        wherein the segmentation template is formed on the basis of information about each of a set of training images of banknotes. It is noted that it is not essential for the components of FIG. 3 to be independent of one another; these may be integral.
  • FIG. 4 is a flow diagram of a method of validating a banknote. The method comprises:
      • accessing at least one image of a banknote to be validated;
      • accessing a segmentation template;
      • segmenting the image of the banknote using the segmentation template;
      • extracting features from each segment of the banknote image;
      • classifying the banknote as being either valid or not on the basis of the extracted features using a classifier;
        wherein the segmentation template is formed on the basis of information about each of a set of training images of banknotes. These method steps can be carried out in any suitable order or in combination as is known in the art. The segmentation template can be said to implicitly comprise information about each of the images in the training set because it has been formed on the basis of that information.
  • However, the explicit information in the segmentation template can be a simple file with a list of pixel addresses to be included in each segment.
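  • One possible concrete form of such a file is sketched below; the JSON layout and the function names are our own assumptions, chosen only to illustrate the idea of storing a list of pixel addresses per segment.

```python
import json
import numpy as np

def save_template(template, path):
    """Store a label-map template as lists of flat pixel addresses per segment."""
    flat = template.ravel()
    data = {
        "shape": list(template.shape),
        "segments": {str(s): np.flatnonzero(flat == s).tolist()
                     for s in np.unique(flat)},
    }
    with open(path, "w") as fh:
        json.dump(data, fh)

def load_template(path):
    """Rebuild the label map from the per-segment pixel-address lists."""
    with open(path) as fh:
        data = json.load(fh)
    flat = np.empty(int(np.prod(data["shape"])), dtype=int)
    for s, addresses in data["segments"].items():
        flat[np.asarray(addresses, dtype=int)] = int(s)
    return flat.reshape(data["shape"])
```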
  • FIG. 5 is a schematic diagram of a self-service apparatus 51 with a banknote validator 53. It comprises:
      • a means for accepting banknotes 50,
      • imaging means for obtaining digital images of the banknotes 52; and
      • a banknote validator 53 as described above.
  • The means for accepting banknotes is of any suitable type as known in the art as is the imaging means.
  • Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
  • It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art.

Claims (24)

1. A method of creating a classifier for banknote validation said method comprising the steps of:
(i) accessing a training set of banknote images;
(ii) creating a segmentation template using the training set images;
(iii) segmenting each of the training set images using the segmentation template;
(iv) extracting one or more features from each segment in each of the training set images; and
(v) forming the classifier using the feature information;
wherein the segmentation template is created on the basis of information from all images in the training set.
2. A method as claimed in claim 1 wherein the information from all images in the training set comprises morphological information.
3. A method as claimed in claim 1 wherein the information from all images in the training set comprises information about a pixel at the same location in each of the training set images.
4. A method as claimed in claim 2, wherein the information from all the images comprises pixel intensity profiles.
5. A method as claimed in claim 1, wherein the segmentation template is created by using a clustering algorithm to cluster pixel locations in an image plane on the basis of the information from all the images in the training set.
6. A method as claimed in claim 1, wherein the classifier is a one-class classifier.
7. A method as claimed in claim 1, wherein the classifier is a statistical one-class classifier.
8. A method as claimed in claim 7 and wherein the step of forming the classifier comprises estimating a distribution of a statistic relating to banknotes in a target class, said target class comprising genuine currency.
9. A method as claimed in claim 1, which further comprises using a feature selection algorithm to select one or more types of feature to use in step (iv) of extracting features.
10. A method as claimed in claim 1, which further comprises forming the classifier on the basis of specified information about a particular denomination and currency of banknotes.
11. A method as claimed in claim 1, which further comprises combining classifiers where necessary in step (v) of forming the classifier.
12. An apparatus for creating a banknote classifier comprising:
(i) an input arranged to access a training set of banknote images;
(ii) a processor arranged to create a segmentation template using the training set images;
(iii) a segmentor arranged to segment each of the training set images using the segmentation template;
(iv) a feature extractor arranged to extract one or more features from each segment in each of the training set images; and
(v) classification forming means arranged to form the classifier using the feature information;
wherein the processor is arranged to create the segmentation template on the basis of information from all images in the training set.
13. A banknote validator comprising:
(i) an input arranged to receive at least one image of a banknote to be validated;
(ii) a segmentation template;
(iii) a processor arranged to segment the image of the banknote using the segmentation template;
(iv) a feature extractor arranged to extract one or more features from each segment of the banknote image;
(v) a classifier arranged to classify the banknote as being either valid or not on the basis of the extracted features;
wherein the segmentation template is formed on the basis of information about each of a set of training images of banknotes.
14. A banknote validator as claimed in claim 13 wherein the information about each of a set of training images comprises morphological information.
15. A banknote validator as claimed in claim 13, wherein the information about each of a set of training images comprises information about a pixel at the same location in each of the training set images.
16. A banknote validator as claimed in claim 13, wherein the information about each of a set of training images comprises pixel intensity profiles.
17. A banknote validator as claimed in claim 13, wherein the classifier is a one-class classifier.
18. A banknote validator as claimed in claim 13, wherein the classifier is a statistical one-class classifier.
19. A banknote validator as claimed in claim 13, which further comprises a plurality of classifiers and a combiner arranged to combine the results of each of the classifiers.
20. A method of validating a banknote comprising:
(i) accessing at least one image of a banknote to be validated;
(ii) accessing a segmentation template;
(iii) segmenting the image of the banknote using the segmentation template;
(iv) extracting features from each segment of the banknote image;
(v) classifying the banknote as being either valid or not on the basis of the extracted features using a classifier;
wherein the segmentation template is formed on the basis of information about each of a set of training images of banknotes.
21. A method as claimed in claim 20, wherein said classifier is a one-class classifier.
22. A method as claimed in claim 20, wherein said classifier is a statistical classifier.
23. A computer program comprising computer program code means adapted to perform a method of creating a classifier for banknote validation, said method comprising the steps of: (i) accessing a training set of banknote images; (ii) creating a segmentation template using the training set images; (iii) segmenting each of the training set images using the segmentation template; (iv) extracting one or more features from each segment in each of the training set images; and (v) forming the classifier using the feature information; wherein the segmentation template is created on the basis of information from all images in the training set.
24. A computer program as claimed in claim 23 embodied on a computer readable medium.
US11/366,147 2005-12-16 2006-03-02 Banknote validation Abandoned US20070140551A1 (en)

Priority Applications (24)

Application Number Priority Date Filing Date Title
US11/366,147 US20070140551A1 (en) 2005-12-16 2006-03-02 Banknote validation
BRPI0619845-7A BRPI0619845A2 (en) 2005-12-16 2006-09-26 banknote validation
PCT/GB2006/003565 WO2007068867A1 (en) 2005-12-16 2006-09-26 Banknote validation
EP06779545A EP1964073A1 (en) 2005-12-16 2006-09-26 Banknote validation
JP2008545069A JP5219211B2 (en) 2005-12-16 2006-09-26 Banknote confirmation method and apparatus
CN2006800473583A CN101331526B (en) 2005-12-16 2006-09-26 Banknote validation
PCT/GB2006/004670 WO2007068928A1 (en) 2005-12-16 2006-12-14 Detecting improved quality counterfeit media
EP06820512A EP1964074A1 (en) 2005-12-16 2006-12-14 Processing images of media items before validation
JP2008545088A JP5175210B2 (en) 2005-12-16 2006-12-14 Media confirmation device and confirmation method
EP06820517A EP1964075A1 (en) 2005-12-16 2006-12-14 Detecting improved quality counterfeit media
PCT/GB2006/004663 WO2007068923A1 (en) 2005-12-16 2006-12-14 Processing images of media items before validation
EP06831386A EP1964076A1 (en) 2005-12-16 2006-12-14 Detecting improved quality counterfeit media items
JP2008545086A JP5177817B2 (en) 2005-12-16 2006-12-14 Medium confirmation method and medium confirmation apparatus
CN2006800472788A CN101366060B (en) 2005-12-16 2006-12-14 Media validation
CN2006800473687A CN101366061B (en) 2005-12-16 2006-12-14 Detecting improved quality counterfeit media items
PCT/GB2006/004676 WO2007068930A1 (en) 2005-12-16 2006-12-14 Detecting improved quality counterfeit media items
BRPI0619926-7A BRPI0619926A2 (en) 2005-12-16 2006-12-14 detection of best quality counterfeit media items
CN2006800475165A CN101331527B (en) 2005-12-16 2006-12-14 Processing images of media items before validation
JP2008545085A JP5044567B2 (en) 2005-12-16 2006-12-14 Medium item confirmation device and self-service device
BRPI0620625-5A BRPI0620625A2 (en) 2005-12-16 2006-12-14 enhanced quality counterfeit media detection
BRPI0620308-6A BRPI0620308A2 (en) 2005-12-16 2006-12-14 image processing of media items before validation
US11/639,576 US8086017B2 (en) 2005-12-16 2006-12-15 Detecting improved quality counterfeit media
US11/639,597 US20070154079A1 (en) 2005-12-16 2006-12-15 Media validation
US11/639,593 US20070154078A1 (en) 2005-12-16 2006-12-15 Processing images of media items before validation

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US30553705A 2005-12-16 2005-12-16
US11/366,147 US20070140551A1 (en) 2005-12-16 2006-03-02 Banknote validation

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US30553705A Continuation-In-Part 2005-12-16 2005-12-16

Related Child Applications (3)

Application Number Title Priority Date Filing Date
US11/639,576 Continuation-In-Part US8086017B2 (en) 2005-12-16 2006-12-15 Detecting improved quality counterfeit media
US11/639,597 Continuation-In-Part US20070154079A1 (en) 2005-12-16 2006-12-15 Media validation
US11/639,593 Continuation-In-Part US20070154078A1 (en) 2005-12-16 2006-12-15 Processing images of media items before validation

Publications (1)

Publication Number Publication Date
US20070140551A1 true US20070140551A1 (en) 2007-06-21

Family

ID=37529297

Family Applications (4)

Application Number Title Priority Date Filing Date
US11/366,147 Abandoned US20070140551A1 (en) 2005-12-16 2006-03-02 Banknote validation
US11/639,597 Abandoned US20070154079A1 (en) 2005-12-16 2006-12-15 Media validation
US11/639,593 Abandoned US20070154078A1 (en) 2005-12-16 2006-12-15 Processing images of media items before validation
US11/639,576 Active 2028-12-14 US8086017B2 (en) 2005-12-16 2006-12-15 Detecting improved quality counterfeit media

Family Applications After (3)

Application Number Title Priority Date Filing Date
US11/639,597 Abandoned US20070154079A1 (en) 2005-12-16 2006-12-15 Media validation
US11/639,593 Abandoned US20070154078A1 (en) 2005-12-16 2006-12-15 Processing images of media items before validation
US11/639,576 Active 2028-12-14 US8086017B2 (en) 2005-12-16 2006-12-15 Detecting improved quality counterfeit media

Country Status (5)

Country Link
US (4) US20070140551A1 (en)
EP (4) EP1964073A1 (en)
JP (4) JP5219211B2 (en)
BR (4) BRPI0619845A2 (en)
WO (4) WO2007068867A1 (en)

Families Citing this family (39)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10160578A1 (en) * 2001-12-10 2004-02-12 Giesecke & Devrient Gmbh Method and device for checking the authenticity of sheet material
US20070140551A1 (en) * 2005-12-16 2007-06-21 Chao He Banknote validation
JP4999163B2 (en) * 2006-04-17 2012-08-15 富士フイルム株式会社 Image processing method, apparatus, and program
US8503796B2 (en) 2006-12-29 2013-08-06 Ncr Corporation Method of validating a media item
US8472676B2 (en) 2007-06-06 2013-06-25 De La Rue International Limited Apparatus and method for analysing a security document
WO2008149050A1 (en) * 2007-06-06 2008-12-11 De La Rue International Limited Apparatus for analysing a security document
US8630475B2 (en) 2007-12-10 2014-01-14 Glory Ltd. Banknote handling machine and banknote handling method
CA2707331C (en) * 2007-12-10 2015-01-27 Glory Ltd. Banknote handling machine and banknote handling method
US8094917B2 (en) * 2008-04-14 2012-01-10 Primax Electronics Ltd. Method for detecting monetary banknote and performing currency type analysis operation
US20090260947A1 (en) * 2008-04-18 2009-10-22 Xu-Hua Liu Method for performing currency value analysis operation
US8085972B2 (en) * 2008-07-03 2011-12-27 Primax Electronics Ltd. Protection method for preventing hard copy of document from being released or reproduced
US7844098B2 (en) * 2008-07-21 2010-11-30 Primax Electronics Ltd. Method for performing color analysis operation on image corresponding to monetary banknote
WO2010035163A1 (en) * 2008-09-29 2010-04-01 Koninklijke Philips Electronics, N.V. Method for increasing the robustness of computer-aided diagnosis to image processing uncertainties
CN101853389A (en) 2009-04-01 2010-10-06 索尼株式会社 Detection device and method for multi-class targets
RU2438182C1 (en) 2010-04-08 2011-12-27 Общество С Ограниченной Ответственностью "Конструкторское Бюро "Дорс" (Ооо "Кб "Дорс") Method of processing banknotes (versions)
RU2421818C1 (en) 2010-04-08 2011-06-20 Общество С Ограниченной Ответственностью "Конструкторское Бюро "Дорс" (Ооо "Кб "Дорс") Method for classification of banknotes (versions)
CN101908241B (en) * 2010-08-03 2012-05-16 广州广电运通金融电子股份有限公司 Method and system for identifying valued documents
DE102010055427A1 (en) * 2010-12-21 2012-06-21 Giesecke & Devrient Gmbh Method and device for investigating the optical state of value documents
DE102010055974A1 (en) * 2010-12-23 2012-06-28 Giesecke & Devrient Gmbh Method and device for determining a class reference data set for the classification of value documents
NL2006990C2 (en) * 2011-06-01 2012-12-04 Nl Bank Nv Method and device for classifying security documents such as banknotes.
CN102324134A (en) * 2011-09-19 2012-01-18 广州广电运通金融电子股份有限公司 Valuable document identification method and device
CN102592352B (en) * 2012-02-28 2014-02-12 广州广电运通金融电子股份有限公司 Recognition device and recognition method of papery medium
US9036890B2 (en) 2012-06-05 2015-05-19 Outerwall Inc. Optical coin discrimination systems and methods for use with consumer-operated kiosks and the like
US9734648B2 (en) 2012-12-11 2017-08-15 Ncr Corporation Method of categorising defects in a media item
US9947163B2 (en) * 2013-02-04 2018-04-17 Kba-Notasys Sa Authentication of security documents and mobile device to carry out the authentication
US8739955B1 (en) * 2013-03-11 2014-06-03 Outerwall Inc. Discriminant verification systems and methods for use in coin discrimination
CN103324946B (en) * 2013-07-11 2016-08-17 广州广电运通金融电子股份有限公司 A kind of method and system of paper money recognition classification
US9727821B2 (en) * 2013-08-16 2017-08-08 International Business Machines Corporation Sequential anomaly detection
US10650232B2 (en) 2013-08-26 2020-05-12 Ncr Corporation Produce and non-produce verification using hybrid scanner
CN103729645A (en) * 2013-12-20 2014-04-16 湖北微模式科技发展有限公司 Second-generation ID card area positioning and extraction method and device based on monocular camera
US9443367B2 (en) 2014-01-17 2016-09-13 Outerwall Inc. Digital image coin discrimination for use with consumer-operated kiosks and the like
ES2549461B1 (en) * 2014-02-21 2016-10-07 Banco De España METHOD AND DEVICE FOR THE CHARACTERIZATION OF THE STATE OF USE OF BANK TICKETS, AND ITS CLASSIFICATION IN APTOS AND NOT SUITABLE FOR CIRCULATION
US9336638B2 (en) * 2014-03-25 2016-05-10 Ncr Corporation Media item validation
US10762736B2 (en) * 2014-05-29 2020-09-01 Ncr Corporation Currency validation
US10275971B2 (en) * 2016-04-22 2019-04-30 Ncr Corporation Image correction
US10452908B1 (en) 2016-12-23 2019-10-22 Wells Fargo Bank, N.A. Document fraud detection
JP7093075B2 (en) * 2018-04-09 2022-06-29 東芝エネルギーシステムズ株式会社 Medical image processing equipment, medical image processing methods, and programs
EP3829152B1 (en) * 2019-11-26 2023-12-20 European Central Bank Computer-implemented method for copy protection, data processing device and computer program product
CN113240643A (en) * 2021-05-14 2021-08-10 广州广电运通金融电子股份有限公司 Banknote quality detection method, system and medium based on multispectral image

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2949823B2 (en) * 1990-10-12 1999-09-20 株式会社村田製作所 Method for manufacturing flat type electrochemical device
BR9406727A * 1993-05-28 1996-01-30 Axiom Bildverarbeit Syst Automatic inspection apparatus for classifying an article according to a surface feature; automatic selection apparatus; automatic inspection apparatus for categorizing a workpiece based on its surface pattern and/or core and/or texture; process for automatically inspecting an article to classify it according to a surface characteristic
JP3611006B2 (en) * 1997-06-19 2005-01-19 富士ゼロックス株式会社 Image area dividing method and image area dividing apparatus
JP2000215314A (en) * 1999-01-25 2000-08-04 Matsushita Electric Ind Co Ltd Image identifying device
JP2000341512A (en) * 1999-05-27 2000-12-08 Matsushita Electric Ind Co Ltd Image reader
JP2001331839A (en) * 2000-05-22 2001-11-30 Glory Ltd Method and device for discriminating paper money
DE60033535T2 (en) 2000-12-15 2007-10-25 Mei, Inc. Currency validator
US20030042438A1 (en) * 2001-08-31 2003-03-06 Lawandy Nabil M. Methods and apparatus for sensing degree of soiling of currency, and the presence of foreign material
US20030099379A1 (en) * 2001-11-26 2003-05-29 Monk Bruce C. Validation and verification apparatus and method
JP4102647B2 (en) * 2002-11-05 2008-06-18 日立オムロンターミナルソリューションズ株式会社 Banknote transaction equipment
JP4252294B2 (en) * 2002-12-04 2009-04-08 株式会社高見沢サイバネティックス Bill recognition device and bill processing device
JP4332414B2 (en) * 2003-03-14 2009-09-16 日立オムロンターミナルソリューションズ株式会社 Paper sheet handling equipment
FR2857481A1 (en) * 2003-07-08 2005-01-14 Thomson Licensing Sa METHOD AND DEVICE FOR DETECTING FACES IN A COLOR IMAGE
JP4532915B2 (en) * 2004-01-29 2010-08-25 キヤノン株式会社 Pattern recognition learning method, pattern recognition learning device, image input device, computer program, and computer-readable recording medium
JP3978614B2 (en) * 2004-09-06 2007-09-19 富士ゼロックス株式会社 Image region dividing method and image region dividing device
JP2006338548A (en) * 2005-06-03 2006-12-14 Sony Corp Printing paper sheet management system, printing paper sheet registration device, method, and program, printing paper sheet discrimination device, method and program
US7961937B2 (en) * 2005-10-26 2011-06-14 Hewlett-Packard Development Company, L.P. Pre-normalization data classification

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5048095A (en) * 1990-03-30 1991-09-10 Honeywell Inc. Adaptive image segmentation system
US5729623A (en) * 1993-10-18 1998-03-17 Glory Kogyo Kabushiki Kaisha Pattern recognition apparatus and method of optimizing mask for pattern recognition according to genetic algorithm
US6163618A (en) * 1997-11-21 2000-12-19 Fujitsu Limited Paper discriminating apparatus
US20030021459A1 (en) * 2000-05-24 2003-01-30 Armando Neri Controlling banknotes
US20030128874A1 (en) * 2002-01-07 2003-07-10 Xerox Corporation Image type classification using color discreteness features
US20030217906A1 (en) * 2002-05-22 2003-11-27 Gaston Baudat Currency validator
US20040183923A1 (en) * 2003-03-17 2004-09-23 Sharp Laboratories Of America, Inc. System and method for attenuating color-cast correction in image highlight areas
US20040247169A1 (en) * 2003-06-06 2004-12-09 Ncr Corporation Currency validation
US7639858B2 (en) * 2003-06-06 2009-12-29 Ncr Corporation Currency validation
US20070154099A1 (en) * 2005-12-16 2007-07-05 Chao He Detecting improved quality counterfeit media
US20070154079A1 (en) * 2005-12-16 2007-07-05 Chao He Media validation
US20080123931A1 (en) * 2006-12-29 2008-05-29 Ncr Corporation Automated recognition of valuable media
US20080159614A1 (en) * 2006-12-29 2008-07-03 Ncr Corporation Validation template for valuable media of multiple classes

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8540142B1 (en) * 2005-12-20 2013-09-24 Diebold Self-Service Systems Banking machine controlled responsive to data read from data bearing records
US8706669B2 (en) * 2006-07-28 2014-04-22 Mei, Inc. Classification using support vector machines and variables selection
US20090307167A1 (en) * 2006-07-28 2009-12-10 Mei, Inc. Classification using support vector machines and variables selection
US20090324053A1 (en) * 2008-06-30 2009-12-31 Ncr Corporation Media Identification
US8682056B2 (en) * 2008-06-30 2014-03-25 Ncr Corporation Media identification
US20110186402A1 (en) * 2008-07-29 2011-08-04 Mei, Inc. Currency discrimination
US8474592B2 (en) 2008-07-29 2013-07-02 Mei, Inc. Currency discrimination
US20140241618A1 (en) * 2013-02-28 2014-08-28 Hewlett-Packard Development Company, L.P. Combining Region Based Image Classifiers
US9824268B2 (en) * 2014-04-29 2017-11-21 Ncr Corporation Media item validation
US20150310268A1 (en) * 2014-04-29 2015-10-29 Ncr Corporation Media item validation
CN104299313A (en) * 2014-11-04 2015-01-21 浙江大学 Paper money identification method, device and system
US10542961B2 (en) 2015-06-15 2020-01-28 The Research Foundation For The State University Of New York System and method for infrasonic cardiac monitoring
US11478215B2 (en) 2015-06-15 2022-10-25 The Research Foundation for the State University o System and method for infrasonic cardiac monitoring
US11403905B2 (en) * 2015-09-16 2022-08-02 Giesecke+Devrient Currency Technology Gmbh Device and method for counting bundles of value documents, in particular bundles of bank notes
CN106056752A (en) * 2016-05-25 2016-10-26 武汉大学 Banknote authentication method based on random forest
CN108460649A (en) * 2017-02-22 2018-08-28 阿里巴巴集团控股有限公司 Image recognition method and device
US20180350869A1 (en) * 2017-05-30 2018-12-06 Ncr Corporation Media security validation
US10475846B2 (en) * 2017-05-30 2019-11-12 Ncr Corporation Media security validation
US20210342797A1 (en) * 2020-05-04 2021-11-04 Bank Of America Corporation Dynamic Unauthorized Activity Detection and Control System

Also Published As

Publication number Publication date
BRPI0620625A2 (en) 2011-11-16
JP5044567B2 (en) 2012-10-10
EP1964075A1 (en) 2008-09-03
BRPI0620308A2 (en) 2011-11-08
JP5175210B2 (en) 2013-04-03
EP1964076A1 (en) 2008-09-03
WO2007068930A1 (en) 2007-06-21
JP5177817B2 (en) 2013-04-10
EP1964073A1 (en) 2008-09-03
BRPI0619845A2 (en) 2011-10-18
JP2009527027A (en) 2009-07-23
US20070154079A1 (en) 2007-07-05
US20070154078A1 (en) 2007-07-05
EP1964074A1 (en) 2008-09-03
JP2009519532A (en) 2009-05-14
BRPI0619926A2 (en) 2011-10-25
JP5219211B2 (en) 2013-06-26
JP2009527028A (en) 2009-07-23
US20070154099A1 (en) 2007-07-05
WO2007068923A1 (en) 2007-06-21
WO2007068867A1 (en) 2007-06-21
JP2009527029A (en) 2009-07-23
US8086017B2 (en) 2011-12-27
WO2007068928A1 (en) 2007-06-21

Similar Documents

Publication Publication Date Title
US20070140551A1 (en) Banknote validation
US7639858B2 (en) Currency validation
CN101331527B (en) Processing images of media items before validation
JP5344668B2 (en) Method for automatically confirming securities media item and method for generating template for automatically confirming securities media item
Zeggeye et al. Automatic recognition and counterfeit detection of Ethiopian paper currency
Verma et al. Indian currency recognition based on texture analysis
Darwish et al. Building an expert system for printer forensics: A new printer identification model based on niching genetic algorithm
Kiruthika et al. Image quality assessment based fake face detection
Dhar et al. Paper currency detection system based on combined SURF and LBP features
Gebremeskel et al. Developing a Model for Detection of Ethiopian Fake Banknote Using Deep Learning
KR20120084946A (en) Method for detecting counterfeits of banknotes using bayesian approach
Vishnu et al. Principal component analysis on Indian currency recognition
Patgar et al. An unsupervised intelligent system to detect fabrication in photocopy document using geometric moments and gray level co-occurrence matrix
Vishnu et al. Currency detection using similarity indices method
Al-Frajat Selection of Robust Features for Coin Recognition and Counterfeit Coin Detection
Prakash et al. An Efficient Detection of Counterfeit Currency Using a Hybrid Framework
Kodati et al. Detection of Fake Currency Using Machine Learning Models

Legal Events

Date Code Title Description
AS Assignment

Owner name: NCR CORPORATION, OHIO

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HE, CHAO;ROSS, GARY;REEL/FRAME:017614/0207

Effective date: 20060213

AS Assignment

Owner name: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT, ILLINOIS

Free format text: SECURITY AGREEMENT;ASSIGNORS:NCR CORPORATION;NCR INTERNATIONAL, INC.;REEL/FRAME:032034/0010

Effective date: 20140106

Owner name: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT

Free format text: SECURITY AGREEMENT;ASSIGNORS:NCR CORPORATION;NCR INTERNATIONAL, INC.;REEL/FRAME:032034/0010

Effective date: 20140106

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION

AS Assignment

Owner name: NCR VOYIX CORPORATION, GEORGIA

Free format text: RELEASE OF PATENT SECURITY INTEREST;ASSIGNOR:JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT;REEL/FRAME:065346/0531

Effective date: 20231016