GB2342207A - Distinguishing part of a scene - Google Patents

Distinguishing part of a scene

Info

Publication number
GB2342207A
GB2342207A (application GB9821377A; granted as GB2342207B)
Authority
GB
United Kingdom
Prior art keywords
transformation
image
scene
main
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB9821377A
Other versions
GB9821377D0 (en)
GB2342207B (en)
Inventor
David Capel
Andrew Zisserman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oxford University Innovation Ltd
Original Assignee
Oxford University Innovation Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oxford University Innovation Ltd filed Critical Oxford University Innovation Ltd
Priority to GB9821377A priority Critical patent/GB2342207B/en
Publication of GB9821377D0 publication Critical patent/GB9821377D0/en
Publication of GB2342207A publication Critical patent/GB2342207A/en
Application granted granted Critical
Publication of GB2342207B publication Critical patent/GB2342207B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12Fingerprints or palmprints
    • G06V40/1347Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Collating Specific Patterns (AREA)
  • Image Input (AREA)
  • Image Analysis (AREA)

Abstract

A method of distinguishing part of a scene, eg a fingerprint on a banknote, includes obtaining a main image of the scene including the part, obtaining a control image of the scene without the part, calculating a transformation to bring the main and control images into registration (10), applying the transformation (12) and comparing the images after the transformation (22) so as to obtain an image of the part.

Description

METHOD AND APPARATUS FOR DISTINGUISHING PART OF A SCENE

The present invention relates to a method and apparatus for distinguishing part of a scene, in particular for separating an additional part of a scene or an alteration to a scene from its background.
In the field of forensics, it is often necessary to study a fingerprint and compare that fingerprint with a database of known fingerprints. This may be done by the eye of a trained expert or, in some cases, by a computerised automatic comparison system.
Whether the identification process is carried out by eye or by computer, it is necessary to have a good image of the fingerprint in question. Forensic dyes are known to highlight the fingerprint, but there is often still some difficulty in distinguishing the fingerprint image from a background pattern.
It has been proposed to use computerised image processing techniques and Fourier techniques to remove periodic background images from the fingerprint image. However, in many instances, for instance when fingerprints are found on bank notes, there is no periodicity to the background image and these techniques cannot be applied.
According to the present invention there is provided a method of distinguishing part of a scene comprising:
obtaining a main image of the scene including said part; obtaining a control image of the scene without said part; calculating a transformation to bring the main and control images into registration; applying the transformation; and comparing the images after the transformation so as to obtain an image of said part.
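The four claimed steps (obtain images, register, transform, compare) can be illustrated end to end on a toy example. The sketch below is not the patented implementation: it substitutes a brute-force integer-shift search for the feature-based registration described later in the specification, and all function names and parameters are illustrative only.

```python
import numpy as np

def register_translation(main, control, max_shift=5):
    """Toy registration: brute-force search for the integer (dy, dx)
    shift that best aligns `control` to `main` (a stand-in for the
    feature-based transformation estimator in the specification)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(control, dy, axis=0), dx, axis=1)
            err = np.abs(main - shifted).mean()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def distinguish_part(main, control, max_shift=5, thresh=0.5):
    """Register, apply the transformation, then compare: returns a
    mask of pixels present in `main` but absent from `control`."""
    dy, dx = register_translation(main, control, max_shift)
    warped = np.roll(np.roll(control, dy, axis=0), dx, axis=1)
    return np.abs(main - warped) > thresh
```

On real images the registration step would be the planar homography estimated from matched features, and the comparison step benefits from photometric correction before thresholding; this sketch only shows the shape of the pipeline.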
According to the present invention, there is also provided an apparatus for distinguishing part of a scene comprising:
a comparator for comparing a main image of the scene including said part and a control image of the scene without said part, so as to calculate a transformation to bring the main and control images into registration; a transformer for applying the transformation; and means for comparing the images after the transformation so as to obtain an image of said part.
In this way, the distinguished part of the scene can be analysed by eye or by some computerised analysis. The part may comprise a fingerprint on a bank note and the fingerprint on the bank note may be separated from the image of the bank note by obtaining another image of another bank note. By providing the transformation, it is possible to use a separate bank note to remove the image of the bank note from the fingerprint.
The same technique can be used to obtain images of footprints on a floor. In particular, photographs may be taken at a crime scene, the floor cleaned and then further photographs taken. By use of the transformation, it is not necessary for the camera position, lens, etc to be the same for the subsequent photographs. According to the present invention, the transformation will allow the later photographs to be used to remove the image of the floor, etc.
The invention is also applicable to distinguishing changes in satellite or aerial images and is not dependent on images being obtained from the same position or knowledge of the position at which the images were obtained.
Preferably, the geometric transformation calculator is for calculating a global geometric transformation for mapping one of the main and control images onto the other of the main and control images and, where the comparator includes a feature detector for extracting image features from the main and control images suitable for matching the images, the geometric transformation calculator calculates the transformation to map a large fraction of the image features of one of the main and control images onto the image features of the other of the main and control images.
This enables the control image to be obtained independently of the initial main image and yet still be used effectively to remove unwanted parts of the main image.
Preferably, the geometric transformation calculator calculates a transformation on the basis of 3 to 8 degrees of freedom homography.
Thus, for a 2-dimensional scanned scene, such as a bank note, it is possible to apply a geometric transformation with merely 3 degrees of freedom. However, for a photographic image of a plane within a 3-dimensional scene, a transformation with 8 degrees of freedom may be used to match images taken with cameras at different positions.
Preferably, the comparator includes a photometric transformation calculator for calculating a photometric transformation which photometrically maps one of the main and control images onto the other of the main and control images and the transformer comprises means for applying the photometric transformation.
In this way, where colour variations exist between the main and control images of the scene, these can be corrected, such that the ultimate comparison between the main and control images of the scene will result in correct identification of the required part.
The invention will be more clearly understood from the following description, given by way of example only, with reference to the accompanying drawings, in which:
Figure 1 illustrates schematically an embodiment of the present invention; and Figures 2(a) to (f) illustrate the present invention applied to distinguishing a fingerprint on a bank note.
An embodiment of the present invention will first be described generally with reference to obtaining a clear image of a fingerprint on a bank note.
Figure 1 illustrates schematically an apparatus for carrying out the embodiment. The various functional blocks of Figure 1 are used to illustrate the various functions of the embodiment, even though a practical embodiment may not have such distinct functional elements, but may use single multi-functional elements.
For instance, although several memories are illustrated, embodiments may use only a single shared memory.
An imager 2 is provided for capturing an image of a scene and providing this to one of two memories 4 and 6.
The imager may comprise a device such as a high resolution flat bed scanner.
The imager is used to obtain an image of the scene in question, in this case, a bank note such as illustrated in Figure 2(a) on which a fingerprint has been left. This main image is then stored in memory 4.
The imager 2 is then used to obtain a control image of a different and clean bank note such as illustrated in Figure 2(a) and this is stored in the memory 6.
As will become apparent from the following, accurate image alignment of the two bank notes is not essential during imaging. Indeed, the two images may be obtained using different equipment and/or on different occasions.
Having obtained the two images, a feature detector 8 is used to analyse each image and to extract a number, for instance several hundred, image features suitable for matching the images. These features are illustrated in Figure 2(b).
A preferred feature detector is the Harris feature detector, as described in "A combined corner and edge detector", Proc. 4th Alvey Vision Conference, Manchester, pages 147-151, 1988. The Harris feature detector finds feature points at which the local autocorrelation has a strong minimum, i.e. pixels near the point correlate poorly with those a small distance away in any direction.
Such points are generally present and in the same place in several images of a scene. This typically generates over a thousand feature points, but only the top few hundred strongest features are selected for use.
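As an informal sketch of the Harris response (a minimal version of the published detector, not the implementation used in the embodiment), the structure tensor of the image gradients can be computed and scored in NumPy; the box smoothing, window radius and constant k = 0.04 are conventional but are our own choices here:

```python
import numpy as np

def box_smooth(a, r=1):
    """Crude box filter via padded, shifted sums (window size 2r+1)."""
    p = np.pad(a, r, mode="edge")
    out = np.zeros(a.shape, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (2 * r + 1) ** 2

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is
    the locally smoothed structure tensor of the image gradients."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = box_smooth(Ix * Ix)
    Syy = box_smooth(Iy * Iy)
    Sxy = box_smooth(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2

def strongest_features(img, n=4):
    """Return the (row, col) positions of the n highest responses."""
    R = harris_response(img)
    idx = np.argsort(R.ravel())[::-1][:n]
    return np.column_stack(np.unravel_index(idx, R.shape))
```

On a synthetic white square the strongest responses cluster at the four corners, which is exactly the "strong minimum of the local autocorrelation" behaviour described above.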
In the preferred embodiments, features are detected from the combined grey-level image.
Edge features may be obtained using a Canny edge detector. This looks for sharp unidirectional gradient changes known as edges and then links these edges together into chains. Similar edge chains are typically observed in both images and they may be matched and used to refine the registration further.
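The Canny detector itself adds Gaussian smoothing, non-maximum suppression and hysteresis edge linking; the sketch below shows only its first stage, a Sobel gradient-magnitude map with a relative threshold. The function name and threshold are illustrative, not taken from the patent:

```python
import numpy as np

def sobel_edges(img, thresh=0.5):
    """Gradient-magnitude edge map: the first stage of a Canny-style
    detector (Canny additionally suppresses non-maxima and links the
    surviving pixels into chains)."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = kx.T
    p = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            win = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    mag = np.hypot(gx, gy)
    # keep pixels whose gradient magnitude is a large fraction of the max
    return mag > thresh * mag.max()
```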
The feature detector 8 may also apply techniques extracting curved segments to facilitate matching. This is particularly useful in facilitating the matching of a text layer.
The results of the feature detection are passed to a geometric transformation calculator 10. This analyses the detected image features to find a geometric transformation which applies over the whole area of the images and which maps a large fraction of the features in one image to corresponding features in the other image.
Corresponding mapped features are illustrated in Figure 2(c).
The geometric alignment is preferably carried out using a random sampling and consensus (RANSAC) robust estimation algorithm, as discussed in "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography", Comm. Assoc. Comp. Mach., 24(6):381-395, 1981. This is applied to the detected features to find a geometric transformation which applies over the whole image. The RANSAC algorithm is a "hypothesize and verify" procedure for fitting a mathematical model to a set of data which may contain many spurious points (outliers). A simple example would be in fitting a line to a set of points.
In this case, two points are randomly selected to hypothesize a line and the number of points lying close to the line provides a measure of support (verifies that line hypothesis). This process is repeated many times to find the line with the most support.
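The line-fitting example just given translates directly into code. This is a generic RANSAC sketch with our own parameter choices (iteration count, inlier threshold); the patent's estimator fits a homography rather than a line, so the model here is purely illustrative:

```python
import numpy as np

def ransac_line(pts, n_iters=200, thresh=0.2, rng=None):
    """Fit y = m*x + c by RANSAC: hypothesize a line from 2 randomly
    selected points, count the points lying within `thresh` of it
    (the support), and keep the hypothesis with the most support."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if abs(x2 - x1) < 1e-12:
            continue  # skip vertical hypotheses in this slope form
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # perpendicular distance from every point to the hypothesized line
        d = np.abs(m * pts[:, 0] - pts[:, 1] + c) / np.hypot(m, 1.0)
        inliers = d < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refine by least squares over the consensus set
    m, c = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return m, c, best_inliers
```

The final least-squares refinement over the consensus set mirrors the iterative refinement of the transformation estimate described below in the specification.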
The required transformation may be described as planar homography and may make use of as few as 3 or as many as 8 degrees of freedom depending on the method of capture of the source images.
The number of degrees of freedom used depends on the severity of the transformation required and the method of acquiring the images. Generally, for scanned fingerprint images, a Euclidean transformation may be sufficient, representing translation and rotation, i.e. only 3 degrees of freedom. However, if the images are acquired by cameras with different positions and orientations, then a full 8 degrees of freedom homography is required.
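The difference between the 3-degree and 8-degree cases is simply which entries of the 3x3 matrix are free. A hedged sketch (the matrix values are chosen purely for illustration):

```python
import numpy as np

def euclidean_h(theta, tx, ty):
    """3-DOF case: rotation by theta plus translation (tx, ty),
    adequate for flat-bed scanned images."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

# 8-DOF case: all nine entries free up to overall scale; the bottom-row
# perspective terms model viewing a plane from a different camera position.
H8 = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.001, 0.0, 1.0]])

def apply_h(H, pts):
    """Apply a 3x3 homography to Nx2 points via homogeneous coordinates."""
    ph = np.hstack([pts, np.ones((len(pts), 1))])
    q = ph @ H.T
    return q[:, :2] / q[:, 2:3]
```

A Euclidean matrix maps (1, 0) under a 90-degree rotation plus unit x-translation to (1, 1); the projective matrix divides by a point-dependent scale, which is what allows images from different camera positions to be registered.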
Preferably, the calculator includes means for iteratively refining the estimate by using a non-linear optimiser to improve its accuracy by computing the maximum likelihood estimate of the transformation.
The iterative non-linear optimiser is a numerical procedure for minimising a cost function using gradient descent. Examples of appropriate techniques are discussed in "Automated mosaicing with super-resolution zoom", Proceedings of the Conference on Computer Vision and Pattern Recognition, Santa Barbara, 1998 and "Robust computation and parameterization of multiple view relations", Proc. 6th International Conference on Computer Vision, Bombay, pages 727-732, January 1998.
Having calculated the appropriate geometric transformation, a geometric transformer 12 processes the image stored in memory 6 and applies the appropriate transformation so as to obtain global alignment or registration. In this way, the control image of the bank note without the fingerprint is warped so that it is accurately registered with the main image including the fingerprint. The resulting transformed control image is stored in memory 14 and is illustrated in Figure 2(d).
In some instances, particularly where only monochrome images are being considered, it would now be possible to compare the transformed control image with the original main image. However, the illustrated embodiment includes additional processing for allowing photometric registration also.
As illustrated, a photometric detector 16 compares the original main image stored in memory 4 with the geometrically transformed control image stored in memory 14 and calculates any gamma and/or chroma differences in order to calculate a global mapping between the colour values in one image and those in the other.
The appropriate transformation data is sent to a photometric registration device 18 which then applies an appropriate global gamma and/or chroma correction to the colour values of the image in memory 14. In particular, the colour global mapping is a brightness and contrast adjustment for each colour channel. The photometric correction may also be spatially varying.
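The per-channel brightness-and-contrast mapping described above amounts to fitting a gain and a bias by least squares. A minimal sketch, assuming a simple global linear model per colour channel (the function names are ours, and the spatially varying variant mentioned above is not shown):

```python
import numpy as np

def photometric_fit(control, main):
    """Least-squares gain/bias (contrast/brightness): find a, b
    minimising ||a * control + b - main|| over all pixels."""
    a, b = np.polyfit(control.ravel(), main.ravel(), 1)
    return a, b

def photometric_correct(control, main):
    """Fit and apply the gain/bias independently per colour channel,
    mapping the control image's colour values onto the main image's."""
    out = np.empty(main.shape, dtype=float)
    for ch in range(main.shape[-1]):
        a, b = photometric_fit(control[..., ch], main[..., ch])
        out[..., ch] = a * control[..., ch] + b
    return out
```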
The resulting photometrically and geometrically transformed control image is then stored in memory 20 and is illustrated in Figure 2(e).
Finally, a comparator 22 compares the transformed image in memory 20 with the original main image in memory 4 so as to find the differences between the two images. In particular, statistical comparison of the images on a pixel-to-pixel basis then determines which pixels are part of the background image and which are likely to be part of the required latent image, i.e. the fingerprint. In this way, an image of the fingerprint may be output. The resulting image is illustrated in Figure 2(f).
The output image may be supplied to a printer or display. It may also be passed on to other processing equipment for comparing the fingerprint with a computerised database of such fingerprints.
As well as the geometric and photometric transformations, it is also possible to include additional registration refinement and statistical comparison. In particular, residual inaccuracies in registration, for instance due to scanner non-linearities, paper creases, etc, may be removed by fitting a deformable mesh to the control image.
The embodiment described above may be varied in a number of ways.
For the embodiment described above, the geometric transformation is carried out on the basis of a combined grey-level image. In contrast, it is possible to carry out geometric registration on the basis of only one colour channel. Similarly, it would be possible to separate the image into its separate colour components, for instance the various RGB colour channels, and to provide separate geometric transformations for each colour channel. This might be appropriate for imaging systems where the three colour channels are not coincident. In most cases, only a single transformation is required, since the various colour channels are already very accurately aligned and registration may be performed using features (points and edges) obtained from the combined grey-level image.
Although the embodiment described above applies the transformations to the control image, it is also possible to apply opposite transformations to the main image. However, in some instances, this may be less desirable, since it will result in a transformation of the image of the distinguished part, eg. the fingerprint.
The system described above can be applied to a wide variety of situations where it is desired to distinguish a particular part of a scene.
Clearly, the invention can also be applied to other scenes, such as other printed products or such images as may be scanned by a scanner. In these cases, the technique may be used to distinguish not only fingerprints, but other markings or deletions to the original scene.
The invention may also be applied to distinguish changes in more general scenes. For instance, by using higher levels of geometric transformation, it is possible to register two photographic images of a scene and to determine changes in that scene. One particularly useful
application is where photographs are taken at the scene of a crime and, after that scene has been cleared up, cleaned, etc, further photographs are taken. By using the techniques of the present invention, it is possible to distinguish images of footprints and the like by comparing the photographs. Similar results can be obtained from satellite and aerial images of scenes so as to determine changes in urbanisation, forestation, etc.
Using the geometric transformation techniques proposed by the present invention, changes may be readily distinguished without the need to know the location from which images were obtained.
The imager 2 is not limited to being a scanner.
Where the present invention is used to identify other latent marks, such as footprints, the imager 2 may be a scanner in conjunction with photographic equipment or may include a digital video or stills camera.

Claims (16)

1. An apparatus for distinguishing part of a scene comprising: a comparator for comparing a main image of the scene including said part and a control image of the scene without said part, so as to calculate a transformation to bring the main and control images into registration; a transformer for applying the transformation; and means for comparing the images after the transformation so as to obtain an image of said part.
2. An apparatus according to claim 1 wherein the comparator includes a geometric transformation calculator for calculating a global geometric transformation for mapping one of the main and control images onto the other of the main and control images.
3. An apparatus according to claim 1 wherein the comparator includes a feature detector for extracting image features from the main and control images suitable for matching the images.
4. An apparatus according to claim 3 wherein the feature detector is a Harris feature detector.
5. An apparatus according to claim 3 or 4 wherein the comparator includes a geometric transformation calculator for calculating a global geometric transformation which maps a large fraction of the image features of one of the main and control images onto the image features of the other of the main and control images.
6. An apparatus according to claim 2 or 5 wherein the geometric transformation calculator includes means for iteratively refining the geometric transformation.
7. An apparatus according to claims 2, 5 or 6 wherein the geometric transformation calculator applies a RANSAC robust estimation algorithm.
8. An apparatus according to any one of claims 2 and 5 to 7 wherein the geometric transformation calculator calculates a transformation on the basis of 3 to 8 degrees of freedom homography.
9. An apparatus according to any preceding claim wherein the comparator includes a photometric transformation calculator for calculating a photometric transformation which photometrically maps one of the main and control images onto the other of the main and control images; and the transformer comprises means for applying the photometric transformation.
10. A method of distinguishing part of a scene comprising:
obtaining a main image of the scene including said part; obtaining a control image of the scene without said part; calculating a transformation to bring the main and control images into registration; applying the transformation; and comparing the images after the transformation so as to obtain an image of said part.
11. An apparatus according to any one of claims 1 to 9, or a method according to claim 10, wherein the scene comprises a 2-dimensional image.
12. An apparatus or a method according to claim 11 wherein said part comprises a fingerprint on the 2-dimensional image.
13. An apparatus or a method according to claim 12 wherein the 2-dimensional image comprises a bank note and the control image is taken from another bank note without the fingerprint.
14. An apparatus according to any one of claims 1 to 9 or a method according to claim 10 wherein the scene comprises a substantially planar region of a 3-dimensional scene.
15. An apparatus constructed and arranged substantially as hereinbefore described with reference to and as illustrated by the accompanying drawings.
16. A method of distinguishing part of a scene substantially as hereinbefore described with reference to and as illustrated by the accompanying drawings.
GB9821377A 1998-10-01 1998-10-01 Method and apparatus for distinguishing part of a scene Expired - Fee Related GB2342207B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB9821377A GB2342207B (en) 1998-10-01 1998-10-01 Method and apparatus for distinguishing part of a scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB9821377A GB2342207B (en) 1998-10-01 1998-10-01 Method and apparatus for distinguishing part of a scene

Publications (3)

Publication Number Publication Date
GB9821377D0 GB9821377D0 (en) 1998-11-25
GB2342207A true GB2342207A (en) 2000-04-05
GB2342207B GB2342207B (en) 2003-08-06

Family

ID=10839805

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9821377A Expired - Fee Related GB2342207B (en) 1998-10-01 1998-10-01 Method and apparatus for distinguishing part of a scene

Country Status (1)

Country Link
GB (1) GB2342207B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5694494A (en) * 1993-04-12 1997-12-02 Ricoh Company, Ltd. Electronic retrieval of information from form documents




Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20071001