WO2006040576A1: A process to improve the quality of the skeletonisation of a fingerprint image (Google Patents)

Publication number: WO2006040576A1 (PCT/GB2005/003968)
Classifications

 G—PHYSICS
 G06—COMPUTING; CALCULATING; COUNTING
 G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
 G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
 G06K9/00006—Acquiring or recognising fingerprints or palmprints
 G06K9/00067—Preprocessing; Feature extraction (minutiae)
Description
IMPROVEMENTS IN AND RELATING TO IDENTIFIER INVESTIGATION
This invention concerns improvements in and relating to identifier investigation, particularly, but not exclusively, in relation to the comparison of biometric identifiers or markers, such as prints from a known source, with biometric identifiers or markers, such as prints from an unknown source. The invention is applicable to fingerprints, palm prints and a wide variety of other prints or marks, including retina images.
It is useful to be able to capture, process and compare identifiers with a view to obtaining useful information as a result. In the context of fingerprints, the useful result may be evidence to support a person having been at a crime scene.
Problems exist with present methods in terms of their accuracy and speed.
The present invention has amongst its potential aims to process a representation of an identifier so as to produce a processed representation which more accurately represents the identifier. The potential aims may include a faster process for the representation of an identifier.
According to a first aspect of the present invention we provide a method of processing a representation of an identifier, the method including: obtaining a representation of an identifier, the representation including one or more components; defining a part of the representation of the identifier as being within a neighborhood by reference to a boundary for that neighborhood; determining any ends for the components which ends fall within the boundary to the neighborhood; determining any limits for the components, a limit being an element of the component which coincides with the boundary to the neighborhood and/or being an element of the component which forms the junction with one or more of the components; generating a processed representation of the identifier; wherein the form of the components present in the processed representation is determined according to the interrelationship between the ends and/or limits of components.
The first aspect of the present invention may include features, options or possibilities set out elsewhere in this application, including in the other aspects of the invention. In particular, the first aspect may include the following.
The representation of the identifier may have been captured. The representation may be captured from a crime scene and/or an item and/or a location and/or a person. The representation may have been captured by scanning and/or photography.
The method may process an already processed representation of an identifier. The already processed representation may have been processed to convert a colour and/or shaded representation into a black and white representation. The already processed representation may have been processed using Gabor filters.
The method may process a representation of an identifier which has been altered in format. The alteration in format may involve converting the representation into a skeletonised format. The alteration in format may involve converting the representation into a format in which the representation is formed of components, preferably linked data element sets. The alteration may convert the representation into a representation formed of single pixel wide lines.
The method of the first aspect may provide processing which cleans the representation, particularly when provided according to the second aspect of the present invention and/or its options, possibilities and features.
The method of the first aspect may provide processing which heals the representation, particularly when provided according to the third aspect of the present invention and/or its options, possibilities and features.
The method may provide for cleaning followed by healing.
The identifier may be a biometric identifier or other form of marking. The identifier may be a fingerprint, palm print, ear print, retina image or a part of any of these. The representation of the identifier may be obtained direct or after processing of the type provided above.
The representation may be formed of a plurality of components, particularly in the form of linked data element sets.
One or more of the components may be in the form of linked data elements, for instance to form linked data element sets. One or more of the components may be formed of a plurality of data elements which are connected to one another. A linked data element set may be formed of a plurality of data elements which are connected to one another. A plurality of the data elements in a linked data element set may be connected to two adjoining data elements. One or two data elements in a linked data element set may be connected to only one other data element, for instance the data element defining a ridge end in a fingerprint. One or two of the data elements in a linked data element set may be connected to three data elements, for instance the data element defining a bifurcation in a fingerprint.
The part of the representation being within the neighborhood is preferably less than the whole of the representation. The part may be less than 10% of the whole, preferably less than 5% of the whole, more preferably less than 1% of the whole and ideally less than 0.1% of the whole.
The neighborhood may have a shape defined by the boundary. The shape may be circular or square or rectilinear. The neighborhood may have a predetermined area and/or shape and/or size. The neighborhood area and/or shape and/or size may be varied between parts of the representation and/or between different processings of the representation.
An end may be a part of a component within the boundary of the neighborhood and/or a part of the component only connected to another part of the component in a single place. An end may be a representation of a ridge end and/or apparent ridge end which falls within the boundary of the neighborhood. A neighborhood may contain no or one or more ends. An end may be in the form of an end element. An end data element may be one within the boundary of the neighborhood and/or only connected to one other data element. An end data element may be one representing a ridge end and/or apparent ridge end which falls within the boundary of the neighborhood. A neighborhood may contain no or one or more end data elements.
A limit may be the part of a component which crosses the boundary. The limit may be connected to a part of the component on the inside of the boundary and a part of the component on the outside of the boundary. The limit may be connected to one or more other parts of the component across the boundary and/or outside the neighborhood. One or more other parts of the component across the boundary and/or outside the neighborhood may not be considered part of the component. They may be considered part of a component in respect of the processing of another part of the representation. A limit may be one point on a representation of a continuous ridge. The boundary may coincide with no or one or more such limits. A limit may be in the form of a limiting data element. A limiting data element may be the data element of a linked data element set which crosses the boundary. The limiting data element may be connected to a data element on the inside of the boundary and a data element on the outside of the boundary. The limiting data element may be connected to one or more other data elements across the boundary and/or outside the neighborhood. One or more other data elements across the boundary and/or outside the neighborhood may not be considered part of the linked data element set. They may be considered part of a linked data element set in respect of the processing of another part of the representation. A limiting data element may be one point on a representation of a continuous ridge. The boundary may coincide with no or one or more such limiting data elements.
A limit may be a part of a component which meets another part of another component. The limit may be connected to three other parts of components. Preferably one of the three parts is a part of the component and the other parts are parts of other components. A limit may represent a bifurcation or apparent bifurcation. A neighborhood may contain no or one or more limits of this nature. A limit may be in the form of a limiting data element. A limiting data element may be a data element of a linked data element set which meets another data element of another linked data element set. The limiting data element may be connected to three other data elements. Preferably one of the three data elements is in the linked data element set and the other data elements are in another data element set or other data element sets. A limiting data element may represent a bifurcation or apparent bifurcation. A neighborhood may contain no or one or more limiting data elements of this nature.
The processed representation of the identifier may contain components in a form not present in the representation before processing, for instance due to healing. The processed representation of the identifier may contain components which include further parts not present in the representation before processing, for instance due to healing. Preferably any new parts are part of one or more components. The processed representation of the identifier may not contain parts and/or components present in the representation before processing, for instance due to cleaning. The processed representation of the identifier may contain data elements not present in the representation before processing, for instance due to healing. The processed representation of the identifier may contain linked data element sets which include further data elements not present in the representation before processing, for instance due to healing. Preferably any new data elements are part of one or more linked data element sets. The processed representation of the identifier may not contain data elements and/or linked data element sets present in the representation before processing, for instance due to cleaning.
Consideration of one or more interrelationships between the ends and/or limits of components may be provided. The interrelationships may be between parts of the same components and/or may be between parts of different components.
The interrelationship may be that, where a component has two ends within the boundary, that component is omitted from the processed representation. The interrelationship may be that, where a component has, within the boundary, an end and a limit which forms the junction with one or more of the other components, that component is omitted from the processed representation. The interrelationship may be that, where a component has an end within the boundary and a limit which coincides with the boundary to the neighborhood, that component is present in the processed representation. The interrelationship may be that, where a component has two limits which coincide with the boundary to the neighborhood, that component is present in the processed representation. Two of these interrelationships, preferably three of them and ideally all four of them, may be applied in the method, particularly for the purposes of cleaning the representation.
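The four interrelationships above can be sketched as a simple classification routine. This is a minimal illustration only; the `Component` structure and function names are assumptions for the sketch, not the applicant's implementation:

```python
from dataclasses import dataclass

@dataclass
class Component:
    ends: int             # ends falling inside the neighbourhood boundary
    junction_limits: int  # limits forming a junction with other components
    boundary_limits: int  # limits coinciding with the neighbourhood boundary

def keep_in_processed_representation(c: Component) -> bool:
    """Apply the four cleaning interrelationships described above."""
    if c.ends == 2:
        return False  # two ends within the boundary: omit
    if c.ends == 1 and c.junction_limits >= 1:
        return False  # an end plus a junction limit: omit
    if c.ends == 1 and c.boundary_limits >= 1:
        return True   # an end plus a boundary-coinciding limit: keep
    if c.boundary_limits == 2:
        return True   # two boundary-coinciding limits: keep
    return True       # anything else is retained by default
```

In this sketch the two "omit" rules capture short spurs and islands wholly inside the neighbourhood, while the "keep" rules preserve ridges that continue beyond the boundary.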
The determination may involve, for components having an end, generating a line extending between the end and the end or limit forming the other extent of the component. In such a case, one or more of the interrelationships set out in the next paragraph may be applied. Preferably both of the interrelationships are applied. The interrelationship or interrelationships may be applied together with one or more of the interrelationships provided in the previous paragraph.
The interrelationship may be that, where the direction of the generated line for the first component and the direction of the generated line for the second component match within limits, the processed representation includes the end of the first component being joined to the end of the second component. The interrelationship may be that, where the direction of the generated line for the first component and the direction of the generated line for the second component do not match within limits, the processed representation includes the end of the first component not being joined to the end of the second component.
Preferably the processed representation is formed of a series of parts to which the method has been applied in turn. A series of adjoining or overlapping neighborhoods may be used to process a series of parts of the representation. At least 50% of the representation may be so processed. Preferably at least 75% of the representation is so processed and ideally all of the representation is so processed. The neighborhood used to process one part of the representation may be of the same area and/or shape and/or size as the neighborhood used to process another part of the representation.
The processing of one neighborhood may result in one or more parts and/or data elements and/or components and/or linked data element sets being retained, which are not present in the eventual processed representation because of the processing of one or more other neighborhoods.
The processed representation may be subjected to one or more further steps. The one or more further steps may include the extraction of data from the processed representation, particularly as set out in detail in applicant's UK patent application no 0502990.5. One or more further steps in which the processed representation is placed in a form for comparison may be provided. The form for comparison may particularly be that set out in detail in applicant's UK patent application number 0502902.0 of 11 February 2005 and/or UK patent application number 0422786.46 of 14 October 2004. The form for comparison may allow the representation to be compared with one or more other representations. The one or more other representations may have been processed according to the present invention. The method of comparison may particularly be that set out in applicant's UK patent application number 0502900.4 filed 11 February 2005 and/or UK patent application number 0422784.9 filed 14 October 2004. The comparison may provide an indication of the likelihood of the representation and other representation coming from the same source.
According to a second aspect of the invention we provide a method of processing a representation of an identifier, the method including: obtaining a representation of an identifier, the representation including one or more linked data element sets; defining a part of the representation of the identifier as being within a neighborhood by reference to a boundary for that neighborhood; determining any end data elements for the linked data element sets which fall within the boundary to the neighborhood; determining any limiting data elements for the linked data element sets, a limiting data element being one which coincides with the boundary to the neighborhood and/or being one which forms the junction with one or more of the other linked data element sets; generating a processed representation of the identifier; wherein a linked data element set having two end data elements is omitted from the processed representation and/or a linked data element set having an end data element and a limiting data element which forms the junction with one or more of the other linked data element sets is omitted from the processed representation and/or a linked data element set having an end data element and a limiting data element which coincides with the boundary to the neighborhood is present in the processed representation and/or a linked data element set having two limiting data elements which coincide with the boundary to the neighborhood is present in the processed representation.
The second aspect of the present invention may include features, options or possibilities set out elsewhere in this application, including in the other aspects of the invention.
According to a third aspect of the invention we provide a method of processing a representation of an identifier, the method including: obtaining a representation of an identifier, the representation including one or more linked data element sets; defining a part of the representation of the identifier as being within a neighborhood by reference to a boundary for that neighborhood; determining any end data elements for the linked data element sets which fall within the boundary to the neighborhood; determining any limiting data elements for the linked data element sets, a limiting data element being one which coincides with the boundary to the neighborhood and/or being one which forms the junction with one or more of the other linked data element sets; for linked data element sets having an end data element, generating a line extending between the end data element and the end or limiting data element forming the other extent of the linked data element set; generating a processed representation of the identifier; wherein, in the processed representation, the end data element of a first linked data element set is joined to the end data element of a second linked data element set if the direction of the generated line for the first linked data element set and the direction of the generated line for the second linked data element set match within limits and/or the end data element of a first linked data element set is not joined to the end data element of a second linked data element set if the direction of the generated line for the first linked data element set and the direction of the generated line for the second linked data element set do not match within limits.
The third aspect of the present invention may include features, options or possibilities set out elsewhere in this application, including in the other aspects of the invention.
The limits may be expressed in terms of an angle. The limits may be constant between neighborhoods and/or different processings of the representation.
The end data element of a first linked data element set may be joined to the end data element of a second linked data element set only if the first linked data element set and second linked data element set are within a certain distance range of one another. The end data element of a first linked data element set may not be joined to the end data element of a second linked data element set if the distance between the first linked data element set and the second linked data element set is above a certain distance.
The method of the third aspect of the invention may be applied to the representation after it has had the method of the second aspect of the invention applied to it.
The methods of the first and/or second and/or third aspects of the invention may be computer implemented methods.
Various embodiments of the invention will now be described, by way of example only, and with reference to the accompanying figures in which:
Figure 1 is a schematic overview of the stages, and within them steps, involved in the comparison of a print from an unknown source with a print from a known source;
Figure 2a is a schematic illustration of a part of a basic skeletonised print;
Figure 2b is a schematic illustration of the print of Figure 2a after cleaning and healing;
Figure 3 is a schematic illustration of the generation of representation data for the print of Figure 2b;
Figure 4 is a schematic illustration of a part of a print potentially requiring cleaning;
Figure 5 is a schematic illustration of the neighborhood approach to cleaning according to the present invention;
Figure 6 is a schematic illustration of a part of a print potentially requiring healing;
Figure 7 is a schematic illustration of the neighborhood approach to direction determination, particularly useful in healing;
Figure 8 is a schematic illustration of the application of a triangle to part of a print as part of the data extraction;
Figure 9 is a schematic illustration of the application of a series of triangles to part of a print according to a further approach to the data extraction;
Figure 10 is a schematic illustration of Delaunay triangulation applied to the same part of a print as considered in Figure 9;
Figure 11 is a representation of a probability distribution for variation in prints from the same finger and a probability distribution for variation in prints between different fingers;
Figure 12 shows the distributions of Figure 11 in use to provide a likelihood ratio for a match between known and unknown prints;
Figure 13a illustrates minutia and direction information from a mark and a suspect;
Figure 13b illustrates the presentation of the direction information in a format for comparison;
Figure 13c illustrates the information of Figure 13b being compared; and
Figure 14 is a Bayesian network representation.
Background
A variety of situations call for the comparison of markers, including biometric markers. Such situations include a fingerprint, palm print or other such marking, whose source is known, being compared with a fingerprint, palm print or other such marking, whose source is unknown. Improvements in this process to increase speed and/or reliability of operation are desirable.
In the context of forensic science in particular, the consideration of the unknown source fingerprint may require the consideration of a partial print or print produced in less than ideal conditions. The pressure applied when making the mark, substrate and subsequent recovery process can all impact upon the amount and clarity of information available.
Process overview
The overall process of the comparison is represented schematically in Figure 1.
After the recovery of the fingerprint and its representation, which may be achieved in one or more of the conventional manners, a representation of the fingerprint is captured. This may be achieved by the consideration of a photograph or other representation of a fingerprint which has been recovered. In the next stage, the representation is enhanced. The representation is processed to represent it as a purely black and white representation. Thus any colour or shading is removed. This makes subsequent steps easier to operate. The preferred approach is to use Gabor filters for this purpose, but other possibilities exist.
Following on from this part of the stage, the enhanced representation is converted into a format more readily processed. This skeletonisation includes a number of steps. The basic skeletonisation is readily achieved, for instance using a function within the Matlab software (available from The MathWorks Inc). A section of the basic skeleton achieved in this way is illustrated in Figure 2a. The problem with this basic skeleton is that the ridges 20 often feature relatively short side ridges 22, "hairs", which complicate the pattern and are not a true representation of the fingerprint. Breaks 24 and other features may also be present which are not a true representation of the fingerprint. To counter these issues, the basic skeleton is subjected to a cleaning step and a healing step as part of the skeletonisation. The operation of these steps is described in more detail below; together they give a clean, healed representation, Figure 2b.
Once the enhanced representation of the recovered fingerprint has been processed to give a clean and healed representation, the data from it to be compared with the other print can be considered. Doing this involves first the extraction of representation data which accurately reflects the configuration of the fingerprint present, but which is suitable for use in the comparison process. The extraction of representation data stage is explained in more detail below, but basically involves the use of one of a number of possible techniques.
The first of the possible techniques, see Figure 3, involves defining the position of features 30 (such as ridge ends 32 or bifurcation points 34), forming an array of triangles 36 with the features 30 defining the apex of those triangles 36 and using this and other representation data in the comparison stage.
In a second technique, developed by the applicant, the positions of features are defined and the positions of a group of these are considered to define a center. The center defines one apex of the triangles, with adjoining features defining the other apexes.
To facilitate the comparison stage, the representation data extracted is formatted before it is used in the comparison stage. This basically involves presenting the information characteristic of the triangles, quadrilaterals or other polygons being considered when the data is extracted in a format mathematically coded for use in the comparison stage. Further details of the format are described below.
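As one hedged illustration of "information characteristic of the triangles" coded in a comparison-friendly form, the sorted side lengths of a minutiae triangle give a rotation- and translation-invariant signature. The function name and the choice of characteristics are assumptions for the sketch; the patent does not specify the exact mathematical coding used:

```python
import math

def triangle_signature(p1, p2, p3):
    """Rotation- and translation-invariant characteristics of a minutiae
    triangle: its three side lengths, sorted.  Illustrative only; the
    applicant's actual coding may use different characteristics."""
    sides = sorted((math.dist(p1, p2), math.dist(p2, p3), math.dist(p3, p1)))
    return tuple(round(s, 6) for s in sides)
```

Because the signature depends only on distances between the three minutiae, two prints of the same finger captured at different positions or orientations yield matching signatures for corresponding triangles.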
Now that the fingerprint has been expressed as representation data, it can be compared with the other fingerprint(s). The comparison stage is based on comparing representation data of a different kind from that previously suggested. Additionally, in making the comparison, the technique goes further than indicating that the known and unknown source prints came from the same source or that they did not. Instead, an expression of the likelihood that they came from the same source is generated. In the preferred forms, one or both of two different models (a data driven approach and a model driven approach), both described in more detail below, are used.
Having provided an overview of the entire process, the stages and the steps within them will now be discussed in more detail.
Cleaning and healing steps of the skeletonisation stage
Some existing attempts at interpreting the basic skeleton to give an improved version have been made.
In the situation illustrated in Figure 4, the basic skeleton suggests that a ridge island 40 is present, as well as a short ridge 41 which as a result gives a bifurcation point 43 and ridge end 44.
The existing interpretation considers the length of the ridge island 40. If the length is equal to or greater than a predetermined length value then it is deemed a true ridge island and is left. If the length is less than the predetermined length then the ridge island is discarded. In a similar manner, the length from the bifurcation point 43 to the ridge end 44 is considered. Again, if it is equal to or greater than the predetermined length it is kept as a ridge with its attendant features. If it is shorter than the predetermined length it is discarded. This approach is slow in terms of its processing, as the length in all cases is measured by starting at the feature and then advancing pixel by pixel until the end is reached. The speed is a major issue as there are a lot of such features that need to be considered within a print.
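The existing length-threshold interpretation reduces to a simple rule once the pixel-by-pixel length has been traced. A sketch only; the threshold value is a placeholder, not taken from the source:

```python
def keep_ridge_segment(length_in_pixels, predetermined_length=8):
    """The existing interpretation: a ridge island or short ridge is kept
    only if its traced length meets a predetermined threshold.  The
    default threshold here is illustrative, not from the patent."""
    return length_in_pixels >= predetermined_length
```

The cost in the existing method lies not in this comparison but in obtaining `length_in_pixels`, which requires walking the skeleton pixel by pixel for every candidate feature.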
The new approach now described has amongst its aims to provide a reliable, faster means for handling such a situation. Instead of advancing pixel by pixel, the new approach illustrated in Figure 5 considers the print in a series of sections or neighborhoods. Thus a neighborhood definition, box 50, is applied to part of the print. Features within that neighborhood 50 are then quickly established by considering any pixel which is only connected to one other. This points to features 51 and 52 which represent ridge ends within the neighborhood 50. The start point for the data set forming a feature is then determined relative to the neighborhood 50. In the case of feature 51 this is the bifurcation feature 53. In the case of feature 52 this is the neighborhood boundary crossing 54. Thus feature 51 is part of data set A extending between feature 53 and feature 51. Feature 52 is part of a separate data set, data set B, extending between crossing 54 and feature 52. All data sets formed by a feature at both ends, with both features being within the neighborhood 50, are discarded as being too short to be true features. All data sets formed by a feature at one end and a crossing at the other are kept as far as the cleaning of that neighborhood is concerned. Thus feature 51 and its attendant data set are discarded (including the bifurcation feature 53) and feature 52 is kept by this cleaning for this neighborhood 50.
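The quick establishment of features within a neighborhood, by considering any pixel connected to only one other, can be sketched on a small binary grid. This is a pure-Python illustration with hypothetical names, assuming 8-connectivity for ridge pixels:

```python
def ridge_neighbours(grid, r, c):
    """Return the 8-connected ridge pixels adjacent to (r, c) in a
    binary grid (1 = ridge, 0 = background)."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) == (0, 0):
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < len(grid) and 0 <= cc < len(grid[0]) and grid[rr][cc]:
                out.append((rr, cc))
    return out

def is_ridge_end(grid, r, c):
    """A ridge-end feature: a ridge pixel connected to exactly one
    other ridge pixel, as described for features 51 and 52 above."""
    return bool(grid[r][c]) and len(ridge_neighbours(grid, r, c)) == 1
```

Checking each pixel's neighbour count is a local test, so all ridge ends in a neighbourhood can be found in a single pass rather than by tracing along ridges.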
When further neighborhoods are considered, it may of course be that the feature 52 is itself part of a data set with the features both within that neighborhood, whereupon it too will be discarded. If, however, it is the end of a ridge of significant length then for all neighborhoods considered its data set will start with the feature and end with a crossing and so be kept.
This approach can be used to address all ridge ends and attendant bifurcation features within the print to be cleaned.
As well as addressing "extra" data by cleaning, the present invention also addresses the type of situation illustrated in Figure 6 where the basic skeleton shows a first ridge end 60 and a second 61, generally opposing one another, but with a gap 62 between them. Is this a single ridge which needs healing by adding data to join the two ends together? Or is this truly two ridge ends?
Not only is it desirable to address this type of situation, but it also must be done in a way which does not detract from the accuracy of the subsequent process, and in particular the generation of the representative data which follows. This is particularly important in the case where the "direction" is a part of the representative data generated, as proposed for the embodiment of the invention detailed below.
To ensure that the "direction" information is not impaired it must be accurately determined and maintained. The pixel by pixel approach of the type used above for cleaning suggests taking a feature and then moving pixel by pixel away from it for a given length. A projected line between the feature and the pixel the right length away then gives the angle. Again, the pixel by pixel approach is laborious and time consuming.
The approach of the present invention is illustrated in Figure 7 and is again based on the neighborhood approach. A neighborhood 70 is defined relative to a part of the print. In this case, the part of the print includes a ridge end 71 and bifurcation 72. Also present are points where the ridges cross the boundaries of the neighborhood, crossings 73, 74, 75, 76. Again the crossings and features define a series of data sets. In this case, ridge end 71 and crossing 73 define data set W; bifurcation 72 and crossing 74 define data set X; bifurcation 72 and crossing 75 define data set Y; and bifurcation 72 and crossing 76 define data set Z.
The direction of data set W is defined by a line drawn between ridge end 71 and crossing 73. A similar determination can be made for the direction of the other data sets.
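This direction determination is a single straight-line calculation between the two extents of a data set, rather than a pixel-by-pixel walk. A minimal sketch, assuming a conventional (x, y) coordinate frame:

```python
import math

def data_set_direction(start, end):
    """Direction of a data set, taken as the angle in degrees of the
    straight line from its feature (e.g. ridge end 71) to its other
    extent (e.g. crossing 73).  Coordinate convention is assumed."""
    (x0, y0), (x1, y1) = start, end
    return math.degrees(math.atan2(y1 - y0, x1 - x0))
```

Only the two endpoint coordinates are needed, so the cost is constant per data set regardless of ridge length within the neighbourhood.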
Once the directions for data sets have been obtained, the type of situation shown in Figure 6 is addressed by considering the direction of the ridge ending in first ridge end 60 and the direction of the ridge ending in second ridge end 61. If the two directions are the same, within the bounds of a limited range, and the separation is small (for instance, the gap falls within the neighborhood) then the gap is healed and the two ridge ends 60, 61 disappear as features as far as further consideration is required. If the separation is too large and/or if the directions do not match, then no healing occurs and the ridge ends 60, 61 are accepted as genuine.
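The healing decision can be sketched as a predicate over the two ridge-end positions and their data-set directions. The angle tolerance and maximum gap below are illustrative placeholders; the patent leaves the limits unspecified:

```python
import math

def should_heal(end_a, dir_a, end_b, dir_b,
                angle_tolerance_deg=20.0, max_gap=10.0):
    """Heal the gap between two ridge ends when their directions match
    within a limited range and the separation is small.  Tolerance and
    gap values are hypothetical, not from the source."""
    gap = math.dist(end_a, end_b)
    # Opposing ridge ends point toward each other, so compare dir_a
    # against dir_b rotated by 180 degrees, wrapped into [-180, 180).
    diff = abs((dir_a - (dir_b + 180.0) + 180.0) % 360.0 - 180.0)
    return gap <= max_gap and diff <= angle_tolerance_deg
```

When the predicate is true the two ends are joined and cease to be features; otherwise both are accepted as genuine ridge ends.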
The approach taken in the present invention allows faster processing of the cleaning and healing stage, in a manner which is accurate and is not to the detriment of subsequent stages and steps.
Extraction of representation data
Preferably after the above mentioned processing, the data necessary for comparison with the other print can be extracted from the representation in a way which accurately reflects the configuration of the fingerprint present, but which is suitable for use in the comparison process.
It is possible to fix coordinate axes to the representation and define the features/directions relative to those axes. However, this leads to problems when considering the impact of rotation, and to a high degree of interrelationship between the data.
Instead of this approach, with reference to Figure 8, one approach of the present invention will now be explained. Within the illustration, a first bifurcation feature 80, a second bifurcation feature 81 and a ridge end 83 are present. These form nodes which are then joined to one another so that a triangle is formed. Extrapolation of this process to a larger number of minutia features gives a large number of triangles. A print can typically be represented by 50 to 70 such triangles. The Delaunay triangulation approach is preferred. Whilst this approach is suitable for use in the new mathematical coding of the information extracted set out below, the use of Delaunay triangulation does not extract the data in the most robust way.
In the alternative approach, developed by the applicant, an entirely new approach is taken. Referring to Figure 9, a series of features 120a through 120l are identified within a representation 122. A number of approaches can be used to identify the features to include in a series. Firstly, it is possible to identify all features in the representation and join features together to form triangles (for instance, using Delaunay triangulation). Having done so, one of the triangles is selected and this provides the first three features of the series. One of the triangles adjoining the first triangle is then selected at random and this provides a further feature for the series. Another triangle adjoining the pair is then selected at random, and so on until the desired number of features are in the series. In a second approach, a feature is selected (for instance, at random) and all features within a given radius of the first feature are included in the series. The radius is gradually increased until the series includes the desired number of features.
Having established the series of features, the position of each of these features is considered and used to define a centre 124. Preferably, and as illustrated in this embodiment, this is done by considering the X and Y position of each of the features and obtaining a mean for each. The mean X position and mean Y position define the centre 124 for that group of features 120a through 120l. Other approaches to the determination of the centre are perfectly useable. Instead of defining triangles with features at each apex, the new approach uses the centre 124 as one of the apexes for each of the triangles. The other two apexes for the first triangle 126 are formed by features 120a and 120b. The next triangle 128 is formed by centre 124, feature 120b and feature 120c. Other triangles are formed in a similar way, preferably moving around the centre 124 in sequence. The set of triangles formed in this approach is a unique, simple and easy-to-describe data set. The approach is more robust than the Delaunay triangulation described previously, particularly in relation to distortion. Furthermore, the improvement is achieved without massively increasing the amount of data that needs to be stored and/or the computing power needed to process it. For comparison purposes, Figure 10 illustrates the Delaunay triangulation approach applied to the same set of features. Either the first, Delaunay triangulation based approach or the second, radial triangulation approach extracts data which is suitable for formatting according to the preferred approach of the present process.
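A minimal sketch of this radial triangulation, assuming simple (x, y) feature positions and the hypothetical `radial_triangulation` helper; ordering the features by angle around the centre is one way of realising "moving around the centre in sequence":

```python
import math

def radial_triangulation(features):
    """Radial triangulation sketch: the centre is the mean X/Y position
    of the features; each triangle uses the centre as one apex and two
    neighbouring features (taken in order around the centre) as the others."""
    cx = sum(x for x, y in features) / len(features)
    cy = sum(y for x, y in features) / len(features)
    centre = (cx, cy)
    # Order the features by angle so the triangles move around the centre
    # in sequence, giving a unique, easy-to-describe set of triangles.
    ordered = sorted(features, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    triangles = []
    for i, f in enumerate(ordered):
        nxt = ordered[(i + 1) % len(ordered)]
        triangles.append((centre, f, nxt))
    return centre, triangles
```

Note that n features yield n triangles, one per pair of angular neighbours around the centre.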
Format of representative data
Having considered the print in one of the above mentioned ways to extract the representative data, the data must be suitably mathematically coded to allow the comparison process and here a different approach is taken to that considered before. The approach presents the extracted data in vector form, and so allows easy comparison between expressions of different representations.
Particularly with reference to the first approach, for a given triangle, a number of pieces of information are taken and used to form a feature vector. The information is: the type of the minutia feature each node represents (three pieces of information in total); the relative direction of the minutia features (three pieces of information in total); and the distances between the nodes (three pieces of information in total). Thus the feature vector is formed of nine pieces of information. The type of minutia can be either ridge end or bifurcation. The direction, a number between 0 and 2π radians, is calculated relative to the orientation, a number between 0 and π radians, of the opposing segment of the triangle as reference, and so the parameters of the triangle are independent of the image. In particular the feature vector may be expressed as:
FV = [GP, Reg, (T_1, A_1, D_12, T_2, A_2, D_23, T_3, A_3, D_31)] where
GP is the general pattern of the fingerprint;
Reg is the region of the fingerprint the triangle is in;
T_1 is the type of minutia 1;
A_1 is the direction of the minutia at location 1 relative to the direction of the opposing side of the triangle;
D_12 is the length of the triangle side between minutia 1 and minutia 2;
T_2 is the type of minutia 2;
A_2 is the direction of the minutia at location 2 relative to the direction of the opposing side of the triangle;
D_23 is the length of the triangle side between minutia 2 and minutia 3;
T_3 is the type of minutia 3;
A_3 is the direction of the minutia at location 3 relative to the direction of the opposing side of the triangle;
D_31 is the length of the triangle side between minutia 3 and minutia 1.
To avoid the same feature vector representing two symmetrical triangles, the features are recorded for all the triangles in the same order (either clockwise or anticlockwise). A rule of starting with the furthest feature to the left is used, but other such rules could be applied.
As each triangle considered is independent of the others and is also independent of the print image, this addresses the problem of rotation in the comparison.
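A sketch of how such a nine-element feature vector might be assembled for one triangle; the input format, the `triangle_feature_vector` name and the exact convention for measuring a minutia direction against its opposing side are illustrative assumptions:

```python
import math

def side_orientation(p, q):
    """Orientation of a triangle side, a number between 0 and pi."""
    return math.atan2(q[1] - p[1], q[0] - p[0]) % math.pi

def triangle_feature_vector(gp, reg, minutiae):
    """Feature vector for one triangle: for each node its minutia type,
    its direction relative to the opposing side, and the side length to
    the next node, giving nine values after GP and Reg.
    minutiae: three (x, y, type, direction) tuples in a fixed order."""
    pts = [(m[0], m[1]) for m in minutiae]
    fv = [gp, reg]
    for i, (x, y, mtype, mdir) in enumerate(minutiae):
        # Direction relative to the side opposite this node.
        opp_a, opp_b = pts[(i + 1) % 3], pts[(i + 2) % 3]
        rel_dir = (mdir - side_orientation(opp_a, opp_b)) % (2 * math.pi)
        # Side length to the next node in the fixed traversal order.
        nxt = pts[(i + 1) % 3]
        dist = math.hypot(nxt[0] - x, nxt[1] - y)
        fv.extend([mtype, rel_dir, dist])
    return fv
```

Traversing the nodes in a fixed (e.g. clockwise) order, as the text requires, keeps the vector unambiguous for symmetrical triangles.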
Advantageously the second data extraction approach described above is also suited to being mathematically coded using the vector format, and so allows comparison with data extracted from other representations. The pieces of information used to form the feature vector in this case are: the general pattern of the fingerprint; the type of minutia; the direction of the minutia relative to the image; the radius of the minutia from the centre or centroid; the length of the polygon side between a minutia and the minutia next to it; and the surface area of the triangle defined by the minutia, the minutia next to it and the centroid. In particular the vector may be expressed as:
FV = [GP, (T_1, A_1, R_1, L_12, S_1), ..., (T_k, A_k, R_k, L_k,k+1, S_k), ..., (T_N, A_N, R_N, L_N,1, S_N)] where
GP is the general pattern of the fingerprint;
T_k is the type of minutia k;
A_k is the direction of minutia k relative to the image;
R_k is the radius between the centroid and minutia k;
L_k,k+1 is the length of the polygon side between minutia k and minutia k+1; and
S_k is the surface area of the triangle defined by minutia k, minutia k+1 and the centroid.
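The per-minutia quantities of this second vector can be sketched as follows, assuming (x, y, type, direction) tuples already ordered around the centroid; the helper name and tuple layout are assumptions:

```python
import math

def radial_feature_vector(gp, minutiae):
    """Radial-triangulation feature vector sketch: for each minutia k,
    record (T_k, A_k, R_k, L_k,k+1, S_k) relative to the centroid.
    minutiae: (x, y, type, direction) tuples ordered around the centroid."""
    cx = sum(m[0] for m in minutiae) / len(minutiae)
    cy = sum(m[1] for m in minutiae) / len(minutiae)
    fv = [gp]
    n = len(minutiae)
    for k, (x, y, mtype, mdir) in enumerate(minutiae):
        nx, ny = minutiae[(k + 1) % n][:2]
        r = math.hypot(x - cx, y - cy)       # R_k: radius to the centroid
        side = math.hypot(nx - x, ny - y)    # L_k,k+1: polygon side length
        # S_k: surface area of the triangle (minutia k, minutia k+1, centroid)
        area = abs((x - cx) * (ny - cy) - (nx - cx) * (y - cy)) / 2.0
        fv.append((mtype, mdir, r, side, area))
    return fv
```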
When compared with the expression of the vector set out above in the context of the first data extraction approach, it should be noted that the region of the fingerprint is no longer considered. The set of features can extend across region boundaries and so it is potentially not appropriate to consider one region in the vector. The region could still be considered, however, and the expression set out below is a suitable one in that context, with the region designated Reg and the other symbols having the meanings outlined above. Note that a separate region is possible for each minutia.
FV = [GP, (T_1, A_1, R_1, Reg_1, L_12, S_1), ..., (T_k, A_k, R_k, Reg_k, L_k,k+1, S_k), ..., (T_N, A_N, R_N, Reg_N, L_N,1, S_N)]
Using the types of format described above, it is possible to present the data extracted from the representations in a format particularly useful to the comparison stage.
Comparison Approaches
A number of different approaches are possible to the comparison between a feature vector of the above mentioned type which represents the print from an unknown source and a feature vector which represents the print from the known source. A match/not match result may simply be stated. However, substantial benefits exist in making the comparison in such a way that a measure of the strength of a match can be stated.
Likelihood ratio approach
One general type of approach that can be taken, which allows the comparison to be expressed in terms of a measure of the strength of the match is through the use of a likelihood ratio.
The likelihood ratio is the quotient of two probabilities: one is the probability of the two feature vectors conditioned on their being from the same source, the other is the probability of the two feature vectors conditioned on their being from different sources. Feature vectors obtained according to the first data extraction approach and/or the second data extraction approach described above can be compared in this way, the differences being in the data represented in the feature vectors rather than in the comparison stage itself. In each case, therefore, the approach can be derived from the expression:
LR = Pr(fv_s, fv_m | H_p) / Pr(fv_s, fv_m | H_d)
where the feature vector fv contains the information extracted from the representation and formatted. The addition of the subscript s to this abbreviation denotes that a feature vector comes from the suspect, and the addition of the subscript m denotes that a feature vector originates from the crime. The symbol fv_s then denotes a feature vector from the known source or suspect, and fv_m denotes the feature vector originating from an unknown source at the crime scene. For modelling purposes it is useful to classify a feature vector into discrete quantities (which may include general pattern, region, type and other data) and continuous quantities (which may include the distances between minutiae, relative directions and other data).
The preferred forms for the quotient in the context of the first approach and second approach are discussed in more detail below in the context of their use in the data driven approach to the comparison stage.
Within the general concept of a likelihood ratio approach, a number of ways of implementing such an approach exist. One such approach which allows the comparison to be expressed in terms of a measure of the strength of the match is through the use of a data driven approach.
Data driven approach
In general terms, the data driven approach involves the consideration of a quotient defined by a numerator which considers the variation in the data which is extracted from different representations of the same fingerprint and by a denominator which considers the variation in the data which is extracted from representations of different fingerprints. The output of the quotient is a likelihood ratio.

In order to quantify the likelihood ratio, the feature vector for the first representation, the crime scene, and the feature vector for the second representation, the suspect, are obtained, as described above. The difference between the two vectors is effectively the distance between the two vectors. Once the distance has been obtained it is compared with two different probability distributions obtained from two different databases.
In the first instance, the probability distribution for these distances is estimated from a database of prints taken from the same finger. A large number of pairings of prints are taken from the database and the distance between them is obtained. This involves a similar approach to that described above. Each of the prints has data extracted from it and that data is formatted as a feature vector. The differences between the two feature vectors give the distance between that pairing. Repeating this process for a large number of pairings gives a range of distances with different frequencies of occurrence. A probability distribution reflecting the variation between prints of the same finger is thus obtained.
Ideally, the database would be obtained from a number of prints taken from the same finger of the suspect. However, the approach can still be applied where the prints are taken from the same finger, but that finger is someone's other than the suspect's. This database needs to reflect how a print (more particularly the resulting triangles and their respective feature vectors) from the same finger changes with pressure and substrate. The database is formed from a significant number of sets of information, each set being a large number of prints taken from the same finger under the full range of conditions encountered in practice. The database is populated by the identification, by an operator, of corresponding triangles in several applications of the same finger. Alternatively, a smaller set of prints can be processed as described above and distortion functions can then be calculated. The preferred method is thin plate splines, but other methods exist. The distortion function can then be applied to other prints to simulate further sets of data.
In the second instance, the probability distribution for these distances is estimated from a database of prints taken from different fingers. Again a large number of pairings of prints are taken from the database and the distance between them obtained. The extraction of data, formatting as a feature vector, calculation of the distance using the two feature vectors and determination of the distribution is performed in the same way, but uses the different database.
This different database needs to reflect how a print (more particularly the resulting triangles and their respective feature vectors) from a number of different fingers varies between fingers and, potentially, with the various pressures and substrates involved. Again, the database is populated by the identification, by an operator, of triangles in the various representations obtained from the different fingers of different persons.
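Building either distance distribution from a database of prints can be sketched as below; the pairing lists, the toy `squared_distance` measure and the flat feature-vector format are assumptions for illustration:

```python
def squared_distance(fv_a, fv_b):
    """Toy distance between two (already normalised) feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(fv_a, fv_b))

def distance_distribution(prints, pairing, distance):
    """Collect distances for many pairings of prints drawn from a database.
    For the within-finger database the pairing holds index pairs of prints
    from the same finger; for the between-finger database it holds pairs
    of prints from different fingers."""
    return [distance(prints[i], prints[j]) for i, j in pairing]
```

The resulting list of distances, with its frequencies of occurrence, is the empirical probability distribution used in the comparison.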
Having established the manner in which the databases and probability distributions are obtained, the comparison of a crime scene print against a suspect print is considered further.
The numerator may thus be thought of as considering a first representation, obtained from a crime scene or an item linked to a crime, against a second representation from a suspect, through an approach involving: taking and/or generating a number of example representations of the second representation; considering the example representations as a number of triangles; considering the value of the feature vector for a given triangle in respect of each of the example representations; obtaining the feature vector value of the first representation; forming a probability distribution of the frequency of the cross-differences of different feature vector values for a given triangle between example representations; and comparing the difference between the feature vector value of the first representation and the feature vector value of the second representation with the probability distribution.
The denominator may thus be thought of as considering the second representation obtained from a suspect against a series of representations taken from a population through an approach involving: taking or generating a number of example representations of representations taken from a population; considering the example representations as a number of triangles; considering the values of the feature vectors in respect of each of the example representations; forming a probability distribution of the frequency of differences between the feature vector of the first representation and the different feature vector values from the example representations; obtaining the feature vector value of the second representation; comparing the difference between the feature vector value of the first representation and the feature vector value of the second representation with the probability distribution.
Applying the data driven approach, in the context of the first data extraction approach (Delaunay triangulation), and after some algebraic operations, a probability for the numerator of the likelihood ratio is computed using the following formula:
Num = Σ { Pr(d(fv_s,c, fv_m,c) | fv_s,d, fv_m,d, H_p) : for all fv_s,d and fv_m,d such that fv_s,d = fv_m,d }
where
fv means feature vector, c means continuous, d means discrete, m means mark and s means suspect, and therefore:
fv_m,c : continuous data of the feature vector from the mark
fv_m,d : discrete data of the feature vector from the mark
fv_s,c : continuous data of the feature vector from the suspect
fv_s,d : discrete data of the feature vector from the suspect
d(fv_s,c, fv_m,c) is the distance measured between the continuous data of the two feature vectors from the mark and the suspect
H_p is the prosecution hypothesis, that is that the two feature vectors originate from the same source. Notice that, conditioning on H_p, fv_s,c and fv_m,c become measurements extracted from the same finger of the same person. The subscript in the summation symbol means that the probabilities on the right-hand side of the equation are added up for all the cases where the values of the discrete quantities of the feature vectors coincide. On some occasions some or all of the discrete variables are present in the fingermark. For these cases the index of the summation is replaced by the values of the quantities that are not present. The summation symbol is removed when all discrete quantities are present in the fingermark.
The expression d(fv_s,c, fv_m,c) denotes a distance between the continuous quantities of the feature vectors for the prints. The continuous quantities in a feature vector are the lengths of the triangle sides and the minutia directions relative to the opposite side of the triangle. There are a number of distance measures that can be used, but the distance measure described below is preferred. This distance measure is computed by first subtracting the feature vectors term by term. The result is a vector containing nine quantities. This is then normalised to ensure that the lengths and angles are given equal weighting. Taking the sum of the squares of the differences considered in this way gives a single value.
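The preferred distance measure described above can be sketched as follows, with the angle and length normalisation scales as explicit assumptions (the patent does not state the scales), and the continuous data split into angle and length groups for clarity:

```python
import math

def feature_vector_distance(fv_a, fv_b, length_scale, angle_scale=2 * math.pi):
    """Preferred-distance sketch: subtract the continuous quantities term
    by term, normalise so that lengths and angles carry equal weight, then
    take the sum of squares. Each fv is given as (angles, lengths); the
    normalisation scales are assumptions."""
    angles_a, lengths_a = fv_a
    angles_b, lengths_b = fv_b
    total = 0.0
    for a, b in zip(angles_a, angles_b):
        d = abs(a - b) % (2 * math.pi)
        d = min(d, 2 * math.pi - d)             # wrap-around angle difference
        total += (d / angle_scale) ** 2         # normalised angle term
    for a, b in zip(lengths_a, lengths_b):
        total += ((a - b) / length_scale) ** 2  # normalised length term
    return total
```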
In such a case, and after some algebraic operations, a probability for the denominator of the likelihood ratio is computed using the following formula,
Den = Σ { Pr(d(fv_s,c, fv_m,c) | fv_s,d, fv_m,d, H_d) Pr(fv_m,d | H_d) : for all fv_s,d and fv_m,d such that fv_s,d = fv_m,d }
where fv means feature vector, c means continuous, d means discrete, m means mark and s means suspect, and therefore:
fv_m,c : continuous data of the feature vector from the mark
fv_m,d : discrete data of the feature vector from the mark
fv_s,c : continuous data of the feature vector from the suspect
fv_s,d : discrete data of the feature vector from the suspect
d(fv_s,c, fv_m,c) is the distance measured between the continuous data of the two feature vectors from the mark and the suspect
H_d is the defence hypothesis, that is that the two feature vectors originate from different sources.
Several distance measures exist but the one described above is preferred. The subscript in the summation symbol means that the probabilities on the right-hand side of this equation are added up for all the cases where the values of the discrete quantities of the feature vectors coincide. On some occasions some or all of the discrete variables are present in the fingermark. For these cases the index of the summation is replaced by the values of the quantities that are not present. The summation symbol is removed when all discrete quantities are present in the fingermark.
Conditioning on H_d, that is "the prints originated from different sources", the feature vectors come from different fingers of different people. The probability distribution for distances d(fv_s,c, fv_m,c) can be estimated from a reference database of fingerprints. This database needs to reflect how much variability there is in respect of all prints (again, more particularly the resulting triangles and their feature vectors) between different sources. This database can readily be formed by taking existing records of different source fingerprints and analysing them in the above mentioned way.
The second factor Pr(fv_m,d | H_d) is a probability distribution of discrete variables including general pattern. A probability distribution for general pattern was computed based on frequencies compiled by the FBI for the National Crime Information Center in 1993. These data can be found at http://home.att.net/~dermatoglyphics/mfre/. A probability distribution for the remaining discrete variables can be estimated from a reference database using a number of methods. A probability tree is preferred because it can more efficiently code the asymmetry of this distribution; for example, the number of regions depends on the general pattern.
Again applying the data driven approach, and in the context of the second data extraction approach (radial triangulation), a probability for the numerator of the likelihood ratio is computed using the following formula:

Num = Pr(d(fv_s, fv_m) | H_p)

where d(fv_s, fv_m) is the distance measured between the discrete and continuous data of the two feature vectors from the mark and the suspect;
H_p is the prosecution hypothesis, that is the two vectors originate from the same source.
The probability for the denominator is computed using the following formula:

Den = Pr(d(fv_s, fv_m) | H_d)

where
H_d is the defence hypothesis, that is the two vectors originate from different sources.
In each case, similar approaches to those detailed above can be used to generate the relevant probability distributions.
In the second approach, it is possible to measure the distance between feature vectors in the above described manner of the first data extraction approach in respect of each orientation of the polygon in the mark and suspect representations. However, the large number of minutia which may now be considered in a feature vector (for instance 12) would mean that there are very many rotations (for instance 12 rotations) of the feature vector which must be considered, compared with the more practical three of the first approach. The use of a greater number of minutia is desirable as this increases the discriminating power of the process. Investigations to date suggest that by the time 12 minutia are being considered, there is little or no overlap between the within finger distribution and the between finger distribution illustrated in Figure 11.
In a modification, therefore, a feature vector is first considered against another feature vector in terms of only part of the information it contains. In particular, the information apart from the minutia direction can be compared. In the comparison, the data set included in one of the vectors is fixed in orientation and the data set included in the other vector with which it is being compared is rotated. If the data set relates to three minutia then three rotations would be considered, if it related to twelve then twelve rotations would be used. The extent of the fit at each position is considered and the best fit rotation obtained. This leads to the association of minutiae pairs across both feature vectors.
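The best-fit rotation search just described can be sketched as below; the `mismatch` scoring function (covering type, side length, radius and area, but not direction) is left abstract, and the helper name is an assumption:

```python
def best_fit_rotation(fixed, rotated, mismatch):
    """Try every cyclic rotation of one polygon's minutia sequence against
    the other and keep the rotation with the smallest total mismatch.
    `mismatch` scores one pair of minutiae on the rotation-comparable data
    (type, side length, radius, area) but not direction."""
    n = len(rotated)
    best_shift, best_score = 0, float("inf")
    for shift in range(n):
        score = sum(mismatch(fixed[i], rotated[(i + shift) % n])
                    for i in range(n))
        if score < best_score:
            best_shift, best_score = shift, score
    return best_shift, best_score
```

The winning shift pairs each minutia in one feature vector with a minutia in the other, ready for the direction comparison that follows.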
In respect of the best fit rotation, in each case, the process then goes on to compare the remaining data in each set, the minutia direction. To achieve this, the minutiae directions are made independent of the orientation of the print on the image. The approach taken on direction is described with reference to Figures 13a through 13c.

In Figure 13a, a mark set of minutia 200 and a suspect set of minutia 202 are being considered against one another. Each set is formed of four minutia, 204a, 204b, 204c, 204d and 206a, 206b, 206c, 206d respectively. The allocation of the minutia reference numerals reflects the suggested best match between the two sets arising from the consideration of the minutia type, the length of the polygon sides between minutia, and the surface of the polygon defined by the minutia and centroid. Each of the minutia has an associated direction 208a, 208b, 208c, 208d and 210a, 210b, 210c, 210d respectively. For the mark set 200 and the suspect set 202, a circle 212, 214 of radius one is taken. To the mark circle 212 is added a radius 216 for each of the minutia directions, see Figure 13b. To the suspect circle 214 is added a radius 218 for each of the minutia directions, Figure 13b. Rotation of one of the circles relative to the other allows the orientation of the minutia to be brought into agreement, according to the set of pairs of minutiae that were determined before, Figure 13c, and allows the extent of the match in terms of the minutiae directions for each pair of minutiae to be considered. In the illustrated case there is extensive agreement between the two circles and hence between the two marks in respect of the data being considered.

In effect, the match between the polygons is being considered in terms of the minutia type, the distance between minutia, the radius between the minutia and the centroid, the surface area of the triangle defined between the minutia and the centroid, and the minutia direction.
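The unit-circle comparison of minutia directions might be sketched as follows; taking the rotation offset as the circular mean of the pairwise direction differences is an assumption made for illustration, not the patent's stated method:

```python
import math

def direction_agreement(mark_dirs, suspect_dirs):
    """Rotate the suspect direction circle so the paired directions best
    agree with the mark circle, then report the residual per-pair
    differences. The rotation offset is taken as the circular mean of the
    pairwise direction differences (an assumption)."""
    # Circular mean of the differences gives a single rotation to apply.
    sx = sum(math.cos(m - s) for m, s in zip(mark_dirs, suspect_dirs))
    sy = sum(math.sin(m - s) for m, s in zip(mark_dirs, suspect_dirs))
    offset = math.atan2(sy, sx)
    residuals = []
    for m, s in zip(mark_dirs, suspect_dirs):
        d = abs(m - (s + offset)) % (2 * math.pi)
        residuals.append(min(d, 2 * math.pi - d))
    return offset, residuals
```

Small residuals correspond to the "extensive agreement between the two circles" in the illustrated case.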
All of these considerations serve to complement one another in the comparison process. One or more may be omitted, however, and a practical comparison still be carried out. The comparison provides a distance which can be considered against the two distributions in the manner described with reference to Figures 11 and 12 below. Various means can be used for computing the distance, including distance algorithms (such as Euclidean, Pearson, Manhattan etc.) or neural networks.
Assessing a comparison using the data driven approaches
Having extracted the data, formatted it in feature vector form and compared two feature vectors to obtain the distance between them, that distance is compared with the two probability distributions obtained from the two databases to give the assessment of match between the first and second representation.
In Figure 11, the distribution for prints from the same finger is shown, S, and shows good correspondence between examples apart from in cases of extreme distortion or lack of clarity. Almost the entire distribution is close to the vertical axis. Also shown is the distribution for prints from the fingers of different individuals, D. This shows a significant spread, from a small number of extremely different cases, through an average of very different cases, to a number of only slightly different cases. The distribution is spread widely across the horizontal axis.

In Figure 12, these distributions are considered against a distance I obtained from the comparison of an unknown source (for instance, crime scene) and known source (for instance, suspect) fingerprint in the manner described above. At this distance, I, the values (Q and R respectively) of the distributions S and D can be taken, dotted lines. The likelihood ratio of a match between the two prints is then Q/R. In the illustrated case, distance I is small and so there is a strong probability of a match. If distance I were great then the value of Q would fall dramatically and the likelihood ratio would fall dramatically as a result. The latter approach to the distance measure issue is advantageous as it achieves the result in a single iteration, provides a continuous output and does not require the determination of thresholds.
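Reading the values Q and R off the two distributions at the observed distance I and forming the ratio Q/R can be sketched as below; the simple window-count density estimate is a modelling assumption (a kernel or histogram estimate would serve equally):

```python
def likelihood_ratio(distance, same_source, different_source, window):
    """Evaluate the within-finger distribution S and the between-finger
    distribution D at the observed distance I, giving values Q and R, and
    return the likelihood ratio Q / R. A simple window-count density
    estimate is used here (an assumption)."""
    def density(samples, x):
        hits = sum(1 for s in samples if abs(s - x) <= window / 2.0)
        return hits / (len(samples) * window)
    q = density(same_source, distance)        # value of S at distance I
    r = density(different_source, distance)   # value of D at distance I
    return float("inf") if r == 0 else q / r
```

A small observed distance gives a large Q relative to R and hence a large likelihood ratio, matching the behaviour described for Figure 12.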
The databases used to define the two probability distributions preferably reflect the number of minutia being considered in the process. Thus different databases are used where three minutia are being considered than where twelve minutia are being considered. The manner in which the databases are generated and applied is generally speaking the same; variations in the way the distances are calculated are possible without changing the operation of the database set up and use. Equally, it is possible to form the various databases from a common set of data, but with that data being considered using a different number of minutia to form the database specific to that number of minutia.
The databases may be generated in advance in respect of the numbers of minutia expected to be considered in practice, for instance 3 to 12, with the relevant databases being used for the number of minutia being considered in a particular case, for instance 6. Pregeneration of the databases avoids any delays whilst the databases are generated. However, it is also possible to have to hand the basic data which can be used to generate the databases and generate the database required in a specific case in response to the number of minutia which need to be considered. Thus, a mark may be best considered using six minutia and the desire to consider this mark would lead to the database being generated for six minutia from the basic database of fingerprint representations by considering that using six minutia. The data set size which needs to be stored would be reduced as a result.
In certain circumstances it is also possible to generate the probability distributions in advance. This can occur, for instance, where the within finger variation is being considered and that is considered on the basis of a single (or several) finger(s) not from the suspect. In the case of the model based approach, discussed below, it is possible to generate and store both probability distributions in advance.
Significant benefits from this overall approach arise due to: incorporating distortion and clarity in the numerator of the likelihood ratio; introducing the distance measure between the quantities in the feature vector; the use of a probability distribution for distances between feature vectors from the same source and its estimation from dedicated sets of data of replicates of the same finger; and the use of a probability distribution for the distances between prints of different sources and its estimation from a reference database containing prints from different sources.
The description presented here exemplifies the use of this methodology, but the methodology is readily adapted for use in other forms. For instance, the Delaunay triangulation form could be extended to cover more than three minutiae.

Model based approach
Within the general concept of a likelihood ratio approach, another approach which allows the comparison to be expressed in terms of a measure of the strength of the match is through the use of a model based approach.
In such an approach, and after some algebraic operations, a probability for the numerator of the likelihood ratio is computed using the following formula,

Num = Σ { Pr(d(fv_s,c, fv_m,c) | fv_s,d, fv_m,d, H_p) : for all fv_s,d and fv_m,d such that fv_s,d = fv_m,d }

where fv means feature vector, c means continuous, d means discrete, m means mark and s means suspect, and therefore:
fv_m,c : continuous data of the feature vector from the mark
fv_m,d : discrete data of the feature vector from the mark
fv_s,c : continuous data of the feature vector from the suspect
fv_s,d : discrete data of the feature vector from the suspect
d(fv_s,c, fv_m,c) is the distance measured between the continuous data of the two feature vectors from the mark and the suspect
H_p is the prosecution hypothesis, that is the two feature vectors originate from the same source;
As noted before, when conditioning on H_{p} the continuous quantities fv_{s,c} and fv_{m,c} become measurements of the same finger and person. The subscript in the summation symbol means that the probabilities on the right-hand side of the equation are added up for all the cases where the values of the discrete quantities of the feature vectors coincide. On some occasions, some or all of the discrete variables are not present in the fingermark. For these cases the index of the summation ranges over the values of the quantities that are not present. The summation symbol is removed when all discrete quantities are present in the fingermark. The probability distribution for fv_{m,c} is computed using a Bayesian network estimated from a database of prints taken from the same finger, as described above. Many algorithms exist for estimating the graph and conditional probabilities in a Bayesian network, but the preferred algorithms are the NPC algorithm for estimating the acyclic directed graph, see Steck H., Hofmann R. and Tresp V. (1999), Concept for the PRONEL Learning Algorithm, Siemens AG, Munich, and/or the EM algorithm for estimating the conditional probability distributions, see Lauritzen S.L. (1995), The EM algorithm for graphical association models with missing data, Computational Statistics & Data Analysis, 19:191-201. The contents of both documents, particularly in relation to the algorithms they describe, are incorporated herein by reference.
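By way of illustration only (the feature-vector encoding, the probability tables and the function names below are invented for this sketch and do not appear in the description), the summation over discrete quantities that are not present in the fingermark can be sketched as:

```python
from itertools import product

def numerator_sum(p_continuous, p_discrete, observed_d, missing_vars, domains):
    """Hypothetical sketch: sum a same-source probability over every possible
    value of the discrete quantities not observed in the mark.
    p_continuous: callable giving the continuous-data probability for a full
    assignment of discrete values (stubbed here); p_discrete: dict mapping a
    full discrete assignment to its probability; observed_d: discrete values
    seen in the mark; missing_vars: names of unobserved discrete quantities;
    domains: possible values per variable."""
    total = 0.0
    for combo in product(*(domains[v] for v in missing_vars)):
        d = dict(observed_d)
        d.update(zip(missing_vars, combo))          # complete the assignment
        key = tuple(sorted(d.items()))              # canonical lookup key
        total += p_continuous(d) * p_discrete.get(key, 0.0)
    return total

# Toy usage with made-up numbers: one unobserved discrete variable.
domains = {"type": ["ridge_ending", "bifurcation"]}
p_disc = {(("type", "ridge_ending"),): 0.6, (("type", "bifurcation"),): 0.4}
num = numerator_sum(lambda d: 0.5, p_disc, {}, ["type"], domains)  # 0.5
```

When all discrete quantities are observed, `missing_vars` is empty, the loop runs once over the empty product, and the sum collapses to a single term, mirroring the removal of the summation symbol described above.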
Further explanation of the use of Bayesian networks follows below.
The manner in which the first representation is considered against the second representation, through the use of a probability distribution, is as described above, save that the probability distribution is computed using the Bayesian network approach rather than from a series of example representations of the second representation.
Using this approach, and after some algebraic operations, a probability for the denominator of the likelihood ratio is computed using the following formula,

Den = Σ_{fv_{m,d} = fv_{s,d}} Pr(fv_{m,c} | fv_{m,d}, H_{d}) Pr(fv_{m,d} | H_{d})

where fv means feature vector, c means continuous, d means discrete, m means mark and s means suspect, and therefore:
fv_{m,c}: continuous data of the feature vector from the mark
fv_{m,d}: discrete data of the feature vector from the mark
fv_{s,c}: continuous data of the feature vector from the suspect
fv_{s,d}: discrete data of the feature vector from the suspect
d(fv_{s,c}, fv_{m,c}) is the distance measured between the continuous data of the two feature vectors from the mark and the suspect
H_{d} is the defence hypothesis, that is, the two feature vectors originate from different sources.
The subscript in the summation symbol means that the probabilities on the right-hand side of the equation are added up for all the cases where the values of the discrete quantities of the feature vectors coincide. On some occasions, some or all of the discrete variables are not present in the fingermark. For these cases the index of the summation ranges over the values of the quantities that are not present. The summation symbol is removed when all discrete quantities are present in the fingermark.
The probability distribution in the first factor of the right-hand side of the equation above is computed with a Bayesian network estimated from a database of feature vectors extracted from different sources. There are many methods for estimating Bayesian networks, as noted above, but the preferred methods are the NPC algorithm of Steck et al., 1999 for estimating an acyclic directed graph and/or the EM algorithm of Lauritzen, 1995 for the conditional probability distributions. There is a Bayesian network for each combination of values of the discrete variables. The second factor Pr(fv_{m,d} | H_{d}) is estimated in the same manner as described for the data-driven approach above.
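The arrangement of one Bayesian network per combination of discrete values can be sketched as a lookup keyed by the discrete tuple. This is a hypothetical sketch only: the class and names below are invented, and the "models" are stubbed as callables rather than real Bayesian networks.

```python
class DiscreteKeyedModels:
    """Holds one model per combination of discrete values and selects
    the model matching the discrete part of a feature vector."""

    def __init__(self):
        self.models = {}  # maps a tuple of discrete values to a model

    def add(self, discrete_values, model):
        self.models[tuple(discrete_values)] = model

    def select(self, discrete_values):
        # Chooses the network trained for this combination of values.
        return self.models[tuple(discrete_values)]

# Toy usage: each "model" is a stub returning a fixed probability.
bank = DiscreteKeyedModels()
bank.add(("bifurcation", "loop"), lambda fv_c: 0.1)
bank.add(("ridge_ending", "loop"), lambda fv_c: 0.3)
p = bank.select(("ridge_ending", "loop"))([0.5, 1.2])  # evaluates the stub
```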
Again the approach to considering the second representation against the population representations is as detailed above, save for the probability distribution being computed using the Bayesian network approach.
Assessing a comparison using the model based approach
Given a feature vector fv_{s} from a known source and a feature vector fv_{m} from an unknown source, the numerator is given by the equation above and is calculated with a Bayesian network dedicated to modelling distortion. The second factor in the denominator is calculated in the same manner as with the data-driven approach. The first factor is computed using Bayesian networks: a Bayesian network is selected for the combination of values of fv_{m,d}, which is then used for computing a probability.
This process is repeated for all values in the index of the summation. The likelihood ratio is then obtained by computing the quotient of the numerator over the denominator. Significant benefits from this approach arise due to: using Bayesian networks for computing the numerator and denominator of the likelihood ratio; estimating Bayesian networks for the numerator from dedicated databases containing replicates of the same finger under several distortion conditions; and estimating Bayesian networks for the denominator from dedicated databases containing prints from different fingers and people.
The description above is an example of using Bayesian networks for calculating the likelihood ratio, but the invention is not limited to it. Another example is estimating one Bayesian network per general pattern. This invention can also be used for more than three minutiae by defining suitable feature vectors.
As mentioned above, in order to estimate the numerator and denominator in the above likelihood ratio consideration, it is possible to use a Bayesian network representation to specify a probability distribution. For brevity of explanation the concept of a Bayesian network is presented through an example. A Bayesian network is an acyclic directed graph together with conditional probabilities associated with the nodes of the graph. Each node in the graph represents a quantity and the arrows represent dependencies between the quantities. Figure 14 displays an acyclic graph of a Bayesian network representation for the quantities X, Y and Z. This graph contains the information that the joint distribution of X, Y and Z is given by the equation
p(x,y,z) = p(x)p(y|x)p(z|y) for all x, y, z
and so the joint distribution is completely specified by the graph and the conditional probability distributions {p(x): for all x}, {p(y|x): for all x and y} and {p(z|y): for all z and y}. A detailed presentation on Bayesian networks can be found in a number of books, such as Cowell R.G., Dawid A.P., Lauritzen S.L. and Spiegelhalter D.J. (1999), Probabilistic Networks and Expert Systems.
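The factorisation p(x,y,z) = p(x)p(y|x)p(z|y) for the chain X → Y → Z can be checked numerically with a minimal sketch; the probability tables below are invented for illustration.

```python
# Conditional probability tables for binary variables (values 0/1);
# the numbers are invented for this example.
p_x = {0: 0.3, 1: 0.7}
p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # p_y_given_x[x][y]
p_z_given_y = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.25, 1: 0.75}}  # p_z_given_y[y][z]

def joint(x, y, z):
    """Joint probability from the chain factorisation X -> Y -> Z."""
    return p_x[x] * p_y_given_x[x][y] * p_z_given_y[y][z]

# The eight configurations of (X, Y, Z) must sum to 1, confirming that the
# graph plus its conditional tables fully specify the joint distribution.
total = sum(joint(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1))
```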
Claims
Priority Applications (6)
Application Number  Priority Date  Filing Date  Title 

GB0422786A GB0422786D0 (en)  2004-10-14  2004-10-14  Improvements in and relating to identifier investigation 
GB0422786.4  2004-10-14  
GB0502893A GB0502893D0 (en)  2005-02-11  2005-02-11  Improvements in and relating to identifier investigation 
GB0502893.1  2005-02-11  
US11084352 US20060083413A1 (en)  2004-10-14  2005-03-18  Identifier investigation 
US11/084,352  2005-03-18 
Applications Claiming Priority (1)
Application Number  Priority Date  Filing Date  Title 

EP20050799948 EP1800239A1 (en)  2004-10-14  2005-10-14  A process to improve the quality the skeletonisation of a fingerprint image 
Publications (1)
Publication Number  Publication Date 

WO2006040576A1 (en)  2006-04-20 
Family
ID=35588942
Family Applications (1)
Application Number  Title  Priority Date  Filing Date 

PCT/GB2005/003968 WO2006040576A1 (en)  2004-10-14  2005-10-14  A process to improve the quality the skeletonisation of a fingerprint image 
Country Status (2)
Country  Link 

EP (1)  EP1800239A1 (en) 
WO (1)  WO2006040576A1 (en) 
Cited By (1)
Publication number  Priority date  Publication date  Assignee  Title 

US8983153B2 (en)  2008-10-17  2015-03-17  Forensic Science Service Limited  Methods and apparatus for comparison 
Citations (3)
Publication number  Priority date  Publication date  Assignee  Title 

EP0636996A2 (en) *  1993-07-21  1995-02-01  Personix Ltd.  Apparatus and method for topological processing of fingerprints 
US6487662B1 (en) *  1999-05-14  2002-11-26  Jurij Jakovlevich Kharon  Biometric system for biometric input, comparison, authentication and access control and method therefor 
US20030063782A1 (en) *  2001-09-13  2003-04-03  Tinku Acharya  Method and apparatus to reduce false minutiae in a binary fingerprint image 
Also Published As
Publication number  Publication date  Type 

EP1800239A1 (en)  2007-06-27  application 
Similar Documents
Publication  Publication Date  Title 

Shufelt et al.  Fusion of monocular cues to detect manmade structures in aerial imagery  
Tuyls et al.  Practical biometric authentication with template protection  
Maio et al.  Direct grayscale minutiae detection in fingerprints  
Ross et al.  From template to image: Reconstructing fingerprints from minutiae points  
Heath et al.  A robust visual method for assessing the relative performance of edgedetection algorithms  
Moreno et al.  Face recognition using 3D surfaceextracted descriptors  
Ratha et al.  Robust fingerprint authentication using local structural similarity  
US6807286B1 (en)  Object recognition using binary image quantization and hough kernels  
US6263091B1 (en)  System and method for identifying foreground and background portions of digitized images  
Ross et al.  A deformable model for fingerprint matching  
Nomir et al.  A system for human identification from Xray dental radiographs  
US7236617B1 (en)  Method and device for determining a total minutiae template from a plurality of partial minutiae templates  
SanchezAvila et al.  Two different approaches for iris recognition using Gabor filters and multiscale zerocrossing representation  
US7359532B2 (en)  Fingerprint minutiae matching using scoring techniques  
US6466686B2 (en)  System and method for transforming fingerprints to improve recognition  
Wang et al.  Enhanced gradientbased algorithm for the estimation of fingerprint orientation fields  
US20060093188A1 (en)  Probabilistic exemplarbased pattern tracking  
US20060147094A1 (en)  Pupil detection method and shape descriptor extraction method for a iris recognition, iris feature extraction apparatus and method, and iris recognition system and method using its  
US20070140531A1 (en)  standoff iris recognition system  
Wu et al.  Gabor wavelet representation for 3D object recognition  
US20100232659A1 (en)  Method for fingerprint template synthesis and fingerprint mosaicing using a point matching algorithm  
Ross et al.  Fingerprint warping using ridge curve correspondences  
US20060023921A1 (en)  Authentication apparatus, verification method and verification apparatus  
Bansal et al.  Minutiae extraction from fingerprint imagesa review  
US7369688B2 (en)  Method and device for computerbased processing a template minutia set of a fingerprint and a computer readable storage medium 
Legal Events
Date  Code  Title  Description 

AK  Designated states 
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW 

AL  Designated countries for regional patents 
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG 

121  Ep: the epo has been informed by wipo that ep was designated in this application  
WWE  Wipo information: entry into national phase 
Ref document number: 2005799948 Country of ref document: EP 

NENP  Non-entry into the national phase in: 
Ref country code: DE 

WWP  Wipo information: published in national office 
Ref document number: 2005799948 Country of ref document: EP 