WO2015185883A1 - Synthetic latent fingerprint images - Google Patents

Synthetic latent fingerprint images

Info

Publication number
WO2015185883A1
WO2015185883A1 PCT/GB2015/000156 GB2015000156W
Authority
WO
WIPO (PCT)
Prior art keywords
fingerprint image
feature points
image
positions
initial
Prior art date
Application number
PCT/GB2015/000156
Other languages
English (en)
Inventor
Thomas Kirwan
Original Assignee
The Secretary Of State For Defence
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Secretary Of State For Defence filed Critical The Secretary Of State For Defence
Publication of WO2015185883A1


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12: Fingerprints or palmprints
    • G06V40/1347: Preprocessing; Feature extraction
    • G06V40/1335: Combining adjacent partial images (e.g. slices) to create a composite input or reference pattern; Tracking a sweeping finger movement
    • G06V40/13: Sensors therefor
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods

Definitions

  • This invention relates to synthetic latent fingerprint images, and to the training of trainee latent fingerprint analysts.
  • Latent fingerprint images may be developed from fingerprints by techniques such as powder dusting or cyanoacrylate ester fuming, as will be apparent to those skilled in the art, and then compared to database(s) of plain fingerprint images.
  • a plain fingerprint image of the person's finger may be taken under laboratory conditions, to include the whole area of the person's fingertip, and to give as clear a fingerprint as practically possible.
  • a plain fingerprint image may for example comprise lines upon a uniform background, the lines corresponding to ridges of the finger.
  • Latent fingerprint images developed from fingerprints that are unintentionally left behind on objects typically include the background colouration/texture of the object which was touched, and may have imperfections such as smearing and broken lines, poor contrast between lines and background, and may only comprise part of the person's full fingerprint.
  • the term Latent Fingerprint thus pertains to recordings of friction ridges (e.g. epidermal or papillary ridges as found on fingertips) under arbitrary and non-ideal conditions, and is in contrast to a Plain Fingerprint where the fingerprint is recorded under controlled and typically ideal conditions.
  • the matching of latent fingerprint images to plain fingerprint images is commonly done by identifying feature points in the latent fingerprint images, and checking whether the positions and the types of these feature points correspond to the positions and types of the feature points in the plain fingerprint images.
  • the feature points may for example comprise minutiae points such as ridge endings and ridge bifurcations.
  • the identification of feature points within latent fingerprint images is a difficult task, and is typically carried out by latent fingerprint analysts, each of whom may require up to 5 years of training in how to accurately identify and classify feature points in latent fingerprint images.
  • the training requires the use of latent fingerprint images that have known feature points, so that a trainee latent fingerprint analyst can attempt to identify and classify the feature points in the latent fingerprint image, and so that the feature points identified by the trainee latent fingerprint analyst can be evaluated against the known feature points. Then the trainee latent fingerprint analyst can be informed of which points they failed to identify or classify correctly.
  • Latent fingerprint images with known feature points are difficult to create: they require someone to place a fingerprint on a surface, forensically triage the print, and image it. They then require analysis by at least one or two experienced latent fingerprint analysts to identify and classify all the feature points present within them, before being passed to a trainee latent fingerprint analyst for training purposes.
  • a method of generating a synthetic latent fingerprint image and positions of feature points within the synthetic latent fingerprint image comprises: receiving an initial fingerprint image and positions of feature points within the initial fingerprint image; transforming the initial fingerprint image onto a background image to provide a synthetic latent fingerprint image, this including transforming one region of the fingerprint image differently than a separate region of the fingerprint image; mapping the positions of the feature points within the initial fingerprint image to positions of the feature points within the synthetic latent fingerprint image, based upon the transforming; and outputting the synthetic latent fingerprint image and the positions of the feature points within the synthetic latent fingerprint image, to permit the manual training of a fingerprint forensic analyst trainee, the training including the trainee identifying feature points of the synthetic latent fingerprint image and including assessment of a level of success of the trainee by comparison of positions of potential feature points identified by the trainee against the positions of the outputted feature points.
  • the method enables the generation of synthetic latent fingerprint images having known feature points, which can be used as latent fingerprint images having known feature points for the training and evaluation of latent fingerprint analysts.
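The receive/transform/map/output pipeline described above can be sketched in Python. This is an illustrative sketch only (the function names, the use of NumPy, and the simple subtractive superimposition are assumptions, not taken from the patent); the essential point it shows is that the same geometric transform applied to the fingerprint image must also be applied to the feature-point positions so that the outputted positions remain correct answers.

```python
import numpy as np

def affine(points, angle_deg, scale, center, offset):
    """Map (x, y) feature-point positions through the same rotation,
    scaling, and translation that was applied to the fingerprint image."""
    theta = np.deg2rad(angle_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    pts = np.asarray(points, dtype=float)
    return (pts - center) @ R.T * scale + center + offset

def superimpose(fingerprint, background, top_left):
    """Darken the background by the fingerprint's ridge intensity,
    simulating a print left behind on a patterned surface."""
    out = background.astype(float).copy()
    y, x = top_left
    h, w = fingerprint.shape
    out[y:y + h, x:x + w] = np.clip(out[y:y + h, x:x + w] - fingerprint, 0, 255)
    return out.astype(np.uint8)
```

A zero-degree, unit-scale call leaves a point where the translation puts it, which makes the point-mapping logic easy to sanity-check independently of the image transform.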
  • Synthetic Latent Fingerprint Image pertains to an image of a fingerprint where the fingerprint has at least one characteristic of a Latent Fingerprint as may be found during a forensic investigation, as opposed to a Plain Fingerprint.
  • the training of the fingerprint analyst is performed manually, in the sense that it is done by the trainee manually inspecting images.
  • the images may be displayed on a computer screen by a computer for these purposes.
  • the training may also be partly automated, requiring the trainee to input analysis in response to viewing the image, whereupon another image may be automatically displayed.
  • the assessment of the level of success of the trainee may similarly be automated and feedback may also be automatically provided by the above method.
  • the transforming of the initial fingerprint image onto the background image may comprise various operations such as: removing at least one region of the initial fingerprint image while retaining another region thereof. This may be performed to simulate when only part of a person's finger touches an object;
  • smearing at least one portion of the initial fingerprint image such that at least one region is smeared in preference to another region thereof.
  • This may be performed to simulate lateral movement of parts of a fingertip along an object that can occur when the finger touches an object.
  • This can be achieved optionally by providing a rotational smearing transformation with respect to an axis of rotation, that causes at least one region respectively distal to the axis of rotation to be smeared in preference to a region respectively proximate to the axis of rotation;
  • adding image noise to at least one region of the initial fingerprint image in preference to another region thereof. This may be performed to simulate image noise that may be introduced by processes such as powder dusting or cyanoacrylate ester fuming used to develop a fingerprint into a latent fingerprint image.
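As a minimal sketch of the region-selective noise operation just described, the function below adds Gaussian noise to one rectangular region only, leaving the rest of the image untouched. The rectangular parameterisation and the default sigma are illustrative assumptions, not values from the patent.

```python
import numpy as np

def add_regional_noise(image, region, sigma=25.0, rng=None):
    """Add Gaussian image noise to one rectangular region of the
    fingerprint image in preference to the rest, simulating uneven
    development (e.g. powder-dusting artefacts). `region` is
    (top, left, height, width)."""
    rng = np.random.default_rng() if rng is None else rng
    out = image.astype(float).copy()
    t, l, h, w = region
    out[t:t + h, l:l + w] += rng.normal(0.0, sigma, size=(h, w))
    return np.clip(out, 0, 255).astype(np.uint8)
```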
  • the method may further comprise repeating the steps of transforming the initial fingerprint image onto a background image, mapping, and outputting a plurality of times, with varying transformations. This may be used to generate a fingerprint image with more complex deterioration as compared to applying one transformation.
  • the method may further comprise repeating the steps of transforming the initial fingerprint image onto a background image, mapping, and outputting (optionally multiple times each), for example as listed above, to generate a plurality of synthetic latent fingerprint images from the initial fingerprint image.
  • the parameters that are used to transform the initial fingerprint image into a synthetic latent fingerprint image may be randomly varied in order to automatically produce multiple synthetic latent fingerprint images, each one being different to the others.
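The random variation of transformation parameters can be sketched as follows. The parameter names and ranges here are hypothetical illustrations (the patent does not specify them); the point is that each draw yields a distinct parameter set, so one initial image can seed many different synthetic latent images.

```python
import random

def random_transform_params(rng=None):
    """Draw one random set of transformation parameters. Ranges are
    illustrative assumptions, not taken from the patent."""
    rng = rng or random.Random()
    return {
        "rotation_deg": rng.uniform(-180.0, 180.0),
        "scale": rng.uniform(0.5, 1.0),
        "noise_sigma": rng.uniform(0.0, 40.0),
        "smear_length": rng.randint(0, 15),
        "crop_fraction": rng.uniform(0.0, 0.5),
    }

def make_batch(n, seed=0):
    """Generate n distinct parameter sets for n synthetic images."""
    rng = random.Random(seed)
    return [random_transform_params(rng) for _ in range(n)]
```

Seeding the generator makes a training set reproducible while still varying every image within it.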
  • the initial fingerprint image may be a plain fingerprint image comprising lines upon a uniform background, the lines corresponding to ridges of a finger.
  • the plain fingerprint image may be used multiple times to generate multiple latent fingerprint images all having the same starting point of the plain fingerprint image.
  • a synthetic latent fingerprint image resulting from performing the method a first time may be used as the initial fingerprint image in a subsequent repetition of the method, to generate another synthetic latent fingerprint image.
  • the generated synthetic latent fingerprint image will typically be of lower quality than the initial synthetic latent fingerprint image due to the additional transformation step, and so this method may be used to generate a series of synthetic latent fingerprint images which are progressively harder and harder to read.
  • the feature points may comprise minutiae points, the minutiae points being one or more of ridge endings, ridge bifurcations, short ridges, islands, spurs, crossovers, etc., as will be apparent to those skilled in the art.
  • the feature points may further comprise one or more of pores, dots, incipient ridges, and ridge edge protrusions, to further test and train the abilities of latent print analysts.
  • the background image may comprise at least one of texture and patterning.
  • the texture and patterning may be designed to increase the difficulty of identifying and classifying feature points of the generated synthetic latent fingerprint image, and/or may be designed to simulate a surface of an object upon which a fingerprint may be left and a latent fingerprint image developed.
  • Region pertains to a non-trivial area of the fingerprint image (and in practice is a region within a boundary defined by the fingerprint in the fingerprint image).
  • a transformation step is applied differentially to two non-overlapping regions, or only to one of two non-overlapping regions.
  • the regions may be neighbouring or distal within the image, and there may be further regions where the transformation step is further applied differently or to a different extent.
  • the application of the transformation may vary in the extent to which it is applied according to a smooth function (for example in the case of rotational smearing, where the amount of smearing varies smoothly with position), and it is only necessary to be able to identify two non-trivially sized regions (e.g. large enough for a feature point to at least originally have been identifiable therein) where the transformation has provided a different effect, or an effect to a different degree, between them.
  • Uniform changes such as changing the contrast ratio or brightness or position or overall orientation of the image do not provide this, and neither do uniformly adding image noise or a background pattern or linearly stretching or compressing the image.
  • rotational smearing does so, because the amount of smearing varies depending on proximity to an axis of rotation thereof.
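A minimal sketch of such a rotational smear, under stated assumptions (NumPy, nearest-neighbour sampling, illustrative angle and step counts): the image is averaged over several small rotations about the axis, so pixels far from the axis travel further per rotation step and are smeared more, while a pixel on the axis is unchanged, which is exactly the region-differential property described above.

```python
import numpy as np

def rotational_smear(image, center, max_angle_deg=6.0, steps=8):
    """Rotationally smear a greyscale image about `center` by averaging
    it over several small rotations; smear length grows with distance
    from the axis of rotation."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = center
    acc = np.zeros((h, w))
    for a in np.linspace(-max_angle_deg, max_angle_deg, steps):
        t = np.deg2rad(a)
        # rotate the sampling coordinates about the axis of rotation
        rx = cx + (xs - cx) * np.cos(t) - (ys - cy) * np.sin(t)
        ry = cy + (xs - cx) * np.sin(t) + (ys - cy) * np.cos(t)
        ix = np.clip(np.rint(rx), 0, w - 1).astype(int)
        iy = np.clip(np.rint(ry), 0, h - 1).astype(int)
        acc += image[iy, ix]
    return (acc / steps).astype(np.uint8)
```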
  • the invention includes the step of manually training a trainee latent fingerprint analyst to analyse latent fingerprints, this step including presenting the trainee with a synthetic latent fingerprint image as generated in the above method, requiring the trainee to attempt to identify and/or classify feature points in the latent fingerprint image, evaluating feature points identified by the trainee against the known feature points, and informing the trainee of feature points he or she failed to identify or classify correctly.
  • a computer program product to configure a computer to perform the method of generating synthetic latent fingerprints of the first aspect of the invention.
  • the computer program product may for example be embodied in a computer-readable storage medium storing the computer program for configuring and controlling a computer to perform the above-described methods, or the computer program product may be in the form of a digital network signal transmitted to the computer via a communications network such as the Internet.
  • a synthetic latent fingerprint image generator for generating synthetic latent fingerprint images for permitting the manual training of a fingerprint forensic analyst trainee, the manual training including the trainee identifying feature points of the synthetic latent fingerprint image and including assessment of a level of success of the trainee by comparison of positions of potential feature points identified by the trainee against the positions of the outputted feature points.
  • the synthetic latent fingerprint image generator comprising: an input for receiving an initial fingerprint image and positions of feature points within the initial fingerprint image;
  • a processor configured to transform the initial fingerprint image onto a background image to provide a synthetic latent fingerprint image this including transforming one region of the fingerprint image differently than a separate region of the fingerprint image; and to map the positions of the feature points within the initial fingerprint image to positions of the feature points within the synthetic latent fingerprint image, based upon the transforming; and an output for outputting the synthetic latent fingerprint image with-the positions of the feature points that are within the synthetic latent fingerprint image.
  • Fig. 1 shows a flow diagram of a method according to an embodiment of the invention;
  • Fig. 2 shows a schematic diagram of a computing device configured as a synthetic latent fingerprint image generator for performing the method of Fig. 1;
  • Fig. 3 shows a plain fingerprint image having feature points, the plain fingerprint image and feature points suitable for use as an initial fingerprint image and initial feature points in the method of Fig. 1;
  • Fig. 4 shows a schematic diagram of an orange bottle having a patterned label;
  • Fig. 5 shows a synthetic latent fingerprint image based on transforming the plain fingerprint image of Fig. 3 onto the patterned label of Fig. 4, according to a first type of transformation;
  • Fig. 6 shows a synthetic latent fingerprint image based on transforming the plain fingerprint image of Fig. 3 onto the patterned label of Fig. 4, according to a second type of transformation;
  • Fig. 7 shows a synthetic latent fingerprint image based on transforming the plain fingerprint image of Fig. 3 onto the patterned label of Fig. 4, according to a third type of transformation;
  • Fig. 8 shows a synthetic latent fingerprint image based on transforming the plain fingerprint image of Fig. 3 onto the patterned label of Fig. 4, according to a fourth type of transformation;
  • Fig. 9 shows a rotation and reduction of the plain fingerprint image of Fig. 3, the rotation and reduction forming a first step of fifth and sixth types of transformation;
  • Fig. 10 shows an image of a pressure map that is used in the fifth and sixth types of transformation;
  • Fig. 11 shows an intermediate image of the fifth and sixth types of transformation, the intermediate image generated by applying the pressure map of Fig. 10 to the rotated and reduced plain fingerprint image of Fig. 9;
  • Fig. 12 shows a synthetic latent fingerprint image based on superimposing the intermediate image of Fig. 11 onto the patterned label of Fig. 4, according to the fifth type of transformation;
  • Fig. 13 shows a further intermediate image of the sixth type of transformation, the further intermediate image generated by applying motion blur to the intermediate image of Fig. 11;
  • Fig. 14 shows a synthetic latent fingerprint image based on superimposing the further intermediate image of Fig. 13 onto the patterned label of Fig. 4, according to the sixth type of transformation.
  • the schematic diagram of Fig. 2 shows a computing device CD, for example a personal computer.
  • the computing device CD comprises a processor uP and a memory MRY for use by the processor.
  • the computing device CD has an input IP such as a connection to a network, disk drive, or scanner.
  • the computing device CD also has an output OP, such as a connection to a network, disk drive, or printer.
  • the computing device CD has been configured using a computer software program product to carry out the method shown in Fig. 1.
  • the computing device CD receives an initial fingerprint image from the input IP, for example by requesting it from a network.
  • a set of initial feature points of the initial fingerprint image is also received, either along with the initial fingerprint image, or by an experienced fingerprint analyst analysing the initial fingerprint image and entering the initial feature points into the computing device manually.
  • the processor uP of the computing device transforms the initial fingerprint image onto a background image that is retrieved from the memory MRY, including transforming one region of the fingerprint image differently than a separate region of the fingerprint image.
  • the background image is typically an image of a surface upon which fingerprints may be found, for example a surface of a door handle or a carry crate.
  • the processor uP maps the positions of the initial feature points to positions where the same feature points will be found in the latent fingerprint image.
  • the mapping is based on the operations that were carried out on the initial fingerprint image to transform it onto the background image.
  • the mapped feature points therefore designate the positions within the latent fingerprint image where the feature points of the initial fingerprint image will be found.
  • the processor uP outputs the transformed image, which is the synthetic latent fingerprint image, to the output OP, for example by sending the synthetic latent fingerprint image over a network.
  • the mapped feature points are also output to the output OP, along with the synthetic latent fingerprint image.
  • the synthetic latent fingerprint image can then be presented to trainee latent fingerprint analysts with an instruction to identify and classify the feature points within the synthetic latent fingerprint image, and with the mapped feature points constituting the correct answers to that instruction.
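The automated assessment of a trainee's answers against the mapped feature points can be sketched as follows. The greedy nearest-match strategy and the pixel tolerance are illustrative choices of this sketch, not the patent's own scoring scheme.

```python
import math

def score_trainee(identified, ground_truth, tol=10.0):
    """Greedily match trainee-identified (x, y) positions to the known
    mapped feature-point positions within `tol` pixels, and report how
    many were found, missed, or falsely reported."""
    remaining = list(ground_truth)
    hits = 0
    for px, py in identified:
        best, best_d = None, tol
        for i, (qx, qy) in enumerate(remaining):
            d = math.hypot(px - qx, py - qy)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            remaining.pop(best)  # each true point may be matched once
            hits += 1
    return {"hits": hits, "missed": len(remaining),
            "false_alarms": len(identified) - hits}
```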
  • the step 6 may be carried out before the step 4, or the steps 4 and 6 may be performed substantially simultaneously to one another.
  • the processor uP is configured to randomly select and apply differing types and degrees of transformations to the initial fingerprint image when transforming the initial fingerprint image onto the background image, such that the steps 4, 6, and 8 can be repeated again and again to keep producing new synthetic latent fingerprint images.
  • Fig. 3 shows a plain fingerprint image 10.
  • the plain fingerprint image 10 is a high-quality fingerprint image as may be obtained in a laboratory from a person willingly providing their fingerprint.
  • the plain fingerprint image covers substantially the whole of the person's fingertip, and is clearly defined on a uniform (typically white) background.
  • the feature 11 is the termination of a finger ridge approaching from the bottom right side of the image;
  • the feature 12 is the termination of a finger ridge approaching from the bottom left side of the image;
  • the feature 13 is the joining together of two finger ridges approaching from the bottom left side of the image;
  • the feature 14 is the termination of a finger ridge approaching from the bottom of the image;
  • the feature 15 is the termination of a finger ridge approaching from the bottom right of the image.
  • Each feature point comprises a position where the corresponding feature (minutia) is present in the image, and in this particular embodiment further comprises the type of the feature, so that trainee latent fingerprint analysts can be tested on identifying the feature type as well as identifying the feature position.
  • the plain fingerprint image 10 is suitable for use as an initial fingerprint image
  • the feature points marked on Fig. 3 are suitable for use as initial feature points.
  • Fig. 4 shows a diagram of an Orange drink bottle 20 having a patterned label 22.
  • the Orange bottle 20 simulates a typical surface upon which latent fingerprint images may be found, and may be used in conjunction with the plain fingerprint image 10 of Fig. 3 to synthesise synthetic latent fingerprint images according to the following illustrative embodiments of the invention.
  • the surface region 24 of the patterned label 22 is used as a background image 25 in the following four different types of transformations, which are described below with reference to Figs. 5 - 8.
  • Fig. 5 shows a synthetic latent fingerprint image 30 generated from the plain fingerprint image 10 and the background image 25 using a first type of transformation.
  • the transformation comprises slightly rotating the image 10 anticlockwise, reducing the rotated image in size to increase the difficulty of identifying features, faintening and removing the area 32 of the image to simulate light fingertip pressure at the area 32, and adding noise 34 to the upper fifth of the image to simulate imperfect latent fingerprint generation techniques. Finally, the image was superimposed on the background image 25 to give the synthetic latent fingerprint image 30.
  • the feature points of the plain fingerprint image 10 are mapped according to the same transformations as described above, and the positions of the features 14 and 15 in the synthetic latent fingerprint image 30 are marked on Fig. 5. Specifically, the feature points of features 14 and 15 in the plain fingerprint image are rotated about one another with the same rotation as in the above transformation, the feature points are moved closer to one another according to the reduction of the image size in the above transformation, and the feature points are defined with respect to the background image in the above superimposition.
  • the above transformation steps of faintening and noise addition clearly do not have any influence on the positions of the features, and so are not taken into account when performing the mapping.
  • the synthetic latent fingerprint image 30 could be presented to a trainee latent fingerprint analyst with an instruction to identify feature points, and the identified feature points could be compared against the mapped feature points above for training purposes, for example to evaluate the trainee latent fingerprint analyst's performance.
  • Fig. 6 shows a synthetic latent fingerprint image 40 generated from the plain fingerprint image 10 and the background image 25 using a second type of transformation.
  • the transformation comprises rotating the image 10 clockwise by around 180 degrees, removing the area 42 of the image to simulate a smearing of the fingertip across the surface, and adding noise 44 to the bottom left and upper right corners of the image to simulate imperfect latent fingerprint generation techniques. Finally, the image was superimposed on the background image 25 to give the synthetic latent fingerprint image 40.
  • Fig. 7 shows a synthetic latent fingerprint image 50 generated from the plain fingerprint image 10 and the background image 25 using a third type of transformation.
  • the transformation comprises rotating the image 10 clockwise by around 145 degrees, reducing the rotated image in size to increase the difficulty of identifying features, cropping away the area 52 of the image to simulate only a central region of the fingertip touching the surface, and adding noise 54 to the upper right corner of the image to simulate imperfect latent fingerprint generation techniques.
  • the image was superimposed on the background image 25 to give the synthetic latent fingerprint image 50.
  • the feature points of the plain fingerprint image 10 are mapped according to the same transformations, and the positions of the features 11, 13, and 15 in the synthetic latent fingerprint image 50 are marked on Fig. 7.
  • Fig. 8 shows a synthetic latent fingerprint image 60 generated from the plain fingerprint image 10 and the background image 25 using a fourth type of transformation.
  • the transformation comprises rotating the image 10 clockwise by around 170 degrees, reducing the rotated image in size to increase the difficulty of identifying features, compressing the right side of the rotated image but not the left side (a non-linear transform) to simulate compression and squidging together of the fingerprint ridges of the fingertip at the right side of the rotated image, and adding noise 64 to the upper right corner of the image to simulate imperfect latent fingerprint generation techniques.
  • the image was superimposed on the background image 25 to give the synthetic latent fingerprint image 60.
  • the feature points of the plain fingerprint image 10 are mapped according to the same transformations, and the positions of the features 11 and 15 in the synthetic latent fingerprint image 60 are marked on Fig. 8.
  • the plain fingerprint image 10 is rotated anticlockwise by around 115 degrees, and reduced in size to provide a rotated and reduced plain fingerprint image 70 that is shown in Fig. 9.
  • the feature points of the plain fingerprint image are also rotated and reduced by the same amounts, to provide rotated and reduced feature points that designate the positions of the features 11 - 15 of the rotated and reduced plain fingerprint image 70.
  • a pressure map 80 that is shown in Fig.10 is generated.
  • the pressure map defines pressures at which a simulated fingertip touches the surface region 24 of the Orange bottle 20, and comprises white areas where the fingertip presses against the surface region 24 with high pressure, and black areas where the fingertip either presses against the surface region 24 with lower pressure or does not press against the surface region 24 at all.
  • the pressure map would have more pressure graduations than just the two black and white pressure graduations shown in Fig. 10.
  • a greyscale pressure map with 256 pressure graduations may be used.
  • the pressure map 80 is applied to the rotated and reduced plain fingerprint image 70, which results in the intermediate fingerprint image 90 that is shown in Fig. 11.
  • the areas of the rotated and reduced plain fingerprint image 70 where the pressure map 80 indicates that the simulated fingertip contacts the surface region 24 with high pressure (white) are maintained in the intermediate fingerprint image 90, and the areas where the pressure map 80 indicates low pressure (black) are removed in the intermediate fingerprint image 90.
  • the rotated and reduced feature points remain unchanged by this step.
  • the features 11, 12, 14, and 15 that are within still-visible parts of the fingerprint image are indicated on Fig. 11.
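The pressure-map step above can be sketched as a per-pixel attenuation, assuming the greyscale (e.g. 256-graduation) map described earlier. This is a sketch of the idea, not the patent's exact procedure: full pressure keeps ridge intensity, zero pressure removes the print entirely, and intermediate greys scale it proportionally.

```python
import numpy as np

def apply_pressure_map(fingerprint, pressure):
    """Attenuate fingerprint ridge intensity by a greyscale pressure
    map: white (255) areas keep full intensity, black (0) areas remove
    the print, intermediate greys scale it proportionally."""
    out = fingerprint.astype(float) * (pressure.astype(float) / 255.0)
    return out.astype(np.uint8)
```

Note that, as stated in the text, this step does not move any feature points; it only removes or fades them, so the mapped positions are unchanged.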
  • the intermediate fingerprint image 90 is superimposed onto the background image 25, for example by adding the images together.
  • the rotated and reduced feature points are then defined with respect to the synthetic latent fingerprint image 100, by translating them from a co-ordinate system of the intermediate fingerprint image 90 to a co-ordinate system of the synthetic latent fingerprint image 100. No changes to the rotated and reduced feature points may be required if the background image was the same size and orientation as the intermediate fingerprint image 90, and therefore the same size and orientation as the synthetic latent fingerprint image 100.
  • the intermediate fingerprint image 90 is further transformed before superimposing it on the background image 25.
  • the intermediate fingerprint image 90 is processed by a motion blur filter to produce a further intermediate fingerprint image 110 that is shown in Fig. 13.
  • the motion blur filter applies a smearing effect to the intermediate fingerprint image 90, to simulate smearing of a fingerprint that is left behind on the surface region 24 of the Orange bottle 20, as the fingertip is withdrawn from the surface region 24 at a non-perpendicular angle to the surface region 24.
  • the feature points may be slightly translated in the direction of motion blur and in accordance with the amount of motion blur that is applied.
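A hedged sketch of this step, using a simple horizontal box blur as a stand-in for the motion blur filter (the blur direction, the box-filter form, and the half-length point shift are illustrative assumptions): the blur smears ridge intensity in the direction of simulated movement, and the feature points are translated by half the blur length in that same direction to track the smeared ridges.

```python
import numpy as np

def motion_blur_horizontal(image, length=7):
    """Minimal horizontal motion blur: each output pixel averages the
    pixel itself and the (length - 1) pixels to its left, smearing
    ridge detail rightwards."""
    h, w = image.shape
    pad = np.pad(image.astype(float), ((0, 0), (length - 1, 0)), mode="edge")
    out = np.zeros((h, w))
    for k in range(length):
        out += pad[:, k:k + w]
    return (out / length).astype(np.uint8)

def shift_points_for_blur(points, length=7):
    """Translate feature-point x positions by half the blur length in
    the smear direction, mirroring the mapping step in the text."""
    return [(x + (length - 1) / 2.0, y) for x, y in points]
```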
  • the further intermediate fingerprint image 110 is superimposed upon the background image 25 in just the same manner as described above in relation to the fifth type of transformation, producing the synthetic latent fingerprint image 120 that is shown in Fig. 14.

Abstract

The invention relates to a method of generating a synthetic latent fingerprint image and positions of feature points within the synthetic latent fingerprint image, comprising: receiving an initial fingerprint image and positions of feature points within the initial fingerprint image; transforming the initial fingerprint image onto a background image to provide a synthetic latent fingerprint image, this including transforming one region of the fingerprint image differently than a separate region of the fingerprint image; mapping the positions of the feature points within the initial fingerprint image to positions of the feature points within the synthetic latent fingerprint image, based upon the transforming; and outputting the synthetic latent fingerprint image and the positions of the feature points that are within the synthetic latent fingerprint image, to permit the manual training of a fingerprint forensic analyst trainee, the manual training including the trainee identifying feature points of the synthetic latent fingerprint image and including assessment of a level of success of the trainee by comparison of positions of potential feature points identified by the trainee against the positions of the outputted feature points.
PCT/GB2015/000156 2014-06-06 2015-06-03 Synthetic latent fingerprint images WO2015185883A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB1410063.0A GB2526874A (en) 2014-06-06 2014-06-06 Synthetic latent fingerprint images
GB1410063.0 2014-06-06

Publications (1)

Publication Number Publication Date
WO2015185883A1 true WO2015185883A1 (fr) 2015-12-10

Family

ID=51214837

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2015/000156 WO2015185883A1 (fr) 2015-06-03 Synthetic latent fingerprint images

Country Status (2)

Country Link
GB (1) GB2526874A (fr)
WO (1) WO2015185883A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3103045B1 (fr) * 2019-11-07 2021-10-15 Idemia Identity & Security France Method for augmenting a database of training images representing a print on a background by means of a generative adversarial network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101149787B (zh) * 2006-09-20 2010-09-08 中国科学院自动化研究所 Fingerprint synthesis method and system based on an orientation field model and Gabor filters

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
CRYSTAL M. RODRIGUEZ ET AL: "Introducing a Semi-Automatic Method to Simulate Large Numbers of Forensic Fingermarks for Research on Fingerprint Identification", JOURNAL OF FORENSIC SCIENCES, vol. 57, no. 2, 21 March 2012 (2012-03-21), pages 334 - 342, XP055203118, ISSN: 0022-1198, DOI: 10.1111/j.1556-4029.2011.01950.x *
D. MALTONI: "Handbook of Fingerprint Recognition", 1 February 2009, SPRINGER-VERLAG LONDON, ISBN: 978-1-84882-253-5, pages: 271 - 301, XP002742445, DOI: 10.1007/978-1-84882-254-2 *
RAFFAELE CAPPELLI: "SFinGe: an Approach to Synthetic Fingerprint Generation", INTERNATIONAL WORKSHOP ON BIOMETRIC TECHNOLOGIES, 1 June 2012 (2012-06-01), pages 147 - 154, XP055203149, Retrieved from the Internet <URL:https://math.la.asu.edu/~dieter/courses/Math_Modeling_2013/Cappelli_2004.pdf> [retrieved on 20150717] *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018201847A1 (fr) * 2017-05-03 2018-11-08 Oppo广东移动通信有限公司 Optical fingerprint identification method and related product
CN108307041A (zh) * 2017-12-26 2018-07-20 努比亚技术有限公司 Method for obtaining an operation instruction according to a fingerprint, mobile terminal, and storage medium
CN111639521A (zh) * 2020-04-14 2020-09-08 北京迈格威科技有限公司 Fingerprint synthesis method and apparatus, electronic device, and computer-readable storage medium
CN111639521B (zh) * 2020-04-14 2023-12-01 天津极豪科技有限公司 Fingerprint synthesis method and apparatus, electronic device, and computer-readable storage medium

Also Published As

Publication number Publication date
GB201410063D0 (en) 2014-07-16
GB2526874A (en) 2015-12-09

Similar Documents

Publication Publication Date Title
US10685224B1 (en) Unsupervised removal of text from form images
Dominio et al. Combining multiple depth-based descriptors for hand gesture recognition
Hildebrandt et al. Benchmarking face morphing forgery detection: Application of stirtrace for impact simulation of different processing steps
Qidwai et al. Digital image processing: an algorithmic approach with MATLAB
Davenport et al. Joint manifolds for data fusion
TWI532011B (zh) Image processing apparatus, image processing method, program, print medium, and recording medium
Lei et al. Scale insensitive and focus driven mobile screen defect detection in industry
US10990845B2 (en) Method of augmented authentification of a material subject
US9171226B2 (en) Image matching using subspace-based discrete transform encoded local binary patterns
DE102017009276A1 (de) Generating a three-dimensional model from a scanned object
KR20120047991A (ko) Automatic identification of fingerprint inpainting target areas
WO2015185883A1 (fr) Synthetic latent fingerprint images
CN105740787B (zh) Face recognition method based on a multi-kernel discriminative colour space
Manjunatha et al. Deep learning-based technique for image tamper detection
Priesnitz et al. Deep learning-based semantic segmentation for touchless fingerprint recognition
Priesnitz et al. Syncolfinger: Synthetic contactless fingerprint generator
CN110110110A (zh) Image-based image search method and apparatus, electronic device, and storage medium
Warif et al. A comprehensive evaluation procedure for copy-move forgery detection methods: results from a systematic review
US20100322486A1 (en) Hand-based gender classification
CN113963193A (zh) Method, apparatus, and storage medium for generating a vehicle-body colour classification model
CN117237736A (zh) Daqu quality inspection method based on machine vision and deep learning
Makihara et al. Grasp pose detection for deformable daily items by pix2stiffness estimation
CN105915883A (zh) Blind-reference stereoscopic image quality assessment method based on extreme learning and binocular fusion
CN115546796A (zh) Contactless data acquisition method and system based on visual computing
KR101766787B1 (ko) Image correction method using deep-learning analysis on a GPU device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15728074

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15728074

Country of ref document: EP

Kind code of ref document: A1