WO2016032410A1 - Intelligent system for photorealistic facial composite production from only fingerprint - Google Patents


Info

Publication number
WO2016032410A1
Authority
WO
WIPO (PCT)
Prior art keywords
face
fingerprint
module
minutiae
images
Prior art date
Application number
PCT/TR2015/000299
Other languages
French (fr)
Inventor
Seref SAGIROGLU
Uraz YAVANOGLU
Original Assignee
Sagiroglu Seref
Yavanoglu Uraz
Priority date
Filing date
Publication date
Application filed by Sagiroglu Seref, Yavanoglu Uraz filed Critical Sagiroglu Seref
Publication of WO2016032410A1 publication Critical patent/WO2016032410A1/en

Classifications

    • G06V40/1347 — Fingerprints or palmprints: preprocessing; feature extraction
    • G06V40/1353 — Extracting features related to minutiae or pores
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/1371 — Matching features related to minutiae or pores
    • G06V40/165 — Human faces: detection, localisation and normalisation using facial parts and geometric relationships
    • G06V40/155 — Use of biometric patterns for forensic purposes

Definitions

  • Processing of data: 1. Reviewing the files of the deceased persons held under police supervision, and sorting and classifying the same,
  • the x and y coordinates, the theta (θ) angle and the helical pattern of the bifurcation and termination points with respect to the core/center point of the fingerprint were taken into account.
  • the meaningful structure of the fingerprint vector comprises the union set of the first n minutiae values, taking the core as the middle point in particular.
  • An exemplary FP dataset was generated for the ANN (artificial neural network) model according to the invention. From the first 50 minutiae points, a vector with 150 members was formed. In Table 1, some values are given for this vector, where the angle, x and y information were sampled for the minutiae of 8 fingerprints. The angular values were converted from radians into degrees.
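As an illustration of the fixed-length vector described above, the following Python sketch flattens the first 50 minutiae of a fingerprint into a 150-member vector and converts the angles from radians into degrees. The function name and the zero-padding of short prints are illustrative assumptions, not details taken from the patent.

```python
import math

def minutiae_to_vector(minutiae, n_points=50):
    """Flatten the first n_points minutiae into a fixed-length vector.

    `minutiae` is a list of (x, y, theta) tuples with theta in radians,
    assumed to be ordered by distance from the core point. The result has
    length 3 * n_points (150 for n_points=50); prints with fewer minutiae
    are zero-padded so every fingerprint yields the same vector length.
    """
    vector = []
    for x, y, theta in minutiae[:n_points]:
        vector.extend([x, y, math.degrees(theta)])  # radians -> degrees
    vector.extend([0.0] * (3 * n_points - len(vector)))  # pad short prints
    return vector
```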
  • the face dataset is based on the principle of classification of a face feature by way of interpretation from the perspective of an expert.
  • the biometric information about the individuals is divided into categories by means of a proposed classifier, thereby reducing the information to a level perceivable to the human eye.
  • the face profiles of the criminals obtained from the criminal databases are expressed with a smaller number of minutiae vectors for the purpose of photorealistic transformation.
  • A chart of mathematical representation was formed in Table 2 as an example for the generation of the face dataset and the application of the same to the ANN model as an output. Each numerical value in the table constitutes the output of a different ANN model. This structure was determined based on the sorting form used in the criminal laboratories.
  • the numerical conversions were made for the features as follows: eyebrow structure, straight = 0, curved = 1; eye structure, small = 0, medium = 1, big = 2; nose structure, small = 0, medium = 1, big = 2; face structure, long = 0, round = 1; and chin structure, round = 0, angled = 1.
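The numerical conversions above can be captured in a small lookup table. A minimal Python sketch follows; the dictionary layout and function name are illustrative assumptions.

```python
# Encoding tables mirroring the numerical conversions described above.
FACE_FEATURE_CODES = {
    "eyebrow": {"straight": 0, "curved": 1},
    "eye":     {"small": 0, "medium": 1, "big": 2},
    "nose":    {"small": 0, "medium": 1, "big": 2},
    "face":    {"long": 0, "round": 1},
    "chin":    {"round": 0, "angled": 1},
}

def encode_face_profile(profile):
    """Convert an expert-labelled face profile into its numeric output row."""
    return [FACE_FEATURE_CODES[feature][label] for feature, label in profile.items()]
```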
  • a network structure with single output was formed, where the FP dataset was applied as the input and the face dataset was applied as the output.
  • Independent models and network structures were formed, capable of separately estimating each face feature for the face dataset. These were formed as the ANN structures capable of performing the following predictions:
  • ANN Model-1: the FP dataset with 150 inputs was used as the input and the face dataset with 2 types of eyebrow structure, namely the straight and curved, was used as the output.
  • the root mean square error was found as 4.28 × 10⁻²⁵ according to the iteration for the training set with 360 persons. It was demonstrated that the eyebrow types corresponding to the fingerprint vectors could be correctly estimated with 75% success in the tests conducted with a test set containing 40 persons.
  • ANN Model-2: the FP dataset with 150 inputs was used as the input and the face dataset with 3 types of eye structure, namely the small, medium and big, was used as the output.
  • the root mean square error was found as 1.27 × 10⁻²⁰ according to the iteration for the training set with 360 persons. It was demonstrated that the eye types corresponding to the fingerprint vectors could be correctly estimated with 60% success in the tests conducted with a test set containing 40 persons.
  • ANN Model-3: the FP dataset with 150 inputs was used as the input and the face dataset with 3 types of nose structure, namely the small, medium and big, was used as the output.
  • the root mean square error was found as 2.33 × 10⁻²⁴ according to the iteration for the training set with 360 persons. It was demonstrated that the nose types corresponding to the fingerprint vectors could be correctly estimated with 70% success in the tests conducted with a test set containing 40 persons.
  • ANN Model-4: the FP dataset with 150 inputs was used as the input and the face dataset with 2 types of face structure, namely the long and round, was used as the output.
  • the root mean square error was found as 3.01 × 10⁻²³ according to the iteration for the training set with 360 persons. It was demonstrated that the face types corresponding to the fingerprint vectors could be correctly estimated with 60% success in the tests conducted with a test set containing 40 persons.
  • ANN Model-5: the FP dataset with 150 inputs was used as the input and the face dataset with 2 types of chin structure, namely the round and angled, was used as the output.
  • the results of the first fold of the model, which was tested to be internally consistent via K-Fold Cross-Validation, were presented.
  • the root mean square error was found as 4.21 × 10⁻²⁴ according to the iteration for the training set with 360 persons. It was demonstrated that the chin types corresponding to the fingerprint vectors could be correctly estimated with 75% success in the tests conducted with a test set containing 40 persons.
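The per-feature ANN models above share one shape: 150 fingerprint inputs and one face-feature output. The following numpy sketch trains a tiny single-hidden-layer network of that shape on synthetic stand-in data and tracks the root mean square error per iteration. The hidden-layer size, learning rate and synthetic labels are assumptions made for illustration; no claim is made that this reproduces the reported error values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 360 "fingerprint vectors" of length 150 and a
# binary face-feature label (e.g. eyebrow: straight = 0, curved = 1).
X = rng.normal(size=(360, 150))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer: 150 inputs -> 16 hidden units -> 1 output.
W1 = rng.normal(scale=0.1, size=(150, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 1));   b2 = np.zeros(1)

losses, lr = [], 0.5
for _ in range(200):
    h = sigmoid(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # predicted feature code
    losses.append(float(np.sqrt(np.mean((out - y) ** 2))))  # RMSE per iteration
    # Backpropagate the squared-error loss through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X);   b1 -= lr * d_h.mean(axis=0)
```

The recorded `losses` curve plays the role of the per-iteration RMSE figures quoted for each model.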
  • the classification in Table 2 was performed for the chin and the chin structures were divided into two groups, namely the angled chin and round chin.
  • the K-NN classification process was applied for the dataset with 2 classes.
  • the error values obtained as a result of classification were indicated. These are MAE (Mean Absolute Error) values that indicate the degree of mean absolute deviation of the estimated values from the actual values. This value shows dependency on the minimum and maximum observation values in the dataset, and therefore, the work should not be based solely on this value. Since the values obtained from MAE remain dependent on the minimum and maximum values, the value of mean absolute percentage error was used in comparing the estimation models.
  • MAE Mean Absolute Error
  • MAPE Mean Absolute Percentage Error
  • NRMSE Normalized Root Mean Square Error
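The three error measures can be stated compactly. In the sketch below, NRMSE is normalized by the observed value range, which is one common convention and an assumption here, as the patent does not specify the normalization.

```python
import numpy as np

def mae(actual, predicted):
    """Mean Absolute Error: mean absolute deviation from the actual values."""
    return float(np.mean(np.abs(np.asarray(actual, float) - np.asarray(predicted, float))))

def mape(actual, predicted):
    """Mean Absolute Percentage Error; actual values must be non-zero."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((actual - predicted) / actual)) * 100)

def nrmse(actual, predicted):
    """Root mean square error normalized by the observed value range."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return float(rmse / (actual.max() - actual.min()))
```

Because MAE depends on the scale of the observations, MAPE is the scale-free measure used above when comparing the estimation models.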
  • the eye model was divided into 3 classes, namely the small, medium and big, according to the classification made in Table 3.
  • the class values determined by the k-nearest neighbor classifier are listed against the actual class values for each person in the persons column.
  • a 350×151 array was formed for the training set and a 50×51 array was formed for the test dataset, and these were evaluated independently of each other.
  • all three distance criteria employed in the k-nearest neighbor technique, namely Euclidean, Mahalanobis and Minkowski, were used.
  • 6 different k values (1, 5, 10, 25, 50, 100) were used in order to find the correct value for the analysis.
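The k-nearest neighbor classification with the three distance criteria can be sketched from scratch as below. This is an illustrative implementation, not the patent's; the Minkowski order p and the default k are assumptions.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=5, metric="euclidean", VI=None, p=3):
    """Classify one sample by majority vote among its k nearest neighbours.

    metric is one of "euclidean", "minkowski" (order p) or "mahalanobis"
    (VI is the inverse covariance matrix of the training data; if omitted
    it is estimated from X_train).
    """
    diff = X_train - x
    if metric == "euclidean":
        d = np.sqrt((diff ** 2).sum(axis=1))
    elif metric == "minkowski":
        d = (np.abs(diff) ** p).sum(axis=1) ** (1.0 / p)
    elif metric == "mahalanobis":
        if VI is None:
            VI = np.linalg.inv(np.cov(X_train, rowvar=False))
        d = np.sqrt(np.einsum("ij,jk,ik->i", diff, VI, diff))
    else:
        raise ValueError(metric)
    nearest = np.argsort(d)[:k]                     # indices of the k closest
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]                # majority vote
```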
  • the FP minutiae comprise the values of X, Y and angle.
  • SVM Model-1 expresses each fingerprint with a 2-dimensional vector containing the average of the X and Y coordinate values and the average of the angle values.
  • the chin features of the individuals were applied as the output to the system corresponding to the 2-dimensional training set.
  • the FP minutiae comprise the values of X, Y and angle.
  • SVM Model-2 expresses each fingerprint with a 3-dimensional vector containing the average of the X coordinate values, the average of the Y coordinate values and the average of the angle values.
  • the chin features of the individuals were applied as the output to the system corresponding to the 3-dimensional training set.
  • the FP minutiae comprise the values of X, Y and angle.
  • SVM Model-3 expresses each fingerprint with a 30-dimensional vector containing the first 30 X coordinate values, Y coordinate values and angle values.
  • the chin features of the individuals were applied as the output to the system corresponding to the 30-dimensional training set.
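The three SVM feature representations can be sketched as below. Note one ambiguity: the patent calls Model-3's vector 30-dimensional while listing the first 30 X, Y and angle values; the sketch flattens the first n minutiae rows (3n components) and treats that reading as an assumption. An SVM classifier would then be trained on these vectors with the chin codes as outputs.

```python
import numpy as np

def svm_features(minutiae, model=1, n=30):
    """Build the per-fingerprint feature vector for the three SVM models.

    `minutiae` is an (m, 3) array of (x, y, angle) rows.
    model 1 -> 2-D: mean of the x/y coordinate values, mean angle
    model 2 -> 3-D: mean x, mean y, mean angle
    model 3 -> first n rows of (x, y, angle), flattened and zero-padded
    """
    m = np.asarray(minutiae, dtype=float)
    if model == 1:
        return np.array([m[:, :2].mean(), m[:, 2].mean()])
    if model == 2:
        return m.mean(axis=0)
    if model == 3:
        flat = m[:n].ravel()
        return np.pad(flat, (0, 3 * n - flat.size))
    raise ValueError(model)
```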
  • the developed system is a body of software made for estimating the face features from the fingerprints in line with the models generated.
  • the system, the flow chart of which is provided in Figure 3, comprises different modules that run in harmony.
  • the system for generating the face appearance based on the fingerprint consists of the following process steps:
  • Biometric Data Collection Module (BDCM)
  • the biometric data collection module is the first point of input for the fingerprint and face images that will be applied as input to the system from different sources.
  • the module was designed to be versatile so that the data may be collected both from the digital scanners (fingerprint reader and camera) and from the paper media provided by the criminal sources.
  • the outputs of the data collection module proposed within the scope of the invention are applied as input to the experimental system by way of processing the fingerprints of the criminals printed on paper and the photos taken at the instant of arrest.
  • Preprocessing module This is the module in which digital image processing operations such as cleaning, filtering, rotating, noise reduction and digitizing are applied to the fingerprint and face images to be applied as input to the system.
  • the module the detailed flow chart of which is provided in Figure 1, generates as output the processed fingerprints and processed photographs for use in the vector transformations.
  • Vector transformation module This is the part where the digital images, which have been cleaned and made ready for vector transformations via various image processing techniques in the preprocessing module, are transferred into mathematical expressions by means of the proposed models. In other words, the module makes available within the system the whole of the values dependent on the x, y and theta coordinates for all the values obtained from the fingerprint sampling. Once the outline image, calculated as an output of the edge detection algorithms, is found, the processed fingerprint images are separated into the minutiae points by the vector transformation module. In Table 4, exemplary minutiae properties are shown for a fingerprint.
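The patent does not specify how the vector transformation module extracts minutiae from the processed print. A widely used approach, offered here only as an illustrative assumption, is the crossing-number method applied to a thinned binary ridge image:

```python
import numpy as np

def crossing_number_minutiae(skeleton):
    """Find minutiae on a thinned (1-pixel-wide) binary ridge image.

    For each ridge pixel, the crossing number is half the number of 0/1
    transitions among its 8 neighbours taken in circular order:
    CN == 1 marks a ridge termination, CN == 3 a bifurcation.
    """
    img = np.asarray(skeleton, dtype=int)
    terminations, bifurcations = [], []
    # Offsets of the 8 neighbours in circular order around a pixel.
    ring = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            if not img[r, c]:
                continue
            nb = [img[r + dr, c + dc] for dr, dc in ring]
            cn = sum(abs(nb[i] - nb[(i + 1) % 8]) for i in range(8)) // 2
            if cn == 1:
                terminations.append((c, r))   # (x, y) coordinates
            elif cn == 3:
                bifurcations.append((c, r))
    return terminations, bifurcations
```

Each detected point, together with a ridge orientation estimate, would supply the (x, y, theta) triples consumed by the vector models described earlier.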
  • Classifier module This is the part capable of performing scalable operations and forming the infrastructure for separating the data of the proposed system and/or converting the same into formats able to be applied to new systems.
  • the module aims to make mathematical sense of the data for the fingerprints coming from the minutiae separator, while solving, via the manual processing approach, the problems likely to be encountered when developing a method adaptable to non-standard face images obtained from different sources.
  • the classifier module, the flow chart of which is provided in Figure 5, comprises two portions, namely the fingerprint classifier and the face classifier, proposed for different purposes.
  • the proposed system has a learning architecture.
  • the fingerprint and face expressions, which are the outputs of the classifier, are trained with the artificial neural networks to find the relationships between the models.
  • an intelligent system has been designed, which contains 5 different artificial neural network models for the estimation of the face feature properties classified according to the expert opinions based on the mathematically expressed fingerprint minutiae properties.
  • Face synthesis module This module uses the outputs sampled as a result of application of the mathematical minutiae properties, which the system has not encountered before, to the artificial intelligence module. The attempt is made to generate a model from the face feature models, which correspond to the classifier and require expert opinion, by way of estimation of the face features corresponding to a fingerprint, such as the round or pointed chin, small, medium or big eye, etc.
  • Query module This is a query page where it is possible to query the generated synthetic faces in the databases and to trace the offender from the police sketch without the fingerprint. In other words, a comparison is made between the existing face image and the face image generated by the system. It is proposed that the query module may be used particularly for the partial fingerprints taken from the crime scene, for the persons not registered in AFIS, in the cases where there is no eyewitness and, most importantly, with the devices such as CSC (City Surveillance Cameras) capable of identification based not on the fingerprints but on the face images.
  • CSC City Surveillance Cameras
  • the software containing a great number of sub-drawings of the face features such as lips, eyebrows, eyes, hair, forehead, etc. and drawings of the face features belonging to races from different geographical areas is also used within the system. This includes the software for analyzing the minutiae, developed with the ability to process a single fingerprint or n fingerprints simultaneously and to generate the minutiae vectors based on the proposed models, as well as the methods used for identifying the appearance (FACES).
  • An architectural structure (service) was formed for the purpose of generating an estimated face model corresponding to a fingerprint that is input, following the processes that occur in a fully automated manner.
  • the developed system is a body of software made in order to estimate the face features based on the fingerprints in line with the proposed models.
  • the system which is our invention is software that accommodates an intelligent architecture and provides automatic outputs according to the applied inputs, using only the jointly used databases, without a web connection.
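The modules above form a fixed chain from raw print to synthetic face. The following schematic Python sketch shows that orchestration; every stage is a named placeholder standing in for the corresponding module, not the patent's code.

```python
def run_pipeline(raw_print, modules):
    """Pass a raw fingerprint through the module chain in order."""
    data = raw_print
    for module in modules:
        data = module(data)
    return data

# Placeholder stages for the modules described above: preprocessing,
# vector transformation, classifier + ANN estimation, face synthesis.
pipeline = [
    lambda img: {"clean_print": img},                        # preprocessing module
    lambda d: {**d, "vector": [0.0] * 150},                  # vector transformation
    lambda d: {**d, "features": {"eyebrow": 1, "chin": 0}},  # classifier + ANN models
    lambda d: {**d, "face": "synthetic_face.png"},           # face synthesis module
]
```

Calling `run_pipeline("latent.png", pipeline)` returns a dictionary whose "face" entry stands for the synthesized composite that the query module would then match against databases.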

Abstract

Our invention relates to an intelligent system, which synthesizes photorealistic faces from fingerprints for use in criminal identifications, wherein said system produces the face images in a photorealistic manner, also making use of programs containing a great number of sub-drawings of the face features such as the lips, eyebrows, eyes, hair, forehead, face perimeter, etc., by way of forming with the fingerprints a whole of the values based on the coordinates that correspond to the x, y and theta points, and wherein said system finds the face appearance that is criminally closest to the real in the quickest manner.

Description

DESCRIPTION
INTELLIGENT SYSTEM FOR PHOTOREALISTIC FACIAL COMPOSITE PRODUCTION FROM ONLY FINGERPRINT
Relevant Technical Field:
Our invention relates to an intelligent system, which synthesizes a photorealistic face from a fingerprint, for use in the criminological identifications.
State of the Art:
The fingerprint is one of the invariable and unchangeable unique biometric properties of a person, which is almost impossible to imitate.
For this reason, the fingerprint minutiae began to be used in many of today's security-related technologies. Besides, the fingerprint has been in use for many years also for the purpose of identification of criminals. The studies conducted revealed that associations can be made between certain biometric properties. Said associations could be used in many cases, notably in criminal events.
The biometric data of fingerprint and face have the quality of evidence according to national and international laws. The fingerprints obtained from a crime scene or the police sketches developed in line with the statements of the eyewitnesses are the most important solutions used by the law enforcement officers for fighting the unidentified crimes. The use of such important data as evidence against the persons who are not registered in archives or in databases such as AFIS (Automated Fingerprint Identification System) is possible only in cases where the offenders may be predicted. It becomes difficult to identify the offenders especially in cases where the fingerprint is available but there is no eyewitness or in cases where the eyewitness fails to provide satisfactory information for developing the police sketch.
The appearances formed based on what the eyewitnesses see, remember or tell constitute another limitation and difficulty that is encountered.
According to the ordinary state of the art, the commercial facial recognition software generates vector patterns having the property of "trade secret" for the efforts of developing approximate face appearances based on the fingerprints, and the facial recognition algorithms recommended in the literature generate results intended for performing identification from among "n" faces using a minimum number of minutiae, rather than for repeated estimation of the face image based on the vector; neither provides outcomes that make sense in terms of criminology. When the approaches taken by the law enforcement officers in solving criminal events are considered, the appearances formed based on the statements of the eyewitnesses present at the crime scene are used, instead of ones based on the point space that constitutes the face. As a result, there is a need for novel approaches for identifying a general model for each face feature and for modeling the relationship between the forensic fingerprints and these features.
According to the ordinary state of the art, there are inventions that carry out identification with the available biometric data. The invention no. 2012/07018 entitled "Intelligent System Recognizing the Gender from the Fingerprint" is such an example; however, said invention is intended for performing the gender estimation based on the fingerprint data. According to another invention with no. 2011/12255, entitled "Smart system capable of personal identification using the biometric properties", it is possible to form a two-dimensional face sketch obtained by way of joining the facial points provided by the system. Here, the face image corresponds to a caricature-like image.
Object of the Invention:
The object of the invention is to investigate the biometric properties, form the transformational models, turn the formed models into an application able to be used in criminology and provide said application to the security units, obtain photorealistic solutions and find partial or complete solutions for the problems mentioned above. Another object of the invention is to form a body of models based on the coordinates corresponding to the fingerprint instead of the point estimations for providing the methods of identifying the appearance, for providing in a photorealistic manner the face images that are based on the lips, eyebrows, eyes, face perimeter, forehead and similar features and for obtaining in a photorealistic manner the face appearance that is criminally closest to the real in the most rapid way via a fully automated system. Another object of the invention is to develop a novel method via which it would become easy to trace the offender based on the fingerprint.
Another object of the invention is to reveal the relationship between the fingerprint and the face and develop a system generating photorealistic results.
Another object of the invention is to establish a system generating the results with the quality of police sketch. Another object of the invention is to develop a system that could gain acceptance owing to the association with the photorealistic face images generated by the commercial software used in identifying the appearance.
Description of the Figures:
Figure 1. Flow chart of the preprocessing module
Figure 2. Design chart of the proposed ANN model
Figure 3. Flow chart of the proposed system for synthesizing face from the fingerprint
Figure 4. Flow chart of the biometric data collection module
Figure 5. Flow chart of the dataset generation and intelligent system module
Description of the Invention
The invention relates to a fingerprint and face synthesis system developed in order to form a body of models based on the coordinates corresponding to the fingerprint for providing in a photorealistic manner the face images of the members by the use of the lips, eyebrows, eyes, region around the eyes, hair, forehead and similar features and obtaining the face appearance that is criminally closest to the real in the most rapid manner via a fully automated system.
In order to form the synthesis system, the datasets are generated, which are obtained from the fingerprint and face images and are capable of operating in harmony with the system. For the generation of these datasets, the following are performed:
Developing new techniques for extracting the minutiae vectors from the fingerprints, improving the vectors and enhancing their identification performance,
Developing new techniques for determining the facial sizes such as eyes, nose, mouth, face perimeter from the face images,
Generating the fingerprint and face from the vector patterns,
Generating the real-time FP (fingerprint) and face images obtained from the pictures and matching these with the photorealistic images,
Classifying the fingerprints and facial organs,
Generating the input and output datasets for the processes of expressing the photos of the individuals with photorealistic models and classifying the same,
Establishing the intelligent system model for the establishment of the relationship between the fingerprints and the face vectors.
The ability of a fingerprint algorithm to function smoothly depends first of all on the quality of the image taken. In this regard, it is necessary to evaluate the obtained image according to certain criteria before any work is performed on the image with the algorithms for generating the minutiae. It is necessary to clear, correct and improve the noise-generating factors on the image, such as wounds, wet or oily fingerprints or improperly taken fingerprints, by means of certain algorithms.
After the improvement steps, the minutiae are identified on the fingerprint (FP) in order to express the fingerprint as a vector pattern. After the FPs are separated into their minutiae, they are subjected to preprocessing via suitable methods; these images are saved and are then used together with the comparison algorithms in line with the planned objectives.
After the minutiae points are determined, one of the similarity-based, minutiae-based and non-minutiae-based comparison techniques is employed for matching the fingerprint vectors.
The comparison techniques perform mathematical matching in certain respects. The fingerprint vectors do not have a constant length or identical angular information. Displacement, rotation, partial overlap, non-linear distortion, pressure and skin condition, noise and minutiae extraction errors may occur when a fingerprint is taken, and these may influence the quality of the minutiae. A great number of algorithms are used to match a fingerprint added to the databases against the other fingerprints in 1:N or N:N relations. According to the present invention, the angle and position of the minutiae vectors belonging to the fingerprints are determined automatically by means of the software developed.
In the face synthesis system according to the invention, the minutiae-based matching methods are used for generating the fingerprint vectors.
In the minutiae-based matching method, the matching is computer-based and it is the method most frequently used by fingerprint experts. The method is devised to solve the problem of matching the variable-size minutiae point sets obtained from fingerprints with the existing pattern files in databases such as AFIS so as to obtain the maximum number of correspondences.
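A toy version of minutiae-based matching can make the idea concrete: a greedy pairing under made-up distance and angle tolerances. This is an illustrative sketch only, not the patent's or AFIS's actual algorithm, and every name and tolerance below is an assumption.

```python
import math

def angle_diff(a, b):
    """Smallest absolute difference between two angles in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def match_score(probe, gallery, dist_tol=15.0, ang_tol=20.0):
    """Greedy minutiae pairing: fraction of minutiae that can be paired.

    probe, gallery: lists of (x, y, theta_degrees) tuples.
    Returns a score in [0, 1]; 1.0 means every minutia found a partner.
    """
    unused = list(gallery)
    paired = 0
    for (x, y, t) in probe:
        best, best_d = None, None
        for g in unused:
            d = math.hypot(x - g[0], y - g[1])
            if d <= dist_tol and angle_diff(t, g[2]) <= ang_tol:
                if best is None or d < best_d:
                    best, best_d = g, d
        if best is not None:
            unused.remove(best)   # each gallery minutia pairs at most once
            paired += 1
    return paired / max(len(probe), len(gallery))

# A print compared with a slightly perturbed copy of itself scores high.
fp = [(10, 10, 30.0), (40, 25, 120.0), (70, 60, 200.0)]
noisy = [(12, 11, 35.0), (41, 23, 115.0), (68, 62, 205.0)]
print(match_score(fp, noisy))              # 1.0 — all three minutiae pair up
print(match_score(fp, [(200, 200, 0.0)]))  # 0.0 — nothing pairs
```

Real matchers additionally compensate for the displacement, rotation and distortion effects listed above before pairing; the sketch assumes pre-aligned prints.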
In order to identify the sub-models of a face based on the minutiae of a fingerprint, vector models were formed with a fixed length corresponding to the common numerical value of the vector quantities obtainable from different impressions of a fingerprint, from crime scenes or from offenders. In order to form these vector models, the processes provided below were performed for the purposes of: identifying the relationship between the fingerprint minutiae (angle, the points x and y) and the face features (nose size, face length, etc.), as well as the relationship between the appearances obtained from the fingerprints and the appearances identified by the experts based on the appearance features of the persons; generating the eye, nose, face, eyebrow and chin features in the criminal appearance identification form by generating the minutiae (angle, the points x and y) for latent fingerprints obtainable from a crime scene, without the need for eyewitnesses; using the appearance identified from the fingerprint minutiae (angle, the points x and y) in tracing the offender based on the evidence; and identifying the photorealistic appearance from the fingerprint minutiae (angle, the points x and y).
1. Planning and preliminary preparations
2. Collection and classification of the fingerprint and face images
3. Extraction, analysis and preprocessing of the data
4. Generation of the training and test datasets
3.1 Generation of FP datasets
3.2 Generation of face model datasets
5. Formation of ANN structure
6. Trainings and Tests for ANN (artificial neural network) models
7. Matching the photorealistic appearances with training and test processes
1. Planning and Preliminary Preparations
In the planning, which is the first step in developing a system capable of estimating in a photorealistic manner the face features based on the fingerprint minutiae vectors, new approaches are taken for processing, and subjecting to vector transformations, the fingerprint and face images obtained by the law enforcement officers. Records on the methods used for identifying the appearance (FACES) and the appearance identification reports were obtained from the law enforcement officers to assist the identification, and these constituted the face reference points for this study.

2. Data Analysis and Processing
It is necessary to analyze, classify and process the obtained data in order to determine which FPs (fingerprints) match (or do not match) with which face feature .
Processing of data: 1. Reviewing the files of the dead persons under the police supervision and sorting and classifying the same,
2. Transferring the FP taking forms available in printed form in appropriate paper files to the electronic environment with a quality of 600 DPI, via a scanner, in case the data are not available in electronic environment,
3. Dividing the FP forms, which are transferred to electronic environment, into 20 pieces, 10 being straight print FP and 10 being round print FP,
4. Removing the portions other than the fingerprint (text on the print, signature on the print, double print on the same point, printing on the form line, etc.) from the divided fingerprints via image processing programs in preprocessing step,
5. Generating the minutiae points from the separated FP images using the software developed,
6. Converting the minutiae points extracted from the FP images into vectors via the proposed models,
7. Transferring the images in the files that include FP to the electronic environment with a quality of 600 DPI, via a scanner,
8. Rendering the scanned face images free from noise by means of the image processing programs in the preprocessing step,
9. Selecting the FP and face images with acceptable quality,
10. Classifying the cleared face images with the FP and face image card obtained from the multiple biometric databases to form the minutiae vectors,
11. Bringing the obtained training and test data into a state able to be used in ANNs,
12. Forming the proper ANN model structures, determining the ANN model parameters,
13. Completing the training and test processes for the ANN models,
14. Converting the results of the ANN structures into a form able to be expressed with photorealistic images
The data processed by performing the above process steps were used in the formation of the datasets by way of classification with the forms containing the appearance properties used in the criminal units and with the methods proposed in the literature, employed for classifying the FP and face features.
3. Generation of the datasets
3.1. Generation of FP dataset
When performing the calculations of fingerprint minutiae vectors, the x and y coordinates and the theta (Θ) angle and the helical pattern of the bifurcation and termination points with respect to the core/center point of the fingerprint were taken into account. The meaningful structure of the fingerprint vector is comprised by the union set of the first n minutiae values, which assume the center as the middle point in particular.
The singular vectors were generated for the minutiae points whose fingerprint classification criteria were taken at constant length. In order to satisfy this condition in the FP dataset, the angle, x and y values were set for an average number n of minutiae points: for fingerprints with more than n minutiae, the first n vectors were used, and for those with fewer, the remaining values up to n were assigned the value of 0.
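The fixed-length construction described above (first n minutiae kept, the rest zero-padded) can be sketched as follows. The function name and the (θ, x, y) tuple layout are illustrative assumptions; n = 50 gives the 150-member vectors of Example 1.

```python
def fp_vector(minutiae, n=50):
    """Encode a variable-length minutiae list as a fixed 3*n vector.

    minutiae: list of (theta, x, y) tuples, assumed to be ordered by
    distance from the core/center point by the caller.
    Prints with more than n minutiae are truncated to the first n;
    prints with fewer are zero-padded, as described in the text.
    """
    vec = []
    for theta, x, y in minutiae[:n]:
        vec.extend([theta, x, y])
    vec.extend([0.0] * (3 * n - len(vec)))   # pad up to 3*n values
    return vec

short = [(325.008, 73, 143), (5.711, 131, 70)]
v = fp_vector(short, n=50)
print(len(v))    # 150
print(v[0:6])    # [325.008, 73, 143, 5.711, 131, 70]
print(v[6])      # 0.0 — padding begins after the real minutiae
```

Zero-padding keeps the input dimension constant for the ANN models, at the cost of the network having to learn that trailing zeros carry no information.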
Example 1
An exemplary FP dataset was generated for the ANN (artificial neural network) model according to the invention. From the first 50 minutiae points, a vector with 150 members of theta, x and y was formed. In Table 1, some values are given for the vector with 150 members, where the angle, x and y information was sampled from the minutiae of 8 fingerprints. The angular values were converted from radians into degrees.
Table 1. An example of the FP dataset used in ANN training (for 8 FPs)
(The header and the rows preceding Y3 of Table 1 appear as an image in the original publication; the recoverable rows are reproduced below, one column per fingerprint.)

      FP1       FP2       FP3       FP4       FP5       FP6       FP7       FP8
Y3    79        79        97        102       110       56        120       66
θ4    325.008   5.710593  191.3099  221.9872  210.9638  174.2894  206.5651  196.6992
X4    73        131       135       173       186       108       194       140
Y4    143       70        110       99        117       67        125       71
…
θ48   129.509   153.435   158.1986  63.43495  321.3402  348.6901  145.008   16.69924
X48   55        98        84        156       86        90        117       135
Y48   157       80        123       126       153       91        150       101
θ49   116.565   348.6901  349.6706  153.435   174.2894  174.2894  343.3007  201.8014
X49   71        136       88        100       137       101       112       173
Y49   169       80        115       139       147       84        167       126
θ50   106.699   163.3008  5.710593  116.565   128.6598  201.8014  135       338.1986
X50   84        145       144       96        81        174       112       76
Y50   176       110       119       145       160       89        174       131
3.2 Generation of the face dataset
The face dataset is based on the principle of classification of a face feature by way of interpretation from the perspective of an expert. When generating the face dataset, the biometric information about the individuals is divided into categories by means of a proposed classifier, thereby reducing the information to a level perceivable to the human eye.
In the ANN model according to the invention, the face profiles of the criminals obtained from the criminal databases are expressed with a smaller number of minutiae vectors for the purpose of photorealistic transformation.
Example 2
A chart of mathematical representation was formed in Table 2 as an example of the generation of the face dataset and its application to the ANN model as an output. Each numerical value in the table constitutes the output of a different ANN model. This structure was determined based on the sorting form used in the criminal laboratories. The following numerical conversions were made for the features: 0 for the straight and 1 for the curved eyebrow structure; 0 for the small, 1 for the medium and 2 for the big eye structure; 0 for the small, 1 for the medium and 2 for the big nose structure; 0 for the long and 1 for the round face structure; and finally 0 for the round and 1 for the angled chin structure.
Table 2. The face models that give the meaning of the intelligent model's output
(Table 2 is provided as an image in the original publication.)
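The numeric conversions described above can be captured in a small lookup sketch. The dictionary and function names are illustrative, not part of the system; only the code values themselves come from the text.

```python
# Encoding table mirroring the numeric conversions described for Table 2.
FACE_CODES = {
    "eyebrow": {"straight": 0, "curved": 1},
    "eye":     {"small": 0, "medium": 1, "big": 2},
    "nose":    {"small": 0, "medium": 1, "big": 2},
    "face":    {"long": 0, "round": 1},
    "chin":    {"round": 0, "angled": 1},
}

def encode_face(description):
    """Turn an expert's textual description into the numeric targets
    used as outputs of the five ANN models (one value per model)."""
    return {feat: FACE_CODES[feat][value] for feat, value in description.items()}

profile = {"eyebrow": "curved", "eye": "big", "nose": "medium",
           "face": "long", "chin": "angled"}
print(encode_face(profile))
# {'eyebrow': 1, 'eye': 2, 'nose': 1, 'face': 0, 'chin': 1}
```

The same table can be read in reverse to turn model outputs back into the textual classes used in the composite drawing step.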
4. ANN (artificial neural network) models
For the generated models, a network structure with single output was formed, where the FP dataset was applied as the input and the face dataset was applied as the output. Independent models and network structures were formed, capable of separately estimating each face feature for the face dataset. These were formed as the ANN structures capable of performing the following predictions:
• Eyebrow structure in the Model-1 output,
• Eye structure in the Model-2 output,
• Nose structure in the Model-3 output,
• Face outline structure in the Model-4 output,
• Chin structure in the Model-5 output
The common ANN model for the proposed approaches is given in Figure 2.
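A minimal sketch of this one-model-per-feature layout follows, assuming a single hidden layer. The patent does not disclose the layer sizes, so the hidden width and the random weights below are placeholders, and no training step is shown — only the shapes of the five independent networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in=150, n_hidden=30, n_classes=2):
    """One small feed-forward net: 150 FP inputs -> hidden -> class scores."""
    return {"W1": rng.normal(0, 0.1, (n_in, n_hidden)),
            "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.1, (n_hidden, n_classes)),
            "b2": np.zeros(n_classes)}

def forward(net, x):
    h = np.tanh(x @ net["W1"] + net["b1"])
    z = h @ net["W2"] + net["b2"]
    e = np.exp(z - z.max())
    return e / e.sum()            # softmax over the feature's classes

# Five independent models, one per face feature, as in Models 1-5:
# 2 eyebrow classes, 3 eye, 3 nose, 2 face-outline, 2 chin classes.
MODELS = {"eyebrow": make_net(n_classes=2), "eye": make_net(n_classes=3),
          "nose": make_net(n_classes=3), "face": make_net(n_classes=2),
          "chin": make_net(n_classes=2)}

fp_vec = rng.normal(size=150)     # a stand-in 150-member FP vector
probs = {feat: forward(net, fp_vec) for feat, net in MODELS.items()}
print({f: len(p) for f, p in probs.items()})
```

Keeping the five networks independent, as the text describes, lets each feature be retrained or replaced (for example, by the SVM and k-NN variants below) without touching the others.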
EXAMPLE 3
The cross tests were conducted in order to identify the most suitable ANN structure and learning algorithm.
5 different ANN models allowing the identification of 5 different features of a face, 4 different SVM models intended for the identification of the chin feature and 6 different k-NN exemplary models intended for the identification of the eye feature according to the system of the invention are provided below.
Example 4
Exemplary ANN models generated:
ANN Model-1
For the ANN Model-1, the FP dataset with 150 inputs was used as the input and the face dataset with 2 types of eyebrow structure, namely the straight and the curved, was used as the output. The results of the first fold of the model, which was tested to be internally consistent via K-Fold Cross-Validation, were presented. The root mean square error was found as 4.28 × 10⁻²⁵ according to the iteration for the training set with 360 persons. It was demonstrated that the eyebrow types corresponding to the fingerprint vectors could be correctly estimated with 75% success in the tests conducted with a test set containing 40 persons.

ANN Model-2
For the ANN Model-2, the FP dataset with 150 inputs was used as the input and the face dataset with 3 types of eye structure, namely the small, medium and big, was used as the output. The results of the first fold of the model, which was tested to be internally consistent via K-Fold Cross-Validation, were presented. The root mean square error was found as 1.27 × 10⁻²⁰ according to the iteration for the training set with 360 persons. It was demonstrated that the eye types corresponding to the fingerprint vectors could be correctly estimated with 60% success in the tests conducted with a test set containing 40 persons.
ANN Model-3
For the ANN Model-3, the FP dataset with 150 inputs was used as the input and the face dataset with 3 types of nose structure, namely the small, medium and big, was used as the output. The results of the first fold of the model, which was tested to be internally consistent via K-Fold Cross-Validation, were presented. The root mean square error was found as 2.33 × 10⁻²⁴ according to the iteration for the training set with 360 persons. It was demonstrated that the nose types corresponding to the fingerprint vectors could be correctly estimated with 70% success in the tests conducted with a test set containing 40 persons.
ANN Model-4
For the ANN Model-4, the FP dataset with 150 inputs was used as the input and the face dataset with 2 types of face structure, namely the long and round, was used as the output. The results of the first fold of the model, which was tested to be internally consistent via K-Fold Cross-Validation, were presented. The root mean square error was found as 3.01 × 10⁻²³ according to the iteration for the training set with 360 persons. It was demonstrated that the face outline types corresponding to the fingerprint vectors could be correctly estimated with 60% success in the tests conducted with a test set containing 40 persons.

ANN Model-5
For the ANN Model-5, the FP dataset with 150 inputs was used as the input and the face dataset with 2 types of chin structure, namely the round and angled, was used as the output. The results of the first fold of the model, which was tested to be internally consistent via K-Fold Cross-Validation, were presented. The root mean square error was found as 4.21 × 10⁻²⁴ according to the iteration for the training set with 360 persons. It was demonstrated that the chin types corresponding to the fingerprint vectors could be correctly estimated with 75% success in the tests conducted with a test set containing 40 persons.
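The K-Fold protocol quoted for all five models (360 training persons and 40 test persons per fold) can be sketched as follows. The `evaluate` callback stands in for training and testing one ANN and is an illustrative placeholder, not the patent's training code.

```python
def k_fold_indices(n, k):
    """Split range(n) into k contiguous folds (sizes differ by at most 1)."""
    base, extra = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        size = base + (1 if i < extra else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(n_samples, k, evaluate):
    """Run k-fold CV; `evaluate(train_idx, test_idx)` returns a score."""
    folds = k_fold_indices(n_samples, k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
        scores.append(evaluate(train_idx, test_idx))
    return scores

# With 400 persons and k=10, each fold trains on 360 and tests on 40,
# matching the 360/40 split quoted for the ANN models above.
folds = k_fold_indices(400, 10)
print(len(folds), len(folds[0]))   # 10 40
scores = cross_validate(400, 10, lambda tr, te: len(tr) / 400)
print(scores[0])                   # 0.9 — 360 of 400 samples used for training
```

Reporting only the first fold, as the text does, shows one sample of the distribution; averaging `scores` over all folds is the usual way to quote a cross-validated result.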
Exemplary k-NN models generated

k-NN Model-1
The classification in Table 2 was performed for the chin, and the chin structures were divided into two groups, namely the angled chin and the round chin. The k-NN classification process was applied to the dataset with 2 classes, and the error values obtained as a result of the classification were recorded. These are MAE (Mean Absolute Error) values, which indicate the mean absolute deviation of the estimated values from the actual values. This value depends on the minimum and maximum observation values in the dataset, and therefore the work should not be based solely on it. Since the values obtained from MAE remain dependent on the minimum and maximum values, the mean absolute percentage error was used in comparing the estimation models. MAPE (Mean Absolute Percentage Error) expresses as a percentage the mean absolute deviation of the estimated values from the actual values. On the other hand, due to the class value of 0 contained in the dataset, it is not possible to use the MAPE scale in this analysis. As a result, before taking the average of the absolute error values in the final step, the NMSRE (Normalized Mean Square Root Error) analysis was undertaken and the detection of big errors was performed. In addition, according to the analysis results for the dataset with 2 classes, the k-NN classifier using the Manhattan distance relation for k=10 showed the most stable estimation performance.
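These error measures can be written out directly. The sketch below uses made-up numbers, and the class value 0 triggers exactly the MAPE problem noted in the text; the normalization by the observed range in `nrmse` is one common convention and an assumption here.

```python
def mae(actual, predicted):
    """Mean absolute error."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mape(actual, predicted):
    """Mean absolute percentage error; undefined when an actual value is 0,
    which is why the text cannot use it for class-0 observations."""
    if any(a == 0 for a in actual):
        raise ValueError("MAPE undefined: actual values contain 0")
    return 100 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def nrmse(actual, predicted):
    """Root mean square error normalized by the observed value range."""
    mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
    return (mse ** 0.5) / (max(actual) - min(actual))

actual    = [0, 1, 2, 1, 0]          # eye-size classes, including class 0
predicted = [0.2, 1.0, 1.6, 1.2, 0.4]
print(round(mae(actual, predicted), 3))    # 0.24
print(round(nrmse(actual, predicted), 3))  # 0.141
try:
    mape(actual, predicted)
except ValueError as e:
    print(e)   # class 0 makes MAPE unusable, as the text observes
```

Squaring in the RMSE step is what makes the normalized measure emphasize big errors, which is the stated reason for using it here.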
k-NN Model-2
The eye model was divided into 3 classes, namely the small, medium and big, according to the classification made in Table 3.
Referring to Table 3, the class values determined by the k-nearest neighbor classifier are presented alongside the actual class values for each person in the persons column. A 350*151 array was formed for the training set and a 50*51 array was formed for the test dataset, and these were evaluated independently of each other. When working out the analysis results, all three distance criteria employed in the k-nearest neighbor technique, namely Euclidean, Mahalanobis and Minkowski, were used. Moreover, 6 different k values (1, 5, 10, 25, 50, 100) were used in order to catch the correct value of analysis. According to the analysis results for the dataset with 3 classes, the k-NN classifier using the Manhattan distance relation for k=100 showed the most stable estimation performance.
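A minimal k-NN estimator in the spirit of this experiment can be sketched in pure Python with made-up 2-D data (the real experiment uses the 150-member FP vectors). Averaging the neighbours' class labels yields fractional estimates like those in Table 3; the names and data are illustrative.

```python
def minkowski(u, v, p):
    """Minkowski distance; p=1 is the Manhattan case, p=2 the Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1 / p)

def knn_estimate(train_X, train_y, x, k, p=1):
    """Mean class label of the k nearest neighbours of x — this mirrors
    the fractional class estimates (e.g. 1.6, 1.48) reported in Table 3."""
    dists = sorted((minkowski(t, x, p), y) for t, y in zip(train_X, train_y))
    return sum(y for _, y in dists[:k]) / k

# Made-up 2-D training points with eye-size classes 0/1/2.
train_X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (9, 9), (9, 8)]
train_y = [0, 0, 0, 1, 1, 2, 2]
print(knn_estimate(train_X, train_y, (0.4, 0.2), k=3, p=1))        # 0.0
print(round(knn_estimate(train_X, train_y, (8, 8), k=3, p=1), 2))  # 1.67
```

Rounding the fractional estimate to the nearest integer recovers a hard class decision; keeping the fraction, as Table 3 does, exposes how confident the neighbourhood vote was.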
Table 3. k-NN model-1 classifier test results performed for the estimation of three types of eye size
(The header rows of Table 3 and the rows for persons 1-4 and 16-38 appear as an image in the original publication. Each row below lists the person number, the actual eye class (0 = small, 1 = medium, 2 = big) and four estimated class values under each of the three distance criteria; the criterion-to-column assignment appears in the original image. For persons 41 and 45 one value was lost in extraction, marked "–", so the alignment of their remaining values is uncertain.)

Person  Actual | Criterion 1         | Criterion 2         | Criterion 3
 5      1      | 2    1.6  1.8  1.48 | 2    1.6  1.8  1.68 | 2    2     1.6  1.52
 6      1      | 1    1.6  1.1  1.04 | 1    1    1.2  0.96 | 1    1.2   1.3  1.08
 7      2      | 2    1.6  1.4  1.08 | 2    1    1.3  1.2  | 2    1.6   1.4  1.16
 8      0      | 1    1    1    1.16 | 1    1.2  1.2  1.12 | 0    1     1.1  1
 9      0      | 1    1.4  1.2  1.04 | 1    1.2  1    1.04 | 1    1.4   1.3  1.16
10      1      | 1    0.4  0.9  1.04 | 1    1    0.9  0.92 | 0    0.8   0.9  1.04
11      2      | 1    1.4  1.1  1.2  | 1    1.2  1.2  1.32 | 1    1.2   1.1  1.16
12      1      | 1    1.2  1    1.16 | 1    1.4  1    1.12 | 1    0.8   1    1.16
13      2      | 1    1    1    1    | 1    1    1.2  1    | 1    1     1.1  0.96
14      1      | 2    1.6  1.1  1.32 | 2    1.2  1.2  1.28 | 2    1.4   1.5  1.04
15      2      | 2    1.4  1.1  1.04 | 2    1.4  1.1  1    | 1    1.4   1.1  1.04
39      1      | 2    1.8  1.4  1.36 | 2    1.8  1.4  1.32 | 2    1.8   1.5  1.44
40      1      | 2    1.4  1.4  1.28 | 2    1.6  1.3  1.12 | 2    1.8   1.5  1.24
41      –      | 1    1.2  1.2  1.16 | 1    1.2  1.4  1.28 | 1    1.4   1    1.04
42      1      | 2    1.6  1.3  1.24 | 1    1.2  1.2  1.24 | 2    1.6   1.6  1.2
43      1      | 0    1    0.9  1    | 1    1.2  1    1.12 | 0    1     0.9  1.24
44      1      | 1    0.8  1    1.04 | 1    0.6  1    1    | 1    1     1    1
45      –      | 2    1.6  1.1  1.16 | 2    1.6  1.1  1.12 | 2    1.2   1.1  1.12
46      1      | 0    1    1.1  1.08 | 1    0.8  0.8  1    | 0    1     1.1  1.2
47      1      | 0    0.8  0.9  1.04 | 0    1    1.3  1.08 | 1    1.2   0.9  0.8
48      2      | 2    1.2  1.2  1.32 | 0    1    1.1  1.12 | 2    1.2   1.2  1.32
49      1      | 2    1    1.2  1.08 | 2    1    0.9  1.08 | 2    1.4   1    1.2
50      2      | 0    1.2  1.1  0.96 | 1    1.4  1.1  0.96 | 0    1.2   1    0.88
MAE            | 0.8  0.64 0.592 0.582 | 0.72 0.64 0.606 0.59 | 0.76 0.636 0.592 0.585

MAPE: the MAPE scale may not be used in this analysis, since the "0" is present in some of the class information for the observations.

NMSRE (column alignment partially lost in extraction): 40%, 50%, 40%, 50%, 50.99%, 36.62%, 36.74%, 38.07%, 37.11%, 38.85%, 36.56%, 36.43%
Exemplary SVM models generated:

SVM Model-1
The FP minutiae are comprised by the values of X, Y, Angle. SVM Model-1 consists of the expression of each fingerprint with a 2-dimensional vector including the averages of the X and Y coordinate values and the averages of the angle values. The chin features of the individuals were applied as the output to the system corresponding to the 2-dimensional training set.
SVM Model-2
The FP minutiae are comprised by the values of X, Y, Angle. SVM Model-2 consists of the expression of each fingerprint with a 3-dimensional vector including the averages of the X coordinate values, the averages of the Y coordinate values and the averages of the angle values. The chin features of the individuals were applied as the output to the system corresponding to the 3-dimensional training set.
SVM Model-3
The FP minutiae are comprised by the values of X, Y, Angle. SVM Model-3 consists of the expression of each fingerprint with a 30-dimensional vector including the first 30 X coordinate values, Y coordinate values and angle values. The chin features of the individuals were applied as the output to the system corresponding to the 30-dimensional training set.
SVM Model-4
As with the work performed when generating the ANN models, in the trainings of the 50-dimensional FP vector array performed with the SVM models, the classification was made on the feature arrays with 31 dimensions and above, for the solution of the SVM's problem of identifying the face features from the FP.
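The fixed-size encodings used by SVM Models 1-3 can be sketched as feature-construction helpers. The names are illustrative, the reading of Model-1's 2-dimensional vector as one positional mean plus one angular mean is an assumption, and the SVM training itself is not shown.

```python
def features_2d(minutiae):
    """SVM Model-1 style (one possible reading): mean of the X and Y
    coordinate values combined, plus the mean angle."""
    xs = [m[0] for m in minutiae]
    ys = [m[1] for m in minutiae]
    angles = [m[2] for m in minutiae]
    xy_mean = (sum(xs) + sum(ys)) / (2 * len(minutiae))
    return (xy_mean, sum(angles) / len(minutiae))

def features_3d(minutiae):
    """SVM Model-2 style: (mean X, mean Y, mean angle)."""
    n = len(minutiae)
    return (sum(m[0] for m in minutiae) / n,
            sum(m[1] for m in minutiae) / n,
            sum(m[2] for m in minutiae) / n)

def features_first_k(minutiae, k=30):
    """SVM Model-3 style: the first k (x, y, angle) triples, zero-padded."""
    flat = [v for m in minutiae[:k] for v in m]
    return flat + [0.0] * (3 * k - len(flat))

mins = [(10, 20, 90.0), (30, 40, 270.0)]
print(features_2d(mins))            # (25.0, 180.0)
print(features_3d(mins))            # (20.0, 30.0, 180.0)
print(len(features_first_k(mins)))  # 90
```

Averaging compresses each print into a tiny vector at the cost of discarding the spatial layout of the minutiae, which is why Model-3 keeps the raw leading triples instead.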
The developed system is a body of software made for estimating the face features from the fingerprints in line with the models generated. The system, the flow chart of which is provided in Figure 3, is comprised by different modules that run in harmony. The system for generating the face appearance based on the fingerprint consists of the following process steps:
Collection of the fingerprint and face images via the biometric data collection module,
Improvement and classification of the FP and face images via the preprocessing module,
Identification of the FP minutiae vectors via the vector transformation module,
Identification of the features of the face images via the classifier module,
Training and testing of the FP and face datasets with ANN (Artificial Neural Networks) via the artificial intelligence module,
Conversion of the obtained test results into the types of appearance and expression of the same with the face models via the face synthesis module,
Comparison of the results with the actual images in the face identification databases via the query module,
Identification of the offenders or missing persons
1. Biometric Data Collection Module (BDCM)
As shown in Figure 4, the biometric data collection module is the first point of input for the fingerprint and face images that will be applied as input to the system from different sources. The module was designed to be versatile so that the data may be collected both from the digital scanners (fingerprint reader and camera) and from the paper media provided by the criminal sources. The outputs of the data collection module proposed within the scope of the invention are applied as input to the experimental system by way of processing the fingerprints of the criminals printed on paper and the photos taken at the instant of arrest.
2. Preprocessing module: This is the module in which the digital image processing tools such as cleaning, filtering, rotating, noise reduction and digitizing are used for the fingerprint and face images to be applied as input to the system. The module, the detailed flow chart of which is provided in Figure 1, generates as output the processed fingerprints and processed photographs for use in the vector transformations.
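Two of the preprocessing ideas, binarization and simple noise reduction, can be illustrated with a small NumPy sketch. The threshold and the isolated-pixel rule are illustrative assumptions, not the patent's actual filters.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Ridges are darker than the background: 1 = ridge, 0 = background."""
    return (gray < threshold).astype(np.uint8)

def remove_isolated(binary):
    """Simple noise reduction: drop ridge pixels with no 8-connected
    ridge neighbour (speckle left by dust, wounds or wet prints)."""
    padded = np.pad(binary, 1)
    # Sum the 8 shifted copies to count each pixel's ridge neighbours.
    neigh = sum(np.roll(np.roll(padded, dr, 0), dc, 1)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0))[1:-1, 1:-1]
    return binary * (neigh > 0)

gray = np.full((5, 5), 255, dtype=np.uint8)
gray[2, 1:4] = 0          # a short dark ridge
gray[0, 4] = 0            # an isolated speck of noise
clean = remove_isolated(binarize(gray))
print(int(clean.sum()))   # 3 — the ridge survives, the speck is removed
```

Production systems use richer filters (orientation-field smoothing, Gabor enhancement), but the input/output contract is the same: a grayscale scan in, a cleaned binary ridge image out.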
3. Vector transformation module: This is the part where the digital images, which have been cleaned and made ready for vector transformation via various image processing techniques in the preprocessing module, are converted into mathematical expressions by means of the proposed models. In other words, the module makes available within the system the whole set of values dependent on the x, y and theta coordinates for all the values obtained from the fingerprint sampling. Once the outline image, calculated as an output of the edge detection algorithms, is found, the processed fingerprint images are separated into minutiae points by the vector transformation module. In Table 4, exemplary minutiae properties are shown for a fingerprint.
Table 4. Exemplary minutiae properties of a fingerprint
(The header and earlier rows of Table 4 appear as an image in the original publication; the recoverable rows are reproduced below, with column labels inferred from the minutiae description above.)

Angle             Minutia type  Coordinates
3.80978298187256  Bifurcation   208-96
3.87749409675598  Bifurcation   249-96
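One standard way to separate a thinned ridge image into minutiae like those of Table 4 is the crossing-number method; the sketch below is a generic illustration of that technique, not necessarily the exact algorithm of the developed software.

```python
# 8-neighbourhood in clockwise order, so consecutive entries are adjacent.
NEIGHBOURS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
              (1, 0), (1, -1), (0, -1), (-1, -1)]

def crossing_number(skel, r, c):
    """Half the number of 0/1 transitions around pixel (r, c):
    CN = 1 -> ridge ending, CN = 3 -> bifurcation."""
    vals = [skel[r + dr][c + dc] for dr, dc in NEIGHBOURS]
    return sum(abs(vals[i] - vals[(i + 1) % 8]) for i in range(8)) // 2

def find_minutiae(skel):
    """Scan a one-pixel-wide ridge skeleton (0/1 matrix) for minutiae,
    skipping the border so every pixel has a full neighbourhood."""
    found = []
    for r in range(1, len(skel) - 1):
        for c in range(1, len(skel[0]) - 1):
            if skel[r][c]:
                cn = crossing_number(skel, r, c)
                if cn == 1:
                    found.append(("ending", r, c))
                elif cn == 3:
                    found.append(("bifurcation", r, c))
    return found

# A tiny skeleton: a horizontal ridge that forks at column 3.
skel = [[0, 0, 0, 1, 0, 0],
        [0, 1, 1, 1, 0, 0],
        [0, 0, 0, 1, 0, 0],
        [0, 0, 0, 0, 0, 0]]
print(find_minutiae(skel))
# [('ending', 1, 1), ('bifurcation', 1, 3), ('ending', 2, 3)]
```

Attaching the local ridge direction to each detected point yields exactly the (x, y, θ) triples that the vector transformation module feeds into the rest of the pipeline.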
4. Classifier module: This is the part capable of performing scalable operations and forming the infrastructure for separating the data of the proposed system and/or converting the same into formats able to be applied to new systems. The module aims to make mathematical sense of the data for the fingerprints coming from the minutiae separator, while solving, via the manual processing approach, the problems likely to be encountered when developing a method adaptable to non-standard face images obtained from different sources. The classifier module, the flow chart of which is provided in Figure 5, is comprised by two portions, namely the fingerprint classifier and the face classifier, proposed for different purposes.
5. Artificial Intelligence Module (AIM)
The proposed system has a learning architecture. The fingerprint and face expressions, which are the outputs of the classifier, are trained with the artificial neural networks to establish the relationships between the models. In other words, an intelligent system has been designed, which contains 5 different ANN models for the estimation of the face feature properties classified according to the expert opinions based on the mathematically expressed fingerprint minutiae properties.
6. Face synthesis module: This module uses the outputs sampled as a result of application of the mathematical minutiae properties, which the system has not encountered before, to the artificial intelligence module. The attempt is made to generate a model from the face feature models, which correspond to the classifier and require expert opinion, by way of estimation of the face features corresponding to a fingerprint, such as the round or pointed chin, small, medium or big eye, etc.
7. Query module: This is a query page where it is possible to query the generated synthetic faces in the databases and to trace the offender from the police sketch without the fingerprint. In other words, a comparison is made between the existing face image and the face image generated by the system. It is proposed that the query module may be used particularly for partial fingerprints taken from the crime scene, for persons not registered in AFIS, in cases where there is no eyewitness and, most importantly, with devices such as CSC (City Surveillance Cameras) capable of identification based not on the fingerprints but on the face images.
In addition, the following are used within the system: software containing a great number of sub-drawings of the face features such as lips, eyebrows, eyes, hair, forehead, etc., together with drawings of the face features belonging to the races from different geographical areas; software for analyzing the minutiae, developed with the ability to process a single fingerprint or n fingerprints simultaneously and to generate the minutiae vectors based on the proposed models; and the methods used for identifying the appearance (FACES). An architectural structure (service) was formed for the purpose of generating an estimated face model corresponding to an input fingerprint, following processes that occur in a fully automated manner.
The developed system is a body of software made in order to estimate the face features based on the fingerprints in line with the proposed models.
The system, which is our invention, is software that accommodates an intelligent architecture and provides automatic outputs according to the applied inputs, using only the jointly used databases, without a web connection.

Claims

A method for identifying the sub-models of a face based on the minutiae of a fingerprint characterized in that it comprises the process steps of analyzing and processing the data in order to find to which face feature the FPs (fingerprints) correspond,
generating the FP dataset from the union set of particularly the first n minutiae values assuming the center as the middle point according to the meaningful structure of the fingerprint vector, taking into account the helical pattern of the bifurcation and termination points with respect to the core/center point and the x and y coordinates and the theta (Θ) angle, in order to perform the calculations for the fingerprint minutiae vectors,
generating the face dataset where the face profiles of the criminals obtained from the criminal databases are expressed with a smaller number of minutiae vectors for the photorealistic transformation and a face feature is interpreted and classified from the perspective of an expert,
generating the ANN (artificial neural network) models, which are the network structures with single output where the FP dataset is applied as input and the face dataset is applied as output.
A face synthesis system, which is developed in order to produce the face images of the persons in a photorealistic manner and generate the face appearances that are criminally closest to the real by way of forming a whole of the values based on the coordinates corresponding to the fingerprint and using the features such as the lips, eyebrows, eyes, hair, forehead and the like, characterized in that it is a fully automated system comprising the process steps of
collection of the fingerprint and face images via the biometric data collection module,
improvement and classification of the FP and face images via the preprocessing module,
identification of the FP minutiae vectors via the vector transformation module,
identification of the features of the face images via the classifier module,
training and testing of the FP and face datasets with ANN (Artificial Neural Networks) via the artificial intelligence module,
conversion of the obtained test results into the types of appearance and expression of the same with the face models via the face synthesis module,
comparison of the results with the actual images in the face identification databases via the query module,
identification of the offenders or missing persons
Process step of collection of the fingerprint and face images via the biometric data collection module, according to Claim 1, characterized in that in order to collect the data both from the digital scanners (fingerprint reader and camera) and from the paper media provided by the criminal sources; the outputs of the proposed data collection module, the fingerprints of the dead criminals printed on paper and the photos taken by the law enforcement officers at the instant of arrest are processed and provided as input to the system .
Process step of improvement and classification of the FP and face images with the preprocessing module, according to Claim 1, characterized in that the digital image processing tools such as cleaning, filtering, rotating, noise reduction and digitizing are used for the fingerprint and face images to be applied as input to the system, thereby obtaining the processed fingerprints and processed photographs as the module output, for use in the vector transformations .
Process step of identification of the FP minutiae vectors via the vector transformation module, according to Claim 1, characterized in that the digital images, which have been cleaned and made ready for vector transformations via various image processing techniques in the preprocessing module, are transferred into the mathematical expressions by means of the proposed models.
Process step of identification of the FP minutiae vectors via the vector transformation module, according to Claim 1, characterized in that a whole of the values dependent on the x, y and theta (Θ) coordinates for all the values obtained from the fingerprint sampling are used within the system, and once the images, calculated as an output of the edge detection algorithms, are found, the processed fingerprint images are separated into the minutiae points by the vector transformation module.
Process step of identification of the features of the face images via the classifier module, according to Claim 1, characterized in that the module makes mathematical sense of the data for the fingerprints coming from the minutiae separator in order to perform scalable operations and form the infrastructure for separating the data of the proposed system and/or converting the same into the formats able to be applied to new systems.
Process step of conversion of the obtained test results into the types of appearance and expression of the same with the face models via the face synthesis module, according to Claim 1, characterized in that an intelligent system is established, which contains different ANN models for the estimation of the face feature properties classified according to the expert opinions based on the mathematically expressed fingerprint minutiae properties.
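One of the per-feature ANN models this claim refers to can be sketched as a single-hidden-layer perceptron whose softmax output ranges over the expert-labelled classes (e.g. small / medium / big eye). The architecture, activation choices and the untrained example weights below are assumptions; training against expert opinion is outside this sketch:

```python
import math

def mlp_forward(x, w_hidden, w_out):
    """One forward pass of a minimal multilayer perceptron:
    tanh hidden layer, softmax output over face-feature classes."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w_hidden]
    logits = [sum(wi * hi for wi, hi in zip(row, hidden))
              for row in w_out]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

In the described system a separate network of this general shape would be trained per face feature, with the minutiae-derived vector as input and expert-classified feature labels as targets.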
Process step of comparison of the results with the actual images in the face identification databases via the query module, according to Claim 1, characterized in that a model is generated from the face feature models, which correspond to the classifier and require expert opinion, by estimating the face features corresponding to a fingerprint, such as a round or pointed chin and small, medium or big eyes.
Process step of identification of the offenders or missing persons, according to Claim 1, characterized in that a comparison is made between the existing face image and the face image generated by the system.
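The final comparison between the generated composite and the database images can be illustrated as a similarity ranking. Cosine similarity over face-feature vectors is an assumed scoring function for this sketch; the claim itself does not prescribe one:

```python
import math

def cosine_similarity(a, b):
    """Score closeness of two feature vectors:
    1.0 = identical direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(query, database):
    """Return (person_id, score) of the most similar database face,
    where `database` maps person ids to feature vectors."""
    return max(((pid, cosine_similarity(query, vec))
                for pid, vec in database.items()),
               key=lambda t: t[1])
```

The top-scoring entry would then be presented alongside the system-generated face image for identification of the offender or missing person.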
PCT/TR2015/000299 2014-08-29 2015-08-31 Intelligent system for photorealistic facial composite production from only fingerprint WO2016032410A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TR201410148 2014-08-29
TR2014/10148 2014-08-29

Publications (1)

Publication Number Publication Date
WO2016032410A1 true WO2016032410A1 (en) 2016-03-03

Family

ID=54347801

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/TR2015/000299 WO2016032410A1 (en) 2014-08-29 2015-08-31 Intelligent system for photorealistic facial composite production from only fingerprint

Country Status (1)

Country Link
WO (1) WO2016032410A1 (en)

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NECLA OZKAYA ET AL: "Generating One Biometric Feature from Another: Faces from Fingerprints", SENSORS, vol. 10, no. 5, 28 April 2010 (2010-04-28), CH, pages 4206 - 4237, XP055236752, ISSN: 1424-8220, DOI: 10.3390/s100504206 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778679A (en) * 2017-01-05 2017-05-31 唐常芳 Specific-crowd video identification method and system based on big-data machine learning
CN106778679B (en) * 2017-01-05 2020-10-30 唐常芳 Specific crowd video identification method based on big data machine learning
KR20200070409A (en) * 2018-09-30 2020-06-17 Flex-VR Digital Technology (Shanghai) Co., Ltd. Human hairstyle creation method based on multiple feature search and transformation
KR102154470B1 (en) * 2018-09-30 2020-09-09 Flex-VR Digital Technology (Shanghai) Co., Ltd. 3D Human Hairstyle Generation Method Based on Multiple Feature Search and Transformation
CN111325954A (en) * 2019-06-06 2020-06-23 杭州海康威视系统技术有限公司 Personnel loss early warning method, device, system and server
CN111325954B (en) * 2019-06-06 2021-09-17 杭州海康威视系统技术有限公司 Personnel loss early warning method, device, system and server

Similar Documents

Publication Publication Date Title
Galdámez et al. A brief review of the ear recognition process using deep neural networks
El Khiyari et al. Age invariant face recognition using convolutional neural networks and set distances
CN101281598A (en) Method for recognizing human face based on amalgamation of multicomponent and multiple characteristics
Alheeti Biometric iris recognition based on hybrid technique
Li et al. Common feature discriminant analysis for matching infrared face images to optical face images
Rao et al. Sign Language Recognition System Simulated for Video Captured with Smart Phone Front Camera.
Zhang et al. Advanced biometrics
Chen et al. Contactless multispectral palm-vein recognition with lightweight convolutional neural network
Tistarelli et al. Biometrics in forensic science: challenges, lessons and new technologies
WO2016032410A1 (en) Intelligent system for photorealistic facial composite production from only fingerprint
Rajasekar et al. Efficient multimodal biometric recognition for secure authentication based on deep learning approach
Zhang et al. A study of similarity between genetically identical body vein patterns
Mangla et al. Sketch-based facial recognition: a weighted component-based approach (WCBA)
Krishnaprasad et al. A Conceptual Study on User Identification and Verification Process using Face Recognition Technique
Ramesh et al. Pattern extraction methods for ear biometrics-A survey
Deshpande et al. Fusion of dorsal palm vein and palm print modalities for higher security applications
Santosh et al. Recent Trends in Image Processing and Pattern Recognition: Third International Conference, RTIP2R 2020, Aurangabad, India, January 3–4, 2020, Revised Selected Papers, Part I
Verma et al. Touchless region based palmprint verification system
Wang et al. Expression robust three-dimensional face recognition based on Gaussian filter and dual-tree complex wavelet transform
Bala et al. An effective multimodal biometric system based on textural feature descriptor
Mukane et al. EMERGING FORENSIC FACE MATCHING TECHNOLOGY TO APPREHEND CRIMINALS:: A SURVEY
Omar et al. New feature-level algorithm for a face-fingerprint integral multi-biometrics identification system
Zhai et al. A novel Iris recognition method based on the contourlet transform and Biomimetic Pattern Recognition Algorithm
Chihaoui et al. A novel face recognition system based on skin detection, HMM and LBP
Dakre et al. An efficient technique of multimodal biometrics using fusion of face and iris features

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15784789

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15784789

Country of ref document: EP

Kind code of ref document: A1