WO2011068395A2 - A method for identity recognition based on lip image


Info

Publication number: WO2011068395A2 (other versions: WO2011068395A3)
Application number: PCT/MY2010/000225
Authority: WIPO (PCT)
Prior art keywords: lip, subject, print, template, contour
Other languages: French (fr)
Inventors: Mei Kuan Lim, Kim Meng Liang, Kadim Zulaikha, Sze Ling Tang
Original assignee: Mimos Berhad

Classifications

    • G06V40/171: Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/20: Movements or behaviour, e.g. gesture recognition

Definitions

  • The present invention relates to a method of identity recognition via human lip images, relying on the fact that human lips bear a trait that is unique to each individual.
  • Physiological biometrics relate to intrinsic body features of a human, including fingerprints, irises, pupils, palm prints, face and DNA, whereas behavioral traits refer to keystroke, signature, handwriting and voice. Each measurement, however, has its own merits and flaws, so the search for novel and innovative solutions continues. Some of the newer biometric modalities include human scent recognition, knuckle texture and fingernail recognition. Among all biometric techniques, fingerprint-based identification is the oldest and has been used in numerous applications. Traditional fingerprint identification by feature extraction has been used by institutions such as the Federal Bureau of Investigation (FBI) for identifying criminals and is the most common fingerprint identification system.
  • United States patent application no. 6241288 introduced another method of fingerprint identification using two-dimensional bitmaps instead of the conventional feature extraction.
  • An accurate reference point is located, and two-dimensional areas in the vicinity of the reference point of the input fingerprint image are correlated with the stored fingerprint recognition information to determine the similarity and thus identify the input fingerprint.
  • A common problem arising in fingerprint-based identification is the possibility of manipulating the biometric used, for example by cutting off a finger or by creating a synthetic model of a fingerprint.
  • Another issue is that finger injuries are common, ranging from minor cuts and scrapes to wounds with major damage to bones, tendons and ligaments.
  • Iris recognition uses the unique patterns in an individual's iris for identification.
  • Retinal recognition uses the unique pattern of blood vessels on an individual's retina behind the eye.
  • A typical issue in optical-feature biometric systems is the availability on the market of contact lenses in different colors and patterns that can easily mask the actual iris. Problems also occur in recognizing individuals wearing hard gas-permeable contact lenses.
  • An alternative biometric technique uses face recognition for identification, as disclosed in United States patent application no. 7440594.
  • In face recognition, facial features such as the eyes, nose, mouth and ears are determined. Information on the shape and position of each facial point is then extracted.
  • A drawback of face identification is the variable appearance of the face, such as when a subject puts on makeup, wears glasses, has a different hairstyle or holds the head in a different pose at the time of identification. It is therefore difficult to recognize an individual by facial shape alone.
  • The fact that lip features are unique to humans was confirmed by Yasuo Tsuchihashi and Kazuo Suzuki in their studies at Tokyo University (1968-1971), in which the lips of 1364 subjects of both genders, from 3 to 60 years of age, were examined.
  • WO 2008/135907 proposes a method of evaluating lip type based on freshness and sharpness of the lips and an evaluation system for implementing such a method.
  • The method uses the lip color as one of the main features and evaluates the lip type by fineness, fullness, outline neatness and wrinkleness.
  • the lip is evaluated as a whole, and not in portions, to determine types of lips in terms of shapes and size of the lips. Further, no unique biometric data is generated for a subject through the disclosed system.
  • The present invention aims to provide a novel method of identity recognition by identifying unique biometric features available on human lips.
  • the method is capable of generating identity template which is unique to each individual based on scanned images of the lips contour of a subject.
  • the present invention can be applied especially in the area of biometrics, video surveillance and video indexing in which it is important to discriminate between individuals from a still image or sequence of images.
  • The present invention also discloses a lip recognition method which has the flexibility to be carried out using conventional biometric sensors such as optical sensors, retinal sensors for iris scanning or even facial-scanning sensors. This feature allows the disclosed method to be incorporated compatibly into existing biometric scanning devices without incurring additional cost.
  • The lip biometric also allows the use of other existing sensors, such as capacitance or ultrasonic sensors, as long as the system is customized to suit the needs and usage of acquiring the lip biometric.
  • A further object of the present invention is to offer a biometric-based identity recognition approach which is complementary to other existing methods, to counterbalance the limitations found in existing methods such as fingerprint, iris or facial scanning.
  • the present invention may also expand its application to complement and aid in crime investigations and forensic sciences.
  • Lip identification provides additional features for crime investigations, especially in cases where common biometrics such as fingerprints and DNA are not found or are not sufficient. Due to the popularity of identification by fingerprints, criminals are more cautious about leaving their fingerprints at a crime scene, making it difficult to trace them.
  • Lip identification also allows more prominent features of the subject to be extracted due to the inherently better clarity and contrast of lip images, even when the subject wears lipstick or gloss. At least one of the preceding objects is met.
  • In one embodiment, the present invention includes a method of identity recognition to be performed on a computerized apparatus, comprising the steps of acquiring at least one first live scan (715) containing at least one quadrant of a lip contour of a first subject; processing the lip contour to produce a first lip print (725) which is analyzable by the computerized apparatus; generating a subject template (735) unique to the subject by computing at least one pre-defined characteristic found on the first lip print (725); comparing the subject template (735) to each of the registered templates in a database, wherein an identity is tagged to each registered template, to find a matched registered template; computing a match score (755) for every pair of compared templates; and relating the subject to the identity tagged to the matched registered template when the computed match score of the matched registered template and the subject template (735) is within a preset threshold, or vice versa.
  • The predefined characteristic on the first lip print (725) comprises any one or a combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between reference points on upper lip and/or lower lip, ratio of distances between reference points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
  • the disclosed method may further comprise the step of identifying the lip contour of the first subject on the provided first live scan (715) prior to the processing step.
  • The disclosed method further comprises the step of grouping the first lip print (725) of the first subject according to at least one pre-defined characteristic found on the lip print.
  • the pre-defined characteristic used in this grouping step is pattern of grooves on the lip print.
  • the disclosed method may include the step of registering the subject template (735) into the database and tagging the subject template (735) with available information regarding the subject when no matched stored template is found.
  • the present invention also discloses a method of enrolling a registered template into a database which can be used in conjunction with the above identification process.
  • the steps include acquiring a second live scan containing at least one quadrant of a lip contour of a second subject; processing the lip contour of the second subject to produce a second lip print which is analyzable by a computerized apparatus; generating a template unique to the second subject by computing at least one pre-defined characteristic found on the second lip print; and registering the template into the database while the identity and/or personal information of the second subject is tagged to the registered template.
  • The steps may further include identifying the lip contour of the second subject on the provided second live scan prior to the processing step and/or enhancing the quality of the second live scan prior to the processing step.
  • A step of grouping the second lip print of the second subject according to at least one pre-defined characteristic found on the second lip print may be included, while the pre-defined characteristic in the grouping step is the pattern of grooves on the lip print.
  • The predefined characteristic on the first lip print (725) comprises any one or a combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between reference points on upper lip and/or lower lip, ratio of distances between reference points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
  • A system of human identity registration and recognition is disclosed herein based on the abovementioned method, which comprises: a database storing a plurality of registered templates, wherein each registered template contains unique biometric information of the lip contour of an individual and is tagged with the identity as well as personal information of that particular individual; an input means to acquire a live scan containing at least one quadrant of a lip contour of a subject; and a data processor capable of carrying out at least one of the processes of receiving the live scan from the input means, processing the lip contour of the live scan to produce a lip print, generating a subject template (735) unique to the subject by computing at least one predefined characteristic found on the lip print, comparing the subject template (735) to each of the registered templates in the database to find a matched registered template, computing a match score for every pair of compared templates, and relating the subject to the identity tagged to the matched registered template when the computed match score of the matched registered template and the subject template (735) is within a preset threshold.
  • the data processor is capable of identifying the lip contour of the subject on the received live scan; enhancing the quality of the live scan; grouping the lip print of the subject according to at least one pre-defined characteristic found on the lip print and/or registering the subject template (735) into the database and tagging the subject template (735) with other available information regarding the subject when no matched stored template is found.
  • The predefined characteristic on the lip print comprises any one or a combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between points on upper lip and/or lower lip, ratio of distances between points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
  • The present invention also discloses a method of generating biometric information from a lip print, comprising the steps of dividing an upper lip portion or a lower lip portion of the lip print into two symmetrical quadrants; overlapping the symmetrical quadrants to acquire an overlapped quadrant print; and extracting biometric information from the overlapped quadrant print, including the distribution pattern of grooves and ridges of the lip print on the overlapped quadrant print.
  • At least one of the symmetrical quadrants may be flipped about the horizontal or vertical axis prior to the overlapping step to generate additional biometric information of the subject.
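As a rough illustration only, and not part of the patent disclosure, the quadrant division, mirroring and overlapping described above could be sketched with NumPy as follows; the binary-array representation of the lip print and the logical-AND overlap are assumptions:

```python
import numpy as np

def overlap_quadrants(lip_half):
    """Split one lip half (upper or lower) into two symmetrical
    quadrants, mirror the right quadrant so it aligns with the left,
    and overlap the pair. An extra variant with the left quadrant
    flipped about the horizontal axis yields additional biometric data.

    lip_half: 2-D binary array (1 = groove/ridge pixel).
    """
    h, w = lip_half.shape
    left = lip_half[:, : w // 2]
    right = lip_half[:, w - w // 2:]       # same width as `left`
    right_mirrored = np.fliplr(right)      # mirror about the vertical axis
    overlapped = left & right_mirrored     # shared groove pattern
    flipped_variant = np.flipud(left) & right_mirrored
    return overlapped, flipped_variant
```

The two returned arrays could then feed the feature-extraction stage as extra characteristics of the subject.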
  • Figure 1 illustrates a block diagram showing the identity recognition method disclosed in one embodiment of the present invention.
  • Figure 2 is a block diagram showing one process applied on to the lip contour to carry out in one embodiment of the present invention
  • Figure 3 is a block diagram showing one process used in enrolling a stored template into the database for identity recognition.
  • Figure 4 shows examples of groups of the lip print based on different types of grooves
  • Figure 5 shows division of the lip prints into 4 different symmetrical quadrants
  • Figure 6 shows one way to generate more biometric data based on one method disclosed in the present invention.
  • Figure 7 shows an example of the different local patterns of the feature extraction, such as horizontal, vertical and two diagonal orientations, according to an embodiment of the invention, with various reference points pre-assigned.
  • The present invention is a method of identity recognition (700) to be performed on a computerized apparatus, comprising the steps of acquiring (710) a first live scan (715) containing at least one quadrant of a lip contour of a first subject; processing (720) the lip contour to produce a first lip print (725) which is analyzable by the computerized apparatus; generating a subject template (735) unique to the subject by computing at least one pre-defined characteristic found on the first lip print (725); comparing the subject template (735) to each of the registered templates in a database, wherein an identity is tagged to each registered template, to find a matched registered template; computing a match score (755) for every pair of compared templates; and relating the subject to the identity tagged to the matched registered template when the computed match score of the matched registered template and the subject template (735) is within a preset threshold, or vice versa.
  • The identity recognition method (700) can be divided into a few different sub-steps, each with its own functionality.
  • The sub-steps are namely acquisition (710), processing (720), feature extraction and template generation (730), template matching (740) and decision making (760), as in Figure 1.
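As an illustrative sketch only, and not the disclosed implementation, the sub-steps above could be wired together as follows; every function body here is a hypothetical stand-in for the corresponding stage:

```python
import numpy as np

def process(scan):
    """Processing (720): enhancement / contour extraction (stand-in)."""
    return np.asarray(scan)

def make_template(lip_print):
    """Feature extraction and template generation (730): a binary
    template, standing in for the pre-defined characteristics."""
    return lip_print.ravel() > 0

def match_score(a, b):
    """Template matching (740): normalized Hamming distance."""
    return float(np.mean(a != b))

def recognize(live_scan, database, threshold=0.25):
    """Acquisition (710) is assumed done; run the remaining sub-steps
    and make the decision (760) against the preset threshold."""
    subject_template = make_template(process(live_scan))
    best_id, best = None, float("inf")
    for identity, registered in database.items():
        score = match_score(subject_template, registered)
        if score < best:
            best_id, best = identity, score
    return best_id if best <= threshold else None
```

The threshold value of 0.25 is purely illustrative; as the text notes later, it would be determined empirically per application.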
  • the identity recognition process (700) is performed on a computerized apparatus while the user database (150) is connected to the computerized apparatus to feed the stored template for the apparatus.
  • A live scan (715), or an image containing at least one quadrant of the lip contour, can derive from a biometric sensor such as an iris sensor, a face sensor or any other existing biometric sensor, as long as the sensor captures an image of the lip contour at a resolution sufficient for processing in the subsequent steps.
  • the sensors referred herein can be any image capturing devices which are able to capture a digital image of the lips including other various sensor technologies like optical imaging, thermal imaging, ultrasonic imaging and passive capacitance or active capacitance imaging.
  • The image quality of the live scan may be enhanced by the computerized apparatus automatically, or manually by a user of the present invention. More preferably, the enhancing procedure focuses mainly on the lip contour of the live scan (715) to expedite the process flow, since enhancing unnecessary areas of the live scan would burden the computerized apparatus with more processing work.
  • the present invention may further involve a step of identifying the lip contour of the first subject on the provided first live scan (715) prior to the processing step (720).
  • identification may be performed by classifying pixels of the first live scan (715) into color information and identifying the lip contour via the color information.
  • the lip identification can be performed using dynamic programming algorithms such as Viterbi algorithm to extract the lip's contour or envelope or other techniques such as mouth corner detection and feature points representation to detect and highlight the lip contour.
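As a hedged sketch of the color-information route only (the Viterbi and feature-point alternatives are not shown), lip pixels could be separated from skin by their red dominance; the margin value is purely illustrative:

```python
import numpy as np

def lip_mask(rgb, red_margin=20):
    """Classify pixels by color information alone: lip pixels tend to
    have red clearly dominating green. rgb is an H x W x 3 uint8 array;
    returns a boolean mask of candidate lip-contour pixels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    return (r - g) > red_margin
```

A real system would refine such a mask with contour extraction before producing the lip print.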
  • the live scan (715) enhancing may not be needed if the live scan (715) provided is in good resolution and quality to be analyzed by the computerized apparatus.
  • The enhancing procedures refer to operations on images at the lowest level of abstraction that may be useful in a variety of situations. Such improvements made to the live scan may involve suppressing undesired distortions, enhancing some image features which are important for subsequent processing, reducing artifacts on the live scan due to the sensors used, performing gradient filtering to enhance the contour of the lips and mouth corners, or any combination thereof.
  • the processed lip contour or live scan (715) is herein named as the first lip print (725).
  • the first lip print (725) of the first subject may have to be standardized in terms of orientation, size and so on prior to generating the subject template (735).
  • The first lip print is used for extracting a collection of biometric features unique to the first subject.
  • A single biometric feature may be insufficient to represent the uniqueness of a subject; thus a combination of biometric characteristics is preferably employed to enhance the significance of the biometric uniqueness for subject representation.
  • the computerized apparatus computes the extracted unique biometrical characteristics into a subject template.
  • The subject template is a digital format in which the biometric features of the first subject are recorded, preferably in binary numeric form to facilitate the later matching step. More preferably, the generated subject template is capable of being used for quadrant lip matching instead of requiring the whole lip contour.
  • The biometric characteristics to be extracted from the first lip print (725) are pre-defined, wherein the predefined characteristic on the first lip print (725) comprises any one or a combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between reference points on upper lip and/or lower lip, ratio of distances between reference points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations. It was found by the inventors of the present invention that the first lip print need not be whole in order to carry out the disclosed method, as the full contour of the lips may not always be available in the live scan in reality.
  • The present invention can perform identity recognition based on a partial lip contour, preferably a quadrant of the lip contour. A lip print of that specified contour will be generated, followed by characteristic extraction, and the corresponding quadrants will be compared. In the best case, where the entire lip is available, features of the entire lip will be compared for an optimal result, while comparison between features of quadrants helps strengthen the identification method.
  • The disclosed method may further comprise the step of grouping the first lip print (725) of the first subject according to at least one pre-defined characteristic found on the lip print; more preferably, the pre-defined characteristic in the grouping step is the pattern of grooves on the lip print, as in the examples illustrated in Figure 2.
  • lip prints of each individual subject may not be treated in a same manner in the disclosed method.
  • biometrical characteristics of each individual lip print may be extracted based on some major predetermined features.
  • A lip print with rectangular or diamond-shaped grooves may preferably require measurement of the area enclosed by the grooves; a mean can then be computed over all the measured enclosed areas and used as a biometric characteristic.
  • the disclosed method classifies the first lip print (725) into separate categories before extracting the biometrical characteristics, in which the biometrical characteristics to be extracted may vary from one category to another, to generate the subject template (735).
  • The lip classification (220) is introduced to allow more accurate and precise biometric characteristics to be extracted from the first lip print (725) based on the different categories of lips. The classification may typically use the five major categories shown in Figure 4: i) diamond grooves (310), ii) long vertical grooves (320), iii) short vertical grooves (330), iv) branching grooves (340) and v) rectangular grooves (350).
  • The discrimination of lips into different categories is based on the furrow characteristics of the lips. This is important as the biometric characteristics to be extracted from a first lip print (725) belonging to one category may differ from another category. For instance, lips belonging to the branching grooves (340) category have a variety of bifurcations and ridges; therefore, features involving the bifurcations and ridges should be given higher priority for extraction compared to other features.
  • a lip image may be revealed as a surface with visible elements of lines representing furrows.
  • The in-depth analysis of this local pattern allows identification since it is unique for every individual.
  • the significant local patterns of the lips may be extracted in four orientations: vertical, horizontal and two diagonal orientations as shown in Figure 6 while different preset reference points can be assigned onto the first lip print (725) to generate the necessary biometrical information.
  • These patterns are amongst the fundamental patterns but it is not restricted to the aforementioned ones and may include other patterns that are extracted with other orientations as well.
  • Other shape features that can be extracted includes the concavity of the upper lip and lower lip, the thickness of the lip, the distance or ratio between the points on the upper lip and lower lip or between mouth corners, color information, the thickness or thinness of the furrows, scars or clefts and lip area.
  • The bifurcations or divergence features may also be extracted. All of the extracted features are then combined to generate a subject template (735) which represents the unique identity of a subject. Possibly, the biometric characteristics extraction process (230) is applied to all 4 different quadrants as well as the entire lip, resulting in a total of 5 different templates (135).
  • the main advantage of having 5 different templates (135) is the ability to perform identification even in cases where the lip image is not entirely available.
  • Template matching can be performed in situations where only part of the lip image is available or where only some of the quadrants are available. The features will be compared for the corresponding quadrants.
  • a comparing or matching step (740) is performed between the subject template (735) and the registered template (135) to find a match.
  • the subject template (735) is digitally compared against all of the existing registered template (135) which have been registered and stored in the database (150).
  • The match score (755) is computed for every pair of template matching using various matching algorithm(s), which may include the Hamming distance or the Bhattacharyya distance, but is not limited to the aforementioned distance techniques.
  • The match score (755) is then fed into the decision making (760), whereby the score is compared against a threshold value. If the match score (755) is within the allowed threshold value, the identification of the subject is decided to be a match (775) with the compared registered template (135) in the database (150). If the match score (755) is outside the allowed threshold value, the compared subject template is decided to be a mismatch (765) with the compared registered template available in the database (150). This is useful for any specified use or application, such as restricted entrance to a secured area or verification in conjunction with a smart card.
  • the threshold value can be of any value predetermined empirically depending on the type of applications and the sensitivity of the identification system.
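The Hamming-distance scoring and threshold decision described above can be sketched as follows; this is a simplified illustration, with the normalization and the "match"/"mismatch" labels as assumptions:

```python
import numpy as np

def hamming_score(template_a, template_b):
    """Match score (755): the fraction of positions at which two
    equal-length binary templates differ (0.0 means identical)."""
    a = np.asarray(template_a, dtype=bool)
    b = np.asarray(template_b, dtype=bool)
    return float(np.count_nonzero(a != b)) / a.size

def decide(score, threshold):
    """Decision making (760): match (775) if the score is within the
    empirically predetermined threshold, else mismatch (765)."""
    return "match" if score <= threshold else "mismatch"
```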
  • the comparing step (740) can be conducted by matching just one quadrant of the lip print (725) instead of using entire lip.
  • All of the registered templates (135) in the database (150) are tagged with personal information related to the registered individual, such as identification number, certain birth details, addresses, and so on.
  • The identity of the subject is deemed to be related to the matched registered individual, and so is the personal information, thus completing the identity recognition process.
  • The present invention also offers a method of enrolling a registered template into a database, which is preferably used in the identification method.
  • The method of enrolling a registered template comprises the steps of acquiring a second live scan (115) containing at least one quadrant of a lip contour of a second subject; processing the lip contour of the second subject to produce a second lip print (125) which is analyzable by a computerized apparatus; generating a template (135) unique to the second subject by computing at least one pre-defined characteristic found on the second lip print; and registering the template into the database (150) while the identity and/or personal information of the second subject is tagged to the registered template.
  • Most of the steps performed in the template registration method are similar to those of the recognition method described in the foregoing, but with some modifications.
  • the method of enrolling registered template (100) has been described with an overall process flow as illustrated in Figure 3.
  • The acquiring step (110) deals with acquiring a second image containing the lip information in the form of a second live scan (115).
  • sensors similar to iris sensors or face sensors may be used in which the focus is on the human face.
  • the sensors may refer to any other image capturing devices including optical imaging, thermal imaging, ultrasonic imaging and passive capacitance or active capacitance imaging devices.
  • The processing step (120) in this embodiment refers to operations on the second live scan (115) aiming to improve the second live scan by suppressing undesired distortions, enhancing some image features which are important for subsequent processing, or both. Artifacts derived from the sensor may be reduced in this step too.
  • The processing step (120) focuses on enhancing the features of the lips, and among the processing works that may be carried out in this phase is gradient filtering to enhance the contour of the lips and mouth corners.
  • Common image transforms, simplification techniques to remove unwanted pixels, and histogram equalization are available to enhance the quality of the second live scan (115) in order to ease its shape extraction.
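Of the enhancement techniques named above, histogram equalization is simple enough to sketch; this NumPy version is an illustration rather than the patented processing step, and it spreads the grey levels of an 8-bit scan to improve contrast:

```python
import numpy as np

def equalize_histogram(gray):
    """Global histogram equalization of an 8-bit grayscale scan.
    Maps each grey level through the normalized cumulative histogram
    so the output spans the full 0-255 range (assumes the input is
    not a single flat grey level)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return lut.astype(np.uint8)[gray]
```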
  • This embodiment may include the step of identifying or detecting the lip contour of the second subject on the provided second live scan (115) prior to the processing step.
  • This approach can greatly reduce the data to be processed as it only focuses on the relevant area, lip contour, to produce the second lip print (125).
  • the lip identifying step (210) locates the lips and further processing to enhance the lips area may be applied herein.
  • The lip identifying step (210) may classify pixels into color information and identify the lip area based on the color information, or apply dynamic programming algorithms such as the Viterbi algorithm to extract the lip's contour or envelope. Other techniques such as mouth corner detection and feature points representation may also be applied to detect and highlight the lip area. It may also be useful to standardize the orientation of the detected lip to promote systematic computation when generating the template (135) in the subsequent step.
  • the enrolling method further comprises the step of grouping the second lip print of the second subject according to at least one predefined characteristic found on the second lip print.
  • the grouping step, as in the lip grouping (220) of figure 2, classifies the second lip print (125) into separate categories before extracting the predefined biometrical characteristics on the second lip print (125) to generate the template (135), and the biometrical characteristics to be extracted may vary from one group to another. Through the grouping step, more accurate and precise biometrical characteristics are extracted from the second lip print (125) to represent the uniqueness of the second subject. Likewise in the recognition method, the preferred embodiment of the enrolling method employs a few major categories to group the second lip prints based on the pattern of the grooves, namely diamond grooves (310), long vertical grooves (320), short vertical grooves (330), branching grooves (340), rectangular grooves (350) or any combinations thereof.
  • the discrimination of lips into different categories is based on the furrow characteristics on the lips. This phase is important as the features to be extracted from the lips belonging to one category may differ from another category.
  • lips belonging to the branching grooves (340) category have a variety of bifurcations and ridges; therefore features involving the bifurcations and ridges should be given higher priority to be extracted as compared to other features.
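The grouping and per-category feature-priority logic described above can be sketched as follows. The groove counts, category names as strings, and the specific priority lists are assumptions made for illustration; only the five groove categories (310-350) and the branching-category priority on bifurcations and ridges come from the text.

```python
# Assign a lip print to one of the groove categories of Figure 4, then
# look up which features that category prioritises for extraction.

GROUPS = ("diamond", "long_vertical", "short_vertical",
          "branching", "rectangular")

def classify_lip_print(groove_counts):
    """Pick the category whose groove type dominates the print.
    `groove_counts` maps a group name to how many such grooves were found."""
    return max(GROUPS, key=lambda g: groove_counts.get(g, 0))

def features_to_extract(group):
    """Per-category feature priority: bifurcations and ridges come first
    for the branching-grooves category (340), as the text describes."""
    if group == "branching":
        return ["bifurcations", "ridges", "thickness", "color"]
    return ["thickness", "color", "groove_pattern"]

counts = {"branching": 12, "long_vertical": 3, "diamond": 1}
group = classify_lip_print(counts)
print(group, features_to_extract(group)[0])  # branching bifurcations
```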
  • the template can be generated either from the whole second lip print (125) or at least one quadrant of the second lip print (125).
  • the disclosed enrolling method allows more variations of significant biometrical characteristics to be extracted from the second lip print (125).
  • the second lip print (125) is divided into 4 quadrants; 2 symmetrical quadrants on the upper lip (410, 420) and 2 symmetrical quadrants on the lower lip (430, 440) as shown in Figure 5.
  • Quadrant 1 (410) refers to the upper right lip
  • quadrant 2 (420) refers to the upper left lip.
  • quadrant 3 (430) refers to the lower left lip and quadrant 4 (440) refers to the lower right lip while quadrant 5 (450) on the other hand refers to the entire lip.
  • the division of the lip into quadrants allows the computation of additional features which are derived from the difference of features between quadrants as shown in Figure 6.
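The quadrant division of Figure 5 described in the bullets above can be sketched as follows, assuming a rectangular lip-print crop split at its horizontal and vertical midlines (the midline split is an assumption; the disclosure only defines the five quadrants).

```python
# Divide a lip print into the four symmetrical quadrants (410-440),
# with the entire print serving as quadrant 5 (450).

def divide_into_quadrants(lip_print):
    """Return quadrants 1-5 of a 2-D lip print (list of rows):
    1 = upper right, 2 = upper left, 3 = lower left, 4 = lower right,
    5 = the entire print."""
    h, w = len(lip_print), len(lip_print[0])
    mid_y, mid_x = h // 2, w // 2
    upper, lower = lip_print[:mid_y], lip_print[mid_y:]
    q1 = [row[mid_x:] for row in upper]  # upper right (410)
    q2 = [row[:mid_x] for row in upper]  # upper left  (420)
    q3 = [row[:mid_x] for row in lower]  # lower left  (430)
    q4 = [row[mid_x:] for row in lower]  # lower right (440)
    return q1, q2, q3, q4, lip_print     # quadrant 5  (450)

print_4x4 = [[1, 2, 3, 4],
             [5, 6, 7, 8],
             [9, 10, 11, 12],
             [13, 14, 15, 16]]
q1, q2, q3, q4, q5 = divide_into_quadrants(print_4x4)
print(q1)  # [[3, 4], [7, 8]]
print(q3)  # [[9, 10], [13, 14]]
```

Working per quadrant is what allows matching even when only a partial lip image is available.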
  • the predefined biometrical characteristics that can be extracted from the second lip print (125) comprise any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between reference points on upper lip and/or lower lip, ratio of distances between reference points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
  • the second lip print (125) may be revealed as a surface with visible elements of lines representing furrows.
  • the in-depth analysis of this local pattern allows identification since it is unique for every individual.
  • the significant local patterns of the lips may be extracted in four orientations: vertical, horizontal and two diagonal orientations as shown in Figure 7. These patterns are amongst the fundamental patterns, but extraction is not restricted to the aforementioned ones and may include other patterns that are extracted at other orientations as well.
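A simple way to picture the four-orientation extraction above is to sample the pixel sequence through a patch centre in each orientation. This is a deliberately reduced sketch; a real system would feed such directional profiles to a texture or groove descriptor, and the function name is an assumption.

```python
# Sample local patterns in the four orientations of Figure 7:
# horizontal, vertical and the two diagonals.

def orientation_profiles(patch):
    """Return the centre row, centre column and both diagonals of a
    square 2-D patch (list of rows)."""
    n = len(patch)
    c = n // 2
    return {
        "horizontal": patch[c][:],
        "vertical": [patch[y][c] for y in range(n)],
        "diagonal_main": [patch[i][i] for i in range(n)],
        "diagonal_anti": [patch[i][n - 1 - i] for i in range(n)],
    }

patch = [[1, 0, 0],
         [0, 1, 0],
         [2, 0, 1]]
p = orientation_profiles(patch)
print(p["diagonal_main"])  # [1, 1, 1]
print(p["diagonal_anti"])  # [0, 1, 2]
```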
  • Other shape features that can be extracted include the concavity of the upper lip and lower lip, the thickness of the lip, the distance or ratio between the points on the upper lip and lower lip or between mouth corners, color information, the thickness or thinness of the furrows, scars or clefts and lip area.
  • the bifurcations or divergence features may be extracted. All of these features extracted are then combined to generate the template (135) which represents the unique identity of the second subject.
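The combination of extracted features into one template, as described above, can be sketched as a fixed-order feature vector. The particular feature names and the tuple representation are assumptions; the point is only that heterogeneous features are combined into a single comparable template.

```python
# Combine several extracted scalar features into one template so that
# two templates can be compared position by position.

FEATURE_ORDER = ("upper_thickness", "lower_thickness",
                 "corner_distance", "bifurcation_count", "cleft_thickness")

def build_template(features):
    """Concatenate named scalar features into a fixed-order tuple.
    Raises ValueError when a required feature is missing."""
    missing = [k for k in FEATURE_ORDER if k not in features]
    if missing:
        raise ValueError(f"missing features: {missing}")
    return tuple(features[k] for k in FEATURE_ORDER)

template = build_template({"upper_thickness": 8.0, "lower_thickness": 11.5,
                           "corner_distance": 52.0, "bifurcation_count": 7,
                           "cleft_thickness": 2.5})
print(template)  # (8.0, 11.5, 52.0, 7, 2.5)
```

Using a combination rather than a single feature is what the text argues gives the template enough discriminative power to represent the subject uniquely.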
  • a single feature may be insufficient to represent the uniqueness of biometrical characteristics of the second subject and thus a combination of features is preferably used in the present invention to enhance the significance of the subject representation.
  • the template contains unique biometrical characteristics of the lip contour of the second subject.
  • the registration (140) step further stores the template into the database (150) while associating or tagging personal information of the second subject to the stored template (135).
  • the template (135) stored for the second subject is associated with personal information of the subject which may include, but is not limited to, identification number, birth details, address and so on.
  • the steps of the enrolling method are repeated for every other individual and the templates are generated. All the final output is stored into a database (150) to be used for identity recognition.
  • Another embodiment of the present invention involves a method of generating biometric information from a lip print comprising the steps of dividing an upper lip portion or a lower lip portion of the lip print into two symmetrical quadrants; overlapping the symmetrical quadrants to acquire an overlapped quadrant print; extracting biometric information from the overlapped quadrant print including the distribution pattern of grooves and ridges of the lip print on the overlapped quadrant print. It is important to note that the disclosed method of generating biometrical information is applicable in both the presently disclosed method of identity recognition and the method of providing the registered template.
  • the present invention employs a method to divide the lip into 4 quadrants; 2 symmetrical quadrants on the upper lip (410, 420) and 2 symmetrical quadrants on the lower lip (430, 440) as shown in Figure 5.
  • Quadrant 1 (410) refers to the upper right lip
  • quadrant 2 (420) refers to the upper left lip
  • quadrant 3 (430) refers to the lower left lip
  • quadrant 4 (440) refers to the lower right lip.
  • Quadrant 5 (450) on the other hand refers to the entire lip.
  • the division of the lip into quadrants allows the computation of additional features which are derived from the difference of features between quadrants as depicted in Figure 6.
  • the digital difference can be computed by overlapping 2 quadrants on top of one another (520) and performing the feature extraction (530) on the overlapped lip images.
  • Prior to extracting the digital difference, one of the quadrants can be flipped, for example horizontally (520), first for a better overlapping result. It is possible to have one of the symmetrical quadrants flipped at either the horizontal axis or the vertical axis prior to the overlapping step because overlapping the quadrants either way shall generate biometrical information of the first subject relying on the extracting algorithms used. Nevertheless, this flipping step is not mandatory.
  • the ability to extract (530) the digital difference (510) on a single biometric and not pair-based biometric via the quadrants method is another key merit of the proposed invention.
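The flip-and-overlap step (520) and the digital-difference extraction (530) described above can be sketched as follows; using a per-pixel absolute difference as the "digital difference" is an assumption for illustration.

```python
# One symmetrical quadrant is mirrored about its vertical axis, laid
# over its counterpart, and the per-pixel difference of the overlap
# becomes an additional feature map.

def flip_horizontal(quadrant):
    """Mirror a 2-D quadrant about its vertical axis."""
    return [row[::-1] for row in quadrant]

def digital_difference(quad_a, quad_b):
    """Per-pixel absolute difference of two same-size quadrants."""
    return [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(quad_a, quad_b)]

left = [[1, 2],
        [3, 4]]
right = [[2, 1],
         [5, 3]]
overlapped = digital_difference(flip_horizontal(right), left)
print(overlapped)  # [[0, 0], [0, 1]]
```

Because both quadrants come from the same lip, this yields a difference feature from a single biometric rather than requiring a pair of biometrics, which is the merit the bullet above points out.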
  • the method of generating biometric information from a lip print comprising the steps of dividing a lip portion at the vertical axis into two symmetrical quadrants namely a left half portion and a right half portion; overlapping the symmetrical quadrants to acquire an overlapped quadrant print: extracting biometric information from the overlapped quadrant print including distribution pattern of grooves and ridges of the lip print on the overlapped quadrant print.
  • one of the symmetrical quadrants may be optionally flipped at horizontal axis or vertical axis prior to the overlapping step.
  • the present invention also discloses herein a system of human identity registering and recognition comprising a database storing a plurality of registered templates, in which each registered template contains unique biometric information of the lip contour of an individual and each registered template is tagged with the identity as well as personal information of that individual; an input means to acquire a live scan containing at least one quadrant of a lip contour of a subject; a data processor capable of carrying out at least one of the processes of receiving the live scan from the input means, processing the lip contour of the live scan to produce a lip print, generating a subject template (735) unique to the subject based on unique biometric features on the lip print by computing at least one pre-defined characteristic found on the lip print; comparing the subject template (735) to each of the other registered templates in a database to find a matched registered template, in which an identity is tagged to each registered template; computing a match score for every pair of compared templates; and relating the subject to the tagged identity belonging to the matched registered template when the computed match score between the matched registered template and the subject template (735) is within a preset threshold.
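The matching stage described above (comparing the subject template (735) against every registered template, scoring each pair, and accepting only within a preset threshold) can be sketched as follows. Scoring by mean absolute difference over a numeric template is an assumption; the disclosure does not fix a particular scoring function.

```python
# Compare a subject template against a database of registered templates
# and return the tagged identity of the best match within a threshold.

def match_score(template_a, template_b):
    """Lower is better: mean absolute difference between two
    equal-length feature templates."""
    return sum(abs(a - b) for a, b in zip(template_a, template_b)) / len(template_a)

def recognise(subject_template, database, threshold):
    """Return the identity of the best-matching registered template,
    or None when no score is within the threshold."""
    best_id, best_score = None, float("inf")
    for identity, registered in database.items():
        score = match_score(subject_template, registered)
        if score < best_score:
            best_id, best_score = identity, score
    return best_id if best_score <= threshold else None

db = {"alice": (8.0, 11.5, 52.0), "bob": (6.0, 9.0, 40.0)}
print(recognise((8.2, 11.0, 52.5), db, threshold=1.0))  # alice
print(recognise((1.0, 1.0, 1.0), db, threshold=1.0))    # None
```

The None branch corresponds to the case where no matched stored template is found, at which point the system may register the subject template and tag it with the subject's information.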
  • the database as known in the art can be any means capable of storing digital information, such as a hard disc drive, solid state drive, or even an optical disc, which is connected to the data processor for storing the registered template (135) and feeding the registered template (135) for comparison purposes.
  • the database may be placed in a remotely located server such that the data processor is connected to this server via a wired or wireless network.
  • the data processor in the present invention shall be capable of optionally conducting one or more other tasks, like identifying the lip contour of the subject on the received live scan, enhancing the quality of the live scan, grouping the lip print of the subject according to at least one pre-defined characteristic found on the lip print, registering the subject template (735) into the database and tagging the subject template (735) with other available information regarding the subject when no matched stored template is found, in order to effectively carry out the identity recognition and registering.
  • the tasks described above cover basic operations
  • the predefined characteristic on the lip print (725) comprises any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between points on upper lip and/or lower lip, ratio of distances between points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
  • the input means is an image capturing device such as a camera or video recording apparatus.
  • devices like flash drives, optical discs, non-volatile memory cards and so on containing images that can be used as the live scan should be considered input means as well in the present invention.
  • Other devices like keyboard should not be ignored as an input means particularly for entering personal information for registering individual template.
  • the user interface offers the necessary flexibility for the user to manipulate the system particularly adjusting different parameters to run the disclosed system at various sensitivity and reliability.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

A method of identity recognition via human (subject) lip images is proposed. The method includes registration (140) of templates (135) of the lip images for known subjects for later matching with lip images from subjects for identification, by digitally matching (740) with the registered templates (135). The lip portions are divided in four quadrants (410-440) for feature extraction which permits template matching (740) even when only partial lip images are available for identification. The method includes classifying (220) the lips into categories, for defined characteristic features to be extracted, where the different categories of lips (310-350) contain different prominent features that are unique for representation. The feature extraction from the quadrants may be done in different orientations (610-640). The acquisition of the lip images (110,710) is by available image sensor technologies such as optical imaging, thermal imaging, ultrasonic imaging, passive capacitance and active capacitance imaging.

Description

A METHOD FOR IDENTITY RECOGNITION BASED ON LIP IMAGE
FIELD OF INVENTION
The present invention relates to a method of identity recognition via human lip images, relying on the fact that the human lips have a unique trait which is exclusive between individuals.
BACKGROUND OF THE INVENTION
Physiological biometrics are related to intrinsic body features of a human, which include fingerprints, irises, pupils, palm prints, face and DNA, whereas behavioral traits refer to keystroke, signature, handwriting and voice. Each measurement however has its own merits and flaws. Thus, the search for novel and innovative solutions continues. Some of the new biometric modalities include human scent recognition, knuckle texture and fingernail recognition. Amongst all of the biometric techniques, fingerprint-based identification is the oldest and has been used in numerous applications. Traditional fingerprint identification by feature extraction has been used by institutions like the Federal Bureau of Investigation (FBI) for identifying criminals and is the most common fingerprint identification system.
United States patent application no. 6241288 has introduced another method of fingerprint identification using two-dimensional bitmaps instead of conventional feature extraction. An accurate reference point is located, and two-dimensional areas in the vicinity of the reference point of the input image of the fingerprint are correlated with the stored fingerprint recognition information to determine the similarity and thus identify the input fingerprint. However, a common problem arising in fingerprint-based identification is the possibility of manipulating the biometric used, for example by cutting off one's finger or by creating a synthetic model of a fingerprint. Thus there has been active research on methods and mechanisms to detect the 'liveness' of the fingerprint. Another issue is that finger injuries are common, ranging from minor cuts and scrapes to wounds with major damage to bones, tendons and ligaments. Thus, until the injury is healed, fingerprint identification is not applicable. Another popular biometric technique these days is optical feature recognition, such as for the iris and the retina. The iris is the colored muscle in front of the eye, and iris recognition uses the unique patterns in an individual's iris for identification. Retinal recognition, on the other hand, uses the unique pattern of blood vessels on an individual's retina behind the eye. A typical issue in optical feature biometric systems is the availability of different colors and patterns of contact lenses in the market that can easily mask the actual iris color. Also, problems occur in recognizing individuals wearing hard gas permeable contact lenses.
An alternative biometric technique is using face recognition for identification, as disclosed in another United States patent application no. 7440594. In face recognition, the facial features such as eyes, nose, mouth and ears are determined. Then, information on the shape and position of each facial point is extracted. A common problem in face identification is the indefinite information of the face, such as when a subject puts on makeup, wears glasses, has a different hairstyle or has a different head pose at the time of identification. Therefore, it is difficult to recognize an individual by facial shape alone. The fact that lip features are unique to humans has been confirmed by Yasuo Tsuchihashi and Kazuo Suzuki in their studies at Tokyo University (1968-1971), in which an examination of the lips of 1364 subjects from 3 to 60 years of age from both genders was conducted. The research findings proved that lip characteristics are unique and unchangeable for each examined person, similar to the distinctive characteristics of a fingerprint. Yet the most related system in detecting lip shape has been found in the automatic speech recognition (ASR) domain, in which a computerized apparatus predicts the spoken words according to the shape and movement of the lips using some algorithms. Nevertheless, such a system is not capable of producing biometric information regarding the user beyond its intended usage as speech recognition.
Further, International patent application no. WO 2008/135907 proposes a method of evaluating lip type based on freshness and sharpness of the lips and an evaluation system for implementing such a method. The method uses the lip color as one of the main features and evaluates the lip type by fineness, fullness, outline neatness and wrinkliness. The lip is evaluated as a whole, and not in portions, to determine types of lips in terms of shape and size of the lips. Further, no unique biometric data is generated for a subject through the disclosed system.
It is a known fact that each measurement or biometric technique has its own merits and faults. Thus, it is vital to find innovative solutions to complement the existing ones in order to eradicate the drawbacks.
SUMMARY OF THE INVENTION
The present invention aims to provide a novel method of identity recognition by identifying unique biometric features available on human lips. More specifically, the method is capable of generating an identity template which is unique to each individual based on scanned images of the lip contour of a subject. The present invention can be applied especially in the areas of biometrics, video surveillance and video indexing, in which it is important to discriminate between individuals from a still image or sequence of images. The present invention also discloses a lip recognition method which has the flexibility to be carried out using conventional biometric sensors such as an optical sensor, a retinal sensor for iris scanning or even a sensor for facial scanning. This feature allows the disclosed method to be compatibly incorporated into existing biometric scanning devices without incurring additional cost. Also, the lip biometric allows the use of other existing sensors such as capacitance or ultrasonic sensors as long as the system designed is customized to suit the needs and usage of acquiring the lip biometric.
A further object of the present invention includes offering a biometric-based identity recognition approach which is complementary to other existing methods to counterbalance the limitations found in the existing methods such as fingerprint, iris scanning or facial scanning. The present invention may also expand its application to complement and aid crime investigations and forensic sciences. In the area of forensic science, lip identification allows additional features for crime investigations, especially in cases where the common biometrics such as fingerprints and DNA are not found or are not sufficient. Due to the popularity of identification by fingerprints, criminals are more cautious about leaving behind their fingerprints at a crime scene, therefore making it difficult to trace them. In contrast to the face biometric, in which a subject wearing makeup raises an issue in identification, lip identification allows more prominent features of the subject to be extracted due to inherently better clarity and contrast of lip images, even when the subject wears lipstick or gloss. At least one of the preceding objects is met,
in whole or in part, by the present invention, in which one of the embodiments of the present invention includes a method of identity recognition to be performed on a computerized apparatus comprising the steps of acquiring at least one first live scan (715) containing at least one quadrant of a lip contour of a first subject; processing the lip contour to produce a first lip print (725) which is analyzable by the computerized apparatus; generating a subject template (735) unique to the subject by computing at least one pre-defined characteristic found on the first lip print (725); comparing the subject template (735) to each of the other registered templates in a database to find a matched registered template, in which an identity is tagged to each registered template; computing a match score (755) for every pair of compared templates; and relating the subject to the tagged identity belonging to the matched registered template when the computed match score of the matched registered template and the subject template (735) is within a preset threshold or vice versa. In the preferred embodiment, the predefined characteristic on the first lip print (725) comprises any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between reference points on upper lip and/or lower lip, ratio of distances between reference points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
Preferably, the disclosed method may further comprise the step of identifying the lip contour of the first subject on the provided first live scan (715) prior to the processing step. In another aspect, the disclosed method further comprises the step of grouping the first lip print (725) of the first subject according to at least one pre-defined characteristic found on the lip print. Preferably, the pre-defined characteristic used in this grouping step is the pattern of grooves on the lip print.
In another aspect, the disclosed method may include the step of registering the subject template (735) into the database and tagging the subject template (735) with available information regarding the subject when no matched stored template is found.
Additionally, the present invention also discloses a method of enrolling a registered template into a database which can be used in conjunction with the above identification process. In this embodiment, the steps include acquiring a second live scan containing at least one quadrant of a lip contour of a second subject; processing the lip contour of the second subject to produce a second lip print which is analyzable by a computerized apparatus; generating a template unique to the second subject by computing at least one pre-defined characteristic found on the second lip print; and registering the template into the database while the identity and/or personal information of the second subject is tagged to the registered template.
In the method of enrolling, the steps may further include identifying the lip contour of the second subject on the provided second live scan prior to the processing step and/or the step of enhancing the quality of the second live scan prior to the processing step.
In still another embodiment of the enrolling method, a step of grouping the second lip print of the second subject according to at least one pre-defined characteristic found on the second lip print may be included, while the pre-defined characteristic in the grouping step is the pattern of grooves on the lip print.
Preferably, in the enrolling method, the predefined characteristic on the second lip print comprises any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between reference points on upper lip and/or lower lip, ratio of distances between reference points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
Besides, a system of human identity registering and recognition is disclosed herein based on the abovementioned method, which comprises a database storing a plurality of registered templates, in which each registered template contains unique biometric information of the lip contour of an individual and each registered template is tagged with the identity as well as personal information of that particular individual; an input means to acquire a live scan containing at least one quadrant of a lip contour of a subject; a data processor capable of carrying out at least one of the processes of receiving the live scan from the input means, processing the lip contour of the live scan to produce a lip print, generating a subject template (735) unique to the subject based on unique biometric features on the lip print by computing at least one predefined characteristic found on the lip print; comparing the subject template (735) to each of the other registered templates in a database to find a matched registered template, in which an identity is tagged to each registered template; computing a match score for every pair of compared templates; and relating the subject to the tagged identity belonging to the matched registered template when the computed match score of the matched registered template is within a preset threshold or vice versa and providing an outcome; and a user interface that receives the provided outcome and allows a user to manually manipulate the database, the input means and the data processor. In addition, the data processor is capable of identifying the lip contour of the subject on the received live scan; enhancing the quality of the live scan; grouping the lip print of the subject according to at least one pre-defined characteristic found on the lip print and/or registering the subject template (735) into the database and tagging the subject template (735) with other available information regarding the subject when no matched stored template is found.
In a further aspect, the predefined characteristic on the lip print comprises any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between points on upper lip and/or lower lip, ratio of distances between points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
The present invention also discloses a method of generating biometric information from a lip print comprising the steps of dividing an upper lip portion or a lower lip portion of the lip print into two symmetrical quadrants; overlapping the symmetrical quadrants to acquire an overlapped quadrant print; extracting biometric information from the overlapped quadrant print including the distribution pattern of grooves and ridges of the lip print on the overlapped quadrant print.
In a further embodiment, at least one of the symmetrical quadrants may be flipped at the horizontal axis or vertical axis prior to the overlapping step to generate additional biometric information of the subject.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following drawings, the same reference numbers generally refer to the same parts throughout. The drawings are not necessarily to scale; instead emphasis is placed upon illustrating the principles of the invention. The various embodiments and advantages of the present invention will be more fully understood when considered with respect to the following detailed description, appended claims and accompanying drawings wherein: Figure 1 illustrates a block diagram showing the identity recognition method disclosed in one embodiment of the present invention;
Figure 2 is a block diagram showing one process applied on to the lip contour to carry out in one embodiment of the present invention;
Figure 3 is a block diagram showing one process used in recruiting a stored template into the database for identity recognition; Figure 4 shows examples of groups of the lip print based on different types of grooves;
Figure 5 shows division of the lip prints into 4 different symmetrical quadrants; Figure 6 shows one way to generate more biometric data based on one method disclosed in the present invention; and
Figure 7 shows an example of the different local patterns of the feature extraction, such as horizontal, vertical and two diagonal orientations, according to an embodiment of the invention with various reference points pre-assigned.
DETAILED DESCRIPTION OF THE INVENTION
The following description presents several preferred embodiments of the present invention in sufficient detail such that those skilled in the art can make and use the invention.
Before describing in detail embodiments that are in accordance with the present invention, it should be noted that all of the figures are drawn for ease of explanation of the basic teachings of the present invention only. The extension of the figures with respect to the number, position, relationship and dimension of the parts of the preferred embodiment will be within the skill of the art after the following teachings of the present invention have been read and understood. Further, the exact dimensions and dimensional proportions to conform to specific force, weight, strength and similar requirements will likewise be within the skill of the art after the following teachings of the present invention have been read and understood.
In general, as shown in figure 1, the present invention is a method of identity recognition (700) to be performed on a computerized apparatus comprising the steps of acquiring (710) a first live scan (715) containing at least one quadrant of a lip contour of a first subject; processing (720) the lip contour to produce a first lip print (725) which is analyzable by the computerized apparatus; generating a subject template (735) unique to the subject by computing at least one pre-defined characteristic found on the first lip print (725); comparing the subject template (735) to each of the other registered templates in a database to find a matched registered template, in which an identity is tagged to each registered template; computing a match score (755) for every pair of compared templates; and relating the subject to the tagged identity belonging to the matched registered template when the computed match score of the matched registered template and the subject template (735) is within a preset threshold or vice versa. More particularly, the identity recognition method (700) can be divided into a few different sub-steps, each with its own functionality. The sub-steps are namely acquisition (710), processing (720), feature extraction and template generating (730), template matching (740) and decision making (760) as in Figure 1. Preferably the identity recognition process (700) is performed on a computerized apparatus while the user database (150) is connected to the computerized apparatus to feed the stored template to the apparatus.
Preferably, in the acquiring step or acquisition (710), a live scan (715) or image containing at least one quadrant of the lip contour is captured. The live scan (715) can derive from a biometrical sensor such as an iris sensor, face sensor or any other type of existing biometrical sensor as long as the sensor captures an image of the lip contour in an image resolution sufficient to be processed in the subsequent steps. Besides, the sensors referred to herein can be any image capturing devices which are able to capture a digital image of the lips, including other various sensor technologies like optical imaging, thermal imaging, ultrasonic imaging and passive capacitance or active capacitance imaging.
In the subsequent processing step, optionally, the image quality of the live scan may be enhanced by the computerized apparatus automatically or manually by a user of the present invention. More preferably, the enhancing procedure mainly focuses on the lip contour of the live scan (715) to expedite the process flow, since enhancing unnecessary areas of the live scan would burden the computerized apparatus with more processing work. In order to enhance only the preferred area, the present invention may further involve a step of identifying the lip contour of the first subject on the provided first live scan (715) prior to the processing step (720). Various approaches can be applied for lip contour identification on the live scan (715). For example, identification may be performed by classifying pixels of the first live scan (715) into color information and identifying the lip contour via the color information. In another embodiment, the lip identification can be performed using dynamic programming algorithms such as the Viterbi algorithm to extract the lip's contour or envelope, or other techniques such as mouth corner detection and feature points representation to detect and highlight the lip contour. Nevertheless, enhancing of the live scan (715) may not be needed if the live scan (715) provided is of sufficient resolution and quality to be analyzed by the computerized apparatus. The enhancing procedures refer to operations on images at the lowest level of abstraction that may be useful in a variety of situations. Such improvements made to the live scan may involve suppressing undesired distortions, enhancing some image features which are important for subsequent processing, reducing artifacts on the live scan due to the sensors used, performing gradient filtering to enhance the contour of the lips and mouth corners, or any combination thereof. It is important to note that the processed lip contour or live scan (715) is herein named the first lip print (725).
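The color-based lip identification mentioned above can be illustrated with a minimal sketch; the red-to-green ratio test and its threshold below are hypothetical assumptions for illustration only, not the specific classifier of the present invention:

```python
def is_lip_pixel(r, g, b, ratio_threshold=1.6):
    """Classify a pixel as lip/non-lip using red dominance.

    Lips are typically redder than the surrounding skin, so the
    red-to-green ratio is a crude discriminator. The threshold is a
    hypothetical value; a real system would tune it per sensor and
    lighting conditions.
    """
    if g == 0:
        return r > 0
    return (r / g) > ratio_threshold

def lip_mask(image, ratio_threshold=1.6):
    """Return a binary mask (list of lists) marking candidate lip pixels."""
    return [[is_lip_pixel(r, g, b, ratio_threshold) for (r, g, b) in row]
            for row in image]

# Tiny synthetic example: skin-like pixels vs. redder lip-like pixels.
sample = [[(180, 140, 120), (200, 100, 90)],
          [(175, 150, 130), (210, 110, 95)]]
print(lip_mask(sample))  # → [[False, True], [False, True]]
```

In practice the resulting mask would be cleaned up morphologically before the contour is extracted; that step is omitted here for brevity.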
Moreover, the first lip print (725) of the first subject may have to be standardized in terms of orientation, size and so on prior to generating the subject template (735).
Pursuant to the preferred embodiment, in the template generating step, the first lip print is used for extracting a collection of biometric features unique to the first subject. In most applications, a single biometric feature may be insufficient to represent the uniqueness of a subject, thus a combination of biometrical characteristics is preferably employed to enhance the significance of the biometrical uniqueness for subject representation. Further, the computerized apparatus computes the extracted unique biometrical characteristics into a subject template. The subject template is a digital format in which the biometrical characteristics of the first subject are recorded, preferably in a binary numeric form to facilitate the later matching step. It is more preferable also that the generated subject template is capable of being used for quadrant lip matching instead of requiring the whole lip contour. In respect of the preferred embodiment, the biometrical characteristics to be extracted from the first lip print (725) are pre-defined, wherein the pre-defined characteristic on the first lip print (725) comprises any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between reference points on upper lip and/or lower lip, ratio of distances between reference points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations. It was found by the inventors of the present invention that the first lip print need not be whole in order to carry out the disclosed method. In reality, the full contour of the lips may not always be available in the live scan. Thus, the present invention is feasible to perform the identity recognition based on a part of the lip contour, preferably a quadrant of the lip contour.
A lip print of the specified contour will be generated, followed by characteristic extraction, and compared for the corresponding quadrants. In the best case where the entire lip is available, features of the entire lip will be compared for optimal results, whereas comparison between features of quadrants helps strengthen the identification method.
In accordance with another preferred embodiment, the disclosed method may further comprise the step of grouping the first lip print (725) of the first subject according to at least one pre-defined characteristic found on the lip print; more preferably, the pre-defined characteristic in the grouping step is the pattern of grooves on the lip print, as illustrated by the examples in Figure 2. To accurately and precisely identify the unique biometrical characteristics specific to the subject, the lip prints of individual subjects may not be treated in the same manner in the disclosed method. Preferably, the biometrical characteristics of each individual lip print may be extracted based on some major predetermined features. For example, a lip print with rectangular grooves or diamond-shaped grooves may preferably require measurement of the area enclosed by the rectangular grooves, then a mean can be acquired for all the measured enclosed areas and so on, which are used as the biometrical characteristics. Thus, in the most preferred embodiment, the disclosed method classifies the first lip print (725) into separate categories before extracting the biometrical characteristics, in which the biometrical characteristics to be extracted may vary from one category to another, to generate the subject template (735). The lips classification (220) is introduced to allow more accurate and precise biometrical characteristics to be extracted from the first lip print (725) based on the different categories of lips. The categories for classification may typically be the five major categories shown in Figure 4: i) diamond grooves (310), ii) long vertical grooves (320), iii) short vertical grooves (330), iv) branching grooves (340) and v) rectangular grooves (350), or any combination derived thereof, but the classification of the lips is not limited to the aforementioned five categories. The discrimination of lips into different categories is based on the furrow characteristics on the lips.
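The category-dependent feature selection described above may be sketched as a simple lookup; the category names follow Figure 4, while the feature names and the fallback set are hypothetical placeholders rather than the invention's actual feature inventory:

```python
# Hypothetical mapping from groove category to the features given
# extraction priority; feature names are illustrative only.
CATEGORY_FEATURES = {
    "diamond":        ["enclosed_area_mean", "groove_density"],
    "long_vertical":  ["groove_length_mean", "groove_spacing"],
    "short_vertical": ["groove_length_mean", "groove_count"],
    "branching":      ["bifurcation_count", "ridge_count"],
    "rectangular":    ["enclosed_area_mean", "aspect_ratio_mean"],
}

def features_for(category):
    """Return the prioritized feature list for a lip-print category.

    Unknown categories fall back to a generic feature set, since the
    classification is not limited to the five named categories.
    """
    return CATEGORY_FEATURES.get(category, ["color_histogram", "contour_shape"])

print(features_for("branching"))  # bifurcation/ridge features prioritized
print(features_for("wavy"))       # fallback for an unlisted category
```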
This is important as the biometrical characteristics to be extracted from a first lip print (725) belonging to one category may differ from those of another category. For instance, lips belonging to the branching grooves (340) category have a variety of bifurcations and ridges, therefore features involving the bifurcations and ridges should be given higher priority for extraction as compared to other features. Moreover, a lip image may be revealed as a surface with visible elements of lines representing furrows. The in-depth analysis of this local pattern allows identification since it is unique to every individual. Besides, the significant local patterns of the lips may be extracted in four orientations: vertical, horizontal and two diagonal orientations as shown in Figure 6, while different preset reference points can be assigned onto the first lip print (725) to generate the necessary biometrical information. These patterns are amongst the fundamental patterns, but the invention is not restricted to the aforementioned ones and may include other patterns extracted in other orientations as well. Other shape features that can be extracted include the concavity of the upper lip and lower lip, the thickness of the lip, the distance or ratio between the points on the upper lip and lower lip or between mouth corners, color information, the thickness or thinness of the furrows, scars or clefts, and lip area. Also, in categories of lip where bifurcations are prominent, the bifurcation or divergence features may be extracted. All of these extracted features are then combined to generate a subject template (735) which represents the unique identity of a subject. Possibly, the biometrical characteristics extraction process (230) is applied to all 4 different quadrants as well as the entire lip, resulting in a total of 5 different templates (135).
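A simplified illustration of extracting local patterns in the four orientations of Figure 6 follows; projecting groove-pixel counts along each direction is an assumed stand-in for oriented filtering, not the exact extraction procedure of the invention:

```python
def directional_profiles(groove_map):
    """Summarize a binary groove map in four orientations.

    Counts groove pixels along each column (vertical), each row
    (horizontal), and the two diagonal directions. This projection is a
    simplified, hypothetical stand-in for oriented pattern extraction.
    """
    rows, cols = len(groove_map), len(groove_map[0])
    vertical   = [sum(groove_map[r][c] for r in range(rows)) for c in range(cols)]
    horizontal = [sum(groove_map[r]) for r in range(rows)]
    diag_main  = [0] * (rows + cols - 1)   # top-left to bottom-right
    diag_anti  = [0] * (rows + cols - 1)   # top-right to bottom-left
    for r in range(rows):
        for c in range(cols):
            diag_main[r - c + cols - 1] += groove_map[r][c]
            diag_anti[r + c] += groove_map[r][c]
    return vertical, horizontal, diag_main, diag_anti

# Toy 3x3 map with a single vertical groove in the middle column.
toy = [[0, 1, 0],
       [0, 1, 0],
       [0, 1, 0]]
v, h, d1, d2 = directional_profiles(toy)
print(v, h)  # → [0, 3, 0] [1, 1, 1]
```

The strong peak in the vertical profile, absent from the other orientations, is the kind of signature that would place this print in a vertical-groove category.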
The main advantage of having 5 different templates (135) is the ability to perform identification even in cases where the lip image is not entirely available. In this invention, template matching can be performed in situations
where
only part of the lip image is available or where only some of the quadrants are available. The features will be compared for the corresponding quadrants. In the best case where the entire lip is available, features of the entire lip will be compared for optimal results, whereas comparison between features of quadrants helps strengthen the identification method. As in the foregoing description, a comparing or matching step (740) is performed between the subject template (735) and the registered templates (135) to find a match. Particularly, the subject template (735) is digitally compared against all of the existing registered templates (135) which have been registered and stored in the database (150). The match score (755) is computed for every pair of template matching using various matching algorithms, which may include the Hamming distance or the Bhattacharyya distance, but is not limited to the aforementioned distance techniques. The match score (755) is then fed into the decision making (760) whereby the scores are compared against a threshold value. If the match score (755) is within the allowed threshold value, the identification of the subject is decided to be a match (775) with the compared registered template (135) in the database (150). If the match score (755) is beyond the allowed threshold value, the compared subject template is decided to be a mismatch (765) with the compared registered template available in the database (150). This is useful for any specified use or application such as restricted entrance to a secured area and verification in conjunction with a smart card. The threshold value can be any value predetermined empirically depending on the type of application and the sensitivity of the identification system. According to another embodiment, the comparing step (740) can be conducted by matching just one quadrant of the lip print (725) instead of using the entire lip.
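As one possible illustration of the matching (740) and decision making (760) steps, the sketch below scores binary templates with a normalized Hamming distance and applies a preset threshold; the normalization, threshold value and template contents are illustrative assumptions:

```python
def hamming_distance(template_a, template_b):
    """Bitwise Hamming distance between two equal-length binary templates."""
    if len(template_a) != len(template_b):
        raise ValueError("templates must be of equal length")
    return sum(a != b for a, b in zip(template_a, template_b))

def match_identity(subject, registered, threshold):
    """Return the identity tag of the best-matching registered template.

    `registered` maps identity tags to stored binary templates. The match
    score is the Hamming distance normalized by template length, with
    lower meaning more similar; returns None on a mismatch (765).
    """
    best_tag, best_score = None, None
    for tag, template in registered.items():
        score = hamming_distance(subject, template) / len(subject)
        if best_score is None or score < best_score:
            best_tag, best_score = tag, score
    if best_score is not None and best_score <= threshold:
        return best_tag   # match (775): relate subject to tagged identity
    return None           # mismatch (765)

db = {"alice": [1, 0, 1, 1, 0, 0, 1, 0],
      "bob":   [0, 1, 0, 0, 1, 1, 0, 1]}
probe = [1, 0, 1, 1, 0, 1, 1, 0]   # one bit away from "alice"
print(match_identity(probe, db, threshold=0.25))  # → alice
```

With a stricter threshold (for example 0.05) the same probe would be declared a mismatch, showing how the threshold trades sensitivity against reliability.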
It is to be noted that all of the registered templates (135) in the database (150) are tagged with personal information related to the registered individual, such as identification number, certain birth details, addresses, and so on. Upon finding the matched registered template (135) for the subject template (735), the identity of the subject is deemed to be related to the matched registered individual, and so is the personal information, thus completing the identity recognition process.
Nonetheless, it is important to register the subject template (735) into the database and tag the subject template (735) with available information regarding the subject when no matched stored template is found, considering that not every subject template (735) is able to find a matching registered template (135) in the database, or that not every subject template has been stored or enrolled in the database. This is particularly important for criminal identification. Through this embodiment, any other individual template registered later which matches such a subject template may then be identified without much effort. Via this method, different unknown crime cases can even be linked together. The available information may include the crime case, the number or date of the crime and the like. In conjunction with the above mentioned method, the present invention also offers a method of enrolling a registered template into a database which is preferably used in the identification method. The method of enrolling a registered template comprises the steps of acquiring a second live scan (115) containing at least one quadrant of a lip contour of a second subject; processing the lip contour of the second subject to produce a second lip print (125) which is analyzable by a computerized apparatus; generating a template (135) unique to the second subject by computing at least one pre-defined characteristic found on the second lip print; and registering the template into the database (150) while identity and/or personal information of the second subject is tagged to the registered template. In fact, most of the steps performed in the template registration method are similar to those of the recognition method in the foregoing, but with some modifications. The method of enrolling a registered template (100) has been described with an overall process flow as illustrated in Figure 3. The acquiring step (110) deals with acquiring a second image or second live scan (115) containing the lip information in the form of a live scan (115).
To capture the second live scan (115) to be used in the present invention, sensors similar to iris sensors or face sensors may be used in which the focus is on the human face. The sensors may refer to any other image capturing devices including optical imaging, thermal imaging, ultrasonic imaging and passive or active capacitance imaging devices. More preferably, the processing step (120) refers in this embodiment to operations on the second live scan (115) aiming to improve the second live scan by suppressing undesired distortions, enhancing some image features which are important for subsequent processing, or both. Artifacts derived from the sensors may be reduced in this step too. In this invention, the processing step (120) focuses on enhancing the features of the lips and, amongst the processing works that may be carried out in this phase, includes performing gradient filtering to enhance the contour of the lips and mouth corners. Alternatively, common image transforms, simplification techniques to remove unwanted pixels, and histogram equalization are available to enhance the quality of the second live scan (115) in order to ease its shape extraction.
Similar to the recognition method in the foregoing, it is important in this method that only the lip contour is improved rather than the other areas in the second live scan (115), to acquire a second lip print (125) of good quality for computing the template (135). Thus, this embodiment may include the step of identifying or detecting the lip contour of the second subject on the provided second live scan (115) prior to the processing step. This approach can greatly reduce the data to be processed as it only focuses on the relevant area, the lip contour, to produce the second lip print (125). The lip identifying step (210) locates the lips, and further processing to enhance the lips area may be applied herein. The lip identifying step (210) may classify pixels into color information and identify the lip area based on the color information, or apply dynamic programming algorithms such as the Viterbi algorithm to extract the lip's contour or envelope. Other than that, other techniques such as mouth corner detection and feature points representation may be applied to detect and highlight the lip area. Also, it may be useful to standardize the orientation of the detected lip to promote systematic computation to generate the template (135) in the subsequent step. In one of the preferred embodiments, the enrolling method further comprises the step of grouping the second lip print of the second subject according to at least one pre-defined characteristic found on the second lip print. The grouping step, as in the lip grouping (220) of Figure 2, classifies the second lip print (125) into separate categories before extracting the pre-defined biometrical characteristics on the second lip print (125) to generate the template (135), and the biometrical characteristics to be extracted may vary from one group to another.
Through the grouping step, more accurate and precise biometrical characteristics are extracted from the second lip print (125) to represent the uniqueness of the second subject. As in the recognition method, the preferred embodiment of the enrolling method employs a few major categories to group the second lip prints based on the pattern of the grooves, namely diamond grooves (310), long vertical grooves (320), short vertical grooves (330), branching grooves (340) and rectangular grooves (350), or any combinations thereof. However, other grouping approaches can be used too in the disclosed enrolling method. The discrimination of lips into different categories is based on the furrow characteristics on the lips. This phase is important as the features to be extracted from the lips belonging to one category may differ from another category. For instance, lips belonging to the branching grooves (340) category have a variety of bifurcations and ridges, therefore features involving the bifurcations and ridges should be given higher priority for extraction as compared to other features.
Further, in the biometrical characteristics extraction and template generating step (230), the template can be generated either from the whole second lip print (125) or from at least one quadrant of the second lip print (125). Through the division of the second lip print into quadrants, the disclosed enrolling method allows more variations of significant biometrical characteristics to be extracted from the second lip print (125). Particularly, the second lip print (125) is divided into 4 quadrants: 2 symmetrical quadrants on the upper lip (410, 420) and 2 symmetrical quadrants on the lower lip (430, 440) as shown in Figure 5. Quadrant 1 (410) refers to the upper right lip, quadrant 2 (420) refers to the upper left lip, quadrant 3 (430) refers to the lower left lip and quadrant 4 (440) refers to the lower right lip, while quadrant 5 (450) on the other hand refers to the entire lip. The division of the lip into quadrants allows the computation of additional features which are derived from the difference of features between quadrants as shown in Figure 6. The pre-defined biometrical characteristics that can be extracted from the second lip print (125) comprise any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between reference points on upper lip and/or lower lip, ratio of distances between reference points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations. More specifically, the second lip print (125) may be revealed as a surface with visible elements of lines representing furrows. The in-depth analysis of this local pattern allows identification since it is unique to every individual. The significant local patterns of the lips may be extracted in four orientations: vertical, horizontal and two diagonal orientations as shown in Figure 7.
These patterns are amongst the fundamental patterns, but the invention is not restricted to the aforementioned ones and may include other patterns extracted in other orientations as well. Other shape features that can be extracted include the concavity of the upper lip and lower lip, the thickness of the lip, the distance or ratio between the points on the upper lip and lower lip or between mouth corners, color information, the thickness or thinness of the furrows, scars or clefts, and lip area. Also, in categories of lip where bifurcations are prominent, the bifurcation or divergence features may be extracted. All of these extracted features are then combined to generate the template (135) which represents the unique identity of the second subject. It is to be noted that a single feature may be insufficient to represent the uniqueness of the biometrical characteristics of the second subject, and thus a combination of features is preferably used in the present invention to enhance the significance of the subject representation.
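The division of a lip print into the quadrants of Figure 5 may be sketched as follows; representing the print as a rectangular grid with even dimensions, and the exact row/column split, are illustrative simplifications:

```python
def split_quadrants(lip_print):
    """Split a lip print (2-D list) into the four quadrants of Figure 5.

    Quadrant numbering follows the description: 1 = upper right,
    2 = upper left, 3 = lower left, 4 = lower right; quadrant 5 is the
    entire lip. Even dimensions are assumed for simplicity.
    """
    rows, cols = len(lip_print), len(lip_print[0])
    mid_r, mid_c = rows // 2, cols // 2
    upper, lower = lip_print[:mid_r], lip_print[mid_r:]
    return {
        1: [row[mid_c:] for row in upper],   # upper right lip (410)
        2: [row[:mid_c] for row in upper],   # upper left lip (420)
        3: [row[:mid_c] for row in lower],   # lower left lip (430)
        4: [row[mid_c:] for row in lower],   # lower right lip (440)
        5: lip_print,                        # entire lip (450)
    }

# Toy 2x4 "print" with distinct values so the split is easy to follow.
print_2x4 = [[1, 2, 3, 4],
             [5, 6, 7, 8]]
quads = split_quadrants(print_2x4)
print(quads[1], quads[3])  # → [[3, 4]] [[5, 6]]
```

Features extracted per quadrant (plus the whole lip) would then yield the 5 templates (135) discussed above.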
The template contains unique biometrical characteristics of the lip contour of the second subject. Upon forming the template, preferably a digital binary recording of the unique biometrical characteristics belonging to the second subject, the registration (140) step further stores the template into the database (150) and meanwhile associates or tags personal information of the second subject to the registered stored template (135). For instance, the template (135) stored for the second subject is associated with personal information of the subject which may include, but is not limited to, identification number, birth details, address and so on. The steps of the enrolling method are repeated for every other individual and the templates are generated. All the final output is stored in a database (150) to be used for identity recognition.
Amongst the novelty introduced in this invention is the division of the lip into quadrants to allow more variations of significant features to be extracted from the lip. Another embodiment of the present invention involves a method of generating biometric information from a lip print comprising the steps of dividing an upper lip portion or a lower lip portion of the lip print into two symmetrical quadrants; overlapping the symmetrical quadrants to acquire an overlapped quadrant print; and extracting biometric information from the overlapped quadrant print, including the distribution pattern of grooves and ridges of the lip print on the overlapped quadrant print. It is important to note that the disclosed method of generating biometrical information is applicable in both the presently disclosed method of identity recognition and the method of providing the registered template. More specifically, as set forth, the present invention employs a method to divide the lip into 4 quadrants: 2 symmetrical quadrants on the upper lip (410, 420) and 2 symmetrical quadrants on the lower lip (430, 440) as shown in Figure 5. Quadrant 1 (410) refers to the upper right lip, quadrant 2 (420) refers to the upper left lip, quadrant 3 (430) refers to the lower left lip and quadrant 4 (440) refers to the lower right lip. Quadrant 5 (450) on the other hand refers to the entire lip. The division of the lip into quadrants allows the computation of additional features which are derived from the difference of features between quadrants as depicted in Figure 6. The digital difference can be computed by overlapping 2 quadrants on top of one another (520) and performing the feature extraction (530) on the overlapped lip images. Prior to extracting the digital difference, one of the quadrants can be flipped first, for example but not limited to horizontally (520), for a better overlapping result.
One of the symmetrical quadrants may be flipped about either the horizontal axis or the vertical axis prior to the overlapping step, because overlapping the quadrants either way shall generate biometrical information of the first subject, depending on the extraction algorithms used. Nevertheless, this flipping step is not mandatory. The ability to extract (530) the digital difference (510) from a single biometric, and not a pair-based biometric, via the quadrants method is another key merit of the proposed invention. A further modification can be made based on the method set forth, in that in this embodiment the method of generating biometric information from a lip print comprises the steps of dividing a lip portion at the vertical axis into two symmetrical quadrants, namely a left half portion and a right half portion; overlapping the symmetrical quadrants to acquire an overlapped quadrant print; and extracting biometric information from the overlapped quadrant print, including the distribution pattern of grooves and ridges of the lip print on the overlapped quadrant print. Similarly, one of the symmetrical quadrants may be optionally flipped about the horizontal axis or the vertical axis prior to the overlapping step.
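The flip-and-overlap computation of the digital difference (510, 520, 530) described above can be sketched as follows; the grayscale values and the pixelwise absolute-difference measure are illustrative assumptions, standing in for whatever extraction algorithm is actually used:

```python
def flip_horizontal(quadrant):
    """Mirror a quadrant (list of pixel rows) about its vertical axis."""
    return [row[::-1] for row in quadrant]

def overlap_difference(quad_a, quad_b):
    """Overlay two symmetrical quadrants and record pixelwise differences.

    The per-pixel absolute difference of the overlapped prints captures
    the asymmetry between, e.g., the upper-right and upper-left lip,
    yielding a feature from a single biometric rather than a pair.
    """
    return [[abs(a - b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(quad_a, quad_b)]

# Toy 2x3 grayscale quadrants of the upper lip (values are illustrative).
upper_right = [[10, 20, 30],
               [40, 50, 60]]
upper_left  = [[32, 22, 10],
               [61, 50, 41]]

# Flip one quadrant first (520) so the two align, then extract (530).
diff = overlap_difference(upper_right, flip_horizontal(upper_left))
print(diff)  # → [[0, 2, 2], [1, 0, 1]]
```

A perfectly symmetrical lip would give an all-zero difference map, so the non-zero entries encode the subject-specific asymmetry.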
Based on the above mentioned methods, the present invention also discloses herein a system of human identity registering and recognition comprising a database storing a plurality of registered templates, each registered template containing unique biometric information of the lip contour of an individual and each registered template being tagged with the identity as well as personal information of that individual; an input means to acquire a live scan containing at least one quadrant of a lip contour of a subject; a data processor capable of carrying out at least one of the processes of receiving the live scan from the input means, processing the lip contour of the live scan to produce a lip print, generating a subject template (735) unique to the subject based on unique biometric features on the lip print by computing at least one pre-defined characteristic found on the lip print, comparing the subject template (735) to each of the registered templates in a database to find a matched registered template wherein an identity is tagged to each registered template, computing a match score for every pair of compared templates, relating the subject to the identity tagged to the matched registered template when the computed match score between the matched registered template and the subject template (735) is within a preset threshold or vice versa, and providing an outcome; and a user interface which receives the provided outcome and allows a user to manually manipulate the database, the input means and the data processor.
The database, as known in the art, can be any means capable of storing digital information such as a hard disc drive, solid state drive, or even an optical disc, which is connected to the data processor for storing registered templates (135) and feeding the registered templates (135) for comparison purposes. Moreover, the database may be placed in a remotely located server, such that the data processor is connected to this server via a wired or wireless network. Further, the data processor in the present invention shall be capable of optionally conducting one or more other tasks, such as identifying the lip contour of the subject on the received live scan, enhancing the quality of the live scan, grouping the lip print of the subject according to at least one pre-defined characteristic found on the lip print, and registering the subject template (735) into the database and tagging the subject template (735) with other available information regarding the subject when no matched stored template is found, in order to effectively carry out the identity recognition and registering. The tasks described above cover basic operations
to perform the foregoing disclosed methods, while the data processor can be a computer processor unit and the like. Similarly, the pre-defined characteristic on the lip print (725) comprises any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between points on upper lip and/or lower lip, ratio of distances between points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
Preferably, the input means is an image capturing device such as a camera or video recording apparatus. However, devices such as flash drives, optical discs, non-volatile memory cards and so on containing images that can be used as the live scan should be considered input means as well in the present invention. Other devices such as a keyboard should not be ignored as input means, particularly for entering personal information when registering an individual template. It is important to note that the user interface offers the necessary flexibility for the user to manipulate the system, particularly adjusting different parameters to run the disclosed system at various sensitivities and reliabilities.
While the foregoing description presents preferred embodiments of the present invention along with many details set forth for purpose of illustration, it will be understood by those skilled in the art that many variations or modifications in details of design, construction and operation may be made without departing from the present invention as defined in the claims. The scope of the invention is as indicated by the appended claims and all changes that come within the meaning and range of equivalency of the claims are intended to be embraced therein.

Claims

1. A method of identity recognition to be performed on a computerized apparatus comprising the steps of
acquiring a first live scan containing at least one quadrant of a lip contour of a first subject;
processing the lip contour to produce a first lip print (725) which is analyzable by the computerized apparatus;
generating a subject template (735) unique to the subject by computing at least one pre-defined characteristic found on the first lip print (725);
comparing the subject template (735) to each of the registered templates in a database to find a matched registered template, wherein an identity is tagged to each registered template;
computing a match score for every pair of compared templates; and
relating the subject to the identity tagged to the matched registered template when the computed match score between the matched registered template and the subject template (735) is within a preset threshold or vice versa.
2. A method according to claim 1 further comprising the step of identifying the lip contour of the first subject on the provided first live scan (715) prior to the processing step.
3. A method according to claim 1 further comprising the step of grouping the first lip print (725) of the first subject according to at least one pre-defined characteristic found on the lip print.
4. A method according to claim 3, wherein the pre-defined characteristic in the grouping step is pattern of grooves on the lip print.
5. A method according to claim 1 further comprising the step of registering the subject template (735) into the database and tagging the subject template (735) with available information regarding the subject when no matched stored template is found.
6. A method according to claim 1 or 3, wherein the predefined characteristic on the first lip print (725) comprises any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between reference points on upper lip and/or lower lip, ratio of distances between reference points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
7. A method according to claim 2, wherein the identification is performed by classifying pixels of the first live scan (715) into color information and identifying the lip contour via the color information.
8. A method according to claim 2, wherein the identification is performed via applying the Viterbi algorithm to extract the lip contour.
9. A method according to claim 1, wherein the processing step comprises removing unwanted pixels from the live scan.
10. A method of enrolling a registered template in a database as in claim 1 comprising the steps of
acquiring a second live scan containing at least one quadrant of a lip contour of a second subject;
processing the lip contour of the second subject to produce a second lip print which is analyzable by a computerized apparatus;
generating a template unique to the second subject by computing at least one predefined characteristic found on the second lip print; and
registering the template into the database while identity and/or personal info of the second subject is tagged to the registered template.
11. A method according to claim 10 further comprising the step of identifying the lip contour of the second subject on the provided second live scan prior to the processing step.
12. A method according to claim 10 further comprising the step of grouping the second lip print of the second subject according to at least one pre-defined characteristic found on the second lip print.
13. A method according to claim 10, wherein the pre-defined characteristic in the grouping step is the pattern of grooves on the lip print.
14. A method according to claim 10 or 12, wherein the predefined characteristic on the second lip print comprises any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between reference points on upper lip and/or lower lip, ratio of distances between reference points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
15. A method according to claim 11, wherein the identification is performed by classifying pixels of the second live scan into color information and identifying the lip contour via the color information.
16. A method according to claim 11, wherein the identification is performed via applying the Viterbi algorithm to extract the lip contour.
17. A method according to claim 10, wherein the processing step comprises removing unwanted pixels from the live scan.
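The enrollment method of claims 10-13 (register a template tagged with identity and personal info, optionally grouped by a characteristic such as the groove pattern) can be sketched as follows; the dict-based record layout and the `"grooves"` key are illustrative assumptions:

```python
def enroll(database, identity, template, info=None):
    """Register a subject template into the database, tagging it
    with the subject's identity and personal info (claim 10).
    The record structure is an illustrative assumption."""
    database[identity] = {"template": list(template),
                          "info": dict(info or {})}

def group_by_characteristic(database, characteristic):
    """Group enrolled lip prints by a pre-defined characteristic,
    e.g. the pattern of grooves (claims 12-13)."""
    groups = {}
    for identity, record in database.items():
        key = record["info"].get(characteristic)
        groups.setdefault(key, []).append(identity)
    return groups
```

Grouping lets a later recognition pass compare a probe only against templates sharing its coarse groove pattern, rather than the whole database.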
18. A system for human identity registration and recognition comprising
a database storing a plurality of registered templates, wherein each registered template contains unique biometric information of the lip contour of an individual and is tagged with the identity as well as personal information of that individual;
an input means to acquire a live scan containing at least one quadrant of a lip contour of a subject;
a data processor capable of carrying out at least one of the processes of: receiving the live scan from the input means; processing the lip contour of the live scan to produce a lip print; generating a subject template (735) unique to the subject based on unique biometric features of the lip print by computing at least one pre-defined characteristic found on the lip print; comparing the subject template (735) to each of the registered templates in the database, wherein an identity is tagged to each registered template; computing a match score for every pair of compared templates; relating the subject to the identity tagged to the matched registered template when the computed match score between the matched registered template and the subject template (735) is within a preset threshold, or vice versa; and providing an outcome; and
a user interface that receives the provided outcome and allows a user to manually manipulate the database, the input means and the data processor.
19. A system according to claim 18, wherein the data processor is capable of identifying the lip contour of the subject on the received live scan.
20. A system according to claim 18, wherein the data processor is capable of enhancing quality of the live scan.
21. A system according to claim 18, wherein the data processor is capable of grouping the lip print of the subject according to at least one pre-defined characteristic found on the lip print.
22. A system according to claim 18, wherein the data processor is capable of registering the subject template (735) into the database and tagging the subject template (735) with other available information regarding the subject when no matched stored template is found.
23. A system according to claim 18 or 21, wherein the predefined characteristic on the lip print comprises any one or combination of color information, pattern of lip grooves, feature points representation, concavity of upper lip and lower lip, thickness of upper and/or lower lip, distance between points on upper lip and/or lower lip, ratio of distances between points on upper lip and/or lower lip, thickness of the cleft, bifurcations, and features of at least a quadrant extracted in one or more orientations.
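The matching step of claim 18 (compute a match score for each subject/registered template pair and relate the subject to the tagged identity when the score is within a preset threshold) can be sketched as below; Euclidean distance as the scoring metric is an illustrative assumption, since the claims do not fix the metric:

```python
import math

def match_score(template_a, template_b):
    """Match score between two feature templates. Euclidean distance
    is an illustrative choice; lower means more similar."""
    return math.dist(template_a, template_b)

def recognize(subject_template, database, threshold):
    """Relate the subject to the identity of the best-matching
    registered template when its score falls within the preset
    threshold; otherwise report no match (None)."""
    best_identity, best_score = None, float("inf")
    for identity, registered in database.items():
        score = match_score(subject_template, registered)
        if score < best_score:
            best_identity, best_score = identity, score
    return best_identity if best_score <= threshold else None
```

A `None` result corresponds to the no-match branch of claims 5 and 22, where the subject template would instead be enrolled and tagged with whatever information is available.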
24. A method of generating biometric information from a lip print comprising the steps of
dividing an upper lip portion or a lower lip portion of the lip print into two symmetrical quadrants;
overlapping the symmetrical quadrants to acquire an overlapped quadrant print;
extracting biometric information from the overlapped quadrant print including distribution pattern of grooves and ridges of the lip print on the overlapped quadrant print.
25. A method according to claim 24, wherein one of the symmetrical quadrants is flipped at the horizontal axis or vertical axis prior to the overlapping step.
26. A method of generating biometric information from a lip print comprising the steps of
dividing a lip portion at the vertical axis into two symmetrical quadrants;
overlapping the symmetrical quadrants to acquire an overlapped quadrant print;
extracting biometric information from the overlapped quadrant print including distribution pattern of grooves and ridges of the lip print on the overlapped quadrant print.
27. A method according to claim 26, wherein one of the symmetrical quadrants is flipped at horizontal axis or vertical axis prior to the overlapping step.
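The quadrant method of claims 26-27 (split the lip print at the vertical axis into two symmetrical quadrants, flip one, and overlap them) can be sketched as follows; the overlap operator (elementwise mean) is an illustrative assumption that preserves the groove/ridge distribution shared by both halves:

```python
import numpy as np

def overlapped_quadrant_print(lip_print):
    """Split a 2-D lip-print array at the vertical midline into two
    symmetrical quadrants, mirror the right half about the vertical
    axis (claim 27's flip), and overlap the two halves.

    The overlap operator (elementwise mean) is an illustrative
    assumption; any pixelwise combination would fit the claims.
    """
    h, w = lip_print.shape
    half = w // 2
    left = lip_print[:, :half].astype(float)
    right = np.fliplr(lip_print[:, w - half:]).astype(float)
    return (left + right) / 2.0
```

For a perfectly symmetrical print the overlapped quadrant equals either half; asymmetries between the halves are averaged, and it is from this overlapped print that the groove/ridge distribution pattern would then be extracted.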
PCT/MY2010/000225 2009-12-02 2010-10-28 A method for identity recognition based on lip image WO2011068395A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI20095129 2009-12-02
MYPI20095129A MY167109A (en) 2009-12-02 2009-12-02 A method for identity recognition based on lip image

Publications (2)

Publication Number Publication Date
WO2011068395A2 true WO2011068395A2 (en) 2011-06-09
WO2011068395A3 WO2011068395A3 (en) 2011-08-04

Family

ID=44115443

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2010/000225 WO2011068395A2 (en) 2009-12-02 2010-10-28 A method for identity recognition based on lip image

Country Status (2)

Country Link
MY (1) MY167109A (en)
WO (1) WO2011068395A2 (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100397916B1 (en) * 2001-07-16 2003-09-19 (주)니트 젠 Fingerprint registration and authentication method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HAJIME UTSUNO ET AL.: 'Preliminary Study of Post Mortem Identification Using Lip Prints.' FORENSIC SCIENCE INTERNATIONAL vol. 149, 10 May 2005, pages 129 - 132 *
JIN OK KIM ET AL.: 'Lip Print Recognition for Security Systems by Multi-resolution Architecture.' FUTURE GENERATION COMPUTER SYSTEMS vol. 20, no. 2, 16 February 2004, pages 295 - 301 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109711350A (en) * 2018-12-28 2019-05-03 武汉大学 A kind of identity identifying method merged based on lip movement and voice
CN109711350B (en) * 2018-12-28 2023-04-07 武汉大学 Identity authentication method based on lip movement and voice fusion
WO2023140099A1 (en) * 2022-01-19 2023-07-27 パナソニックIpマネジメント株式会社 Biometric authentication system and biometric authentication method
WO2023140098A1 (en) * 2022-01-19 2023-07-27 パナソニックIpマネジメント株式会社 Biometric authentication device, biometric authentication method, and biometric authentication system

Also Published As

Publication number Publication date
WO2011068395A3 (en) 2011-08-04
MY167109A (en) 2018-08-10

Similar Documents

Publication Publication Date Title
Zhang et al. Online joint palmprint and palmvein verification
Dubey et al. Fingerprint liveness detection from single image using low-level features and shape analysis
Masupha et al. Face recognition techniques, their advantages, disadvantages and performance evaluation
Ghiani et al. Experimental results on fingerprint liveness detection
Malathi et al. An efficient method for partial fingerprint recognition based on local binary pattern
Garg et al. Biometric authentication using finger nail surface
Aleem et al. Fast and accurate retinal identification system: Using retinal blood vasculature landmarks
Awalkar et al. A multi-modal and multi-algorithmic biometric system combining iris and face
Tazim et al. Biometric authentication using CNN features of dorsal vein pattern extracted from NIR image
Kadri et al. Palmprint & iris for a multibiometric authentication scheme using Log-Gabor filter response
Jin et al. Fingerprint liveness detection based on multiple image quality features
WO2011068395A2 (en) A method for identity recognition based on lip image
Priya et al. Authentication of identical twins using tri modal matching
Awasthi et al. Fingerprint analysis using termination and bifurcation minutiae
Abdel-Latif et al. Achieving information security by multi-modal iris-retina biometric approach using improved mask R-CNN
Malik et al. An efficient retinal vessels biometric recognition system by using multi-scale local binary pattern descriptor
Topcu et al. Fingerprint matching utilizing non-distal phalanges
Kovač et al. Openfinger: Towards a combination of discriminative power of fingerprints and finger vein patterns in multimodal biometric system
Attallah et al. Application of BSIF, Log-Gabor and mRMR transforms for iris and palmprint based Bi-modal identification system
Muthukumaran et al. Face and Iris based Human Authentication using Deep Learning
Alsufyani et al. Exploring the potential of facial skin regions for the provision of identity information
Choras A review of image processing methods and biometric trends for personal authentication and identification
Ogundepo et al. Development of a real time fingerprint authentication/identification system for students’ record
Nezhadian et al. Inner-knuckle-print for human authentication by using ring and middle fingers
Meraoumia et al. Person’s recognition using palmprint based on 2D Gabor filter response

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10834809

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10834809

Country of ref document: EP

Kind code of ref document: A2