WO2023140099A1 - Biometric authentication system and biometric authentication method - Google Patents

Biometric authentication system and biometric authentication method Download PDF

Info

Publication number
WO2023140099A1
WO2023140099A1 (PCT/JP2022/048688)
Authority
WO
WIPO (PCT)
Prior art keywords
person
lip
camera
biometric authentication
terminal device
Prior art date
Application number
PCT/JP2022/048688
Other languages
French (fr)
Japanese (ja)
Inventor
毅 藤山
剛 中村
Original Assignee
パナソニックIpマネジメント株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パナソニックIpマネジメント株式会社 (Panasonic IP Management Co., Ltd.)
Publication of WO2023140099A1 publication Critical patent/WO2023140099A1/en

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V8/00 Prospecting or detecting by optical means
    • G01V8/10 Detecting, e.g. by using light barriers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • the present disclosure relates to a biometric authentication system and a biometric authentication method.
  • Patent Document 1 discloses a face authentication system that performs image processing on a face image of a visitor's face, extracts features, searches registered face images based on the features, and performs personal authentication.
  • The face authentication system normalizes the size of a face image and detects contour lines. It detects general features, consisting of closed regions whose contours are closed, and unique features, consisting of line segments whose contours have open ends and of isolated points. It then recognizes a visitor by pattern-matching, using the general and unique features, a feature-processed face image file registered by overwriting those features on a registrant's face image against a feature-processed image file of the visitor.
  • General features refer to the contours of the face, hair, eyes, eyebrows, nostrils, and mouth.
  • Unique features refer to wrinkles, blemishes, moles, and the like caused by unevenness of the face.
  • In Patent Document 1, since the size of the face image is normalized, the image quality may deteriorate. Moreover, when general features and unique features are detected from a face image whose image quality has been degraded by normalization, the face authentication system may be unable to extract the visitor's lip print from the face image.
  • The present invention has been made in view of the conventional circumstances described above, and aims to provide a biometric authentication system and a biometric authentication method that more efficiently acquire a captured image from which a lip print used for biometric authentication can be acquired.
  • The present disclosure provides a biometric authentication system including a terminal device and a server capable of communicating with the terminal device. The terminal device outputs an imaging command to a camera that captures an image of a person, acquires from the camera a captured image of the person, extracts the face region of the person in the captured image, generates the imaging command based on the size of the face region, outputs the generated imaging command to the camera, extracts the lip print of the person from the captured image, and transmits the lip print to the server. The server identifies the person based on the extracted lip print.
  • The present disclosure also provides a biometric authentication method performed by one or more computers, which outputs an imaging command to a camera that captures an image of a person, acquires from the camera a captured image of the person, extracts a face region of the person in the captured image, generates the imaging command based on the size of the face region, outputs the generated imaging command to the camera, extracts the lip print of the person from the captured image, and identifies the person based on the extracted lip print.
  • FIG. 1 is a diagram showing an internal configuration example of a biometric authentication system according to Embodiment 1;
  • FIG. 4 is a diagram showing an example of the overall configuration of a biometric authentication system according to a modification of Embodiment 1;
  • FIG. 6 is a diagram showing an example of the overall configuration of a biometric authentication system according to Embodiment 2;
  • FIG. 1 is a diagram showing an internal configuration example of the biometric authentication system 100 according to Embodiment 1. As shown in FIG. 1, the biometric authentication system 100 includes a camera C1, a terminal device P1, and a server S1.
  • the biometric authentication system 100 captures an image of a person to be authenticated using a camera C1.
  • the biometric authentication system 100 uses the terminal device P1 to estimate the distance between the camera C1 and the person appearing in the captured image, controls the zoom magnification of the camera C1 based on the estimated distance, and captures a captured image used for biometric authentication (that is, a captured image from which the lip print of the person to be authenticated can be obtained).
  • the biometric authentication system 100 uses the terminal device P1 to generate an image (hereinafter referred to as a "lip image") that extracts (cuts out) a lip region in which a person's lips appear from the captured image, and acquires a lip print based on the generated lip image.
  • The biometric authentication system 100 identifies the person to be authenticated, and determines whether or not that person is registered in the database DB, by having the server S1 collate the lip print acquired by the terminal device P1 against the lip print of at least one person registered in the database DB.
  • the camera C1 is connected to the terminal device P1 so that data can be transmitted and received.
  • the camera C1 performs zoom magnification adjustment processing and imaging processing based on the control command transmitted from the terminal device P1.
  • the camera C1 captures an image of a person to be authenticated, and transmits the captured image to the terminal device P1.
  • the terminal device P1 is implemented using, for example, a PC (Personal Computer), a notebook PC, a tablet terminal, a smartphone, or the like.
  • the terminal device P1 acquires the captured image transmitted from the camera C1, and extracts a face region in which a person's face is captured from the acquired captured image.
  • the terminal device P1 determines whether or not the captured image captured by the camera C1 is a captured image from which a lip print used for biometric authentication can be obtained, based on the size (area) of the face region for the entire captured image.
  • When the terminal device P1 determines that the captured image captured by the camera C1 is a captured image from which a lip print used for biometric authentication can be obtained, it generates a lip image by extracting (cutting out) the lip region from the captured image. The terminal device P1 extracts a lip print from the generated lip image, generates lip print data, and transmits the generated lip print data to the server S1.
  • When the terminal device P1 determines that the captured image captured by the camera C1 is not a captured image from which a lip print used for biometric authentication can be obtained, it outputs to the display unit 16 an imaging support screen (not shown) that asks the person to move toward or away from the camera C1, or adjusts the zoom magnification of the camera C1 and recaptures the person.
  • the terminal device P1 includes a communication unit 10, a processor 11, a memory 15, and a display unit 16.
  • the communication unit 10 is connected to the camera C1 so that data can be transmitted and received, and is connected to the communication unit 20 of the server S1 through the network NW so as to be able to communicate wirelessly or by wire.
  • the wireless communication here is communication via a wireless LAN (Local Area Network) such as Wi-Fi (registered trademark).
  • the communication unit 10 outputs the captured image transmitted from the camera C1 to the processor 11, and outputs the matching result transmitted from the server S1 to the processor 11.
  • the communication unit 10 also transmits various control commands output from the processor 11 (for example, a control command for adjusting the zoom magnification, a control command for executing imaging processing, etc.) to the camera C1, and transmits lip print data output from the processor 11 to the server S1.
  • the processor 11 is configured using, for example, a CPU (Central Processing Unit), a DSP (Digital Signal Processor) or an FPGA (Field Programmable Gate Array), and controls the operation of each part.
  • the processor 11 cooperates with the memory 15 to collectively perform various processes and controls.
  • the processor 11 refers to the programs and data held in the memory 15 and executes the programs, thereby implementing the functions of the imaging control unit 12, the lip print extraction unit 14, and the like.
  • the imaging control unit 12 executes imaging control of the camera C1.
  • the imaging control unit 12 generates a control command for changing the zoom magnification of the camera C1, and transmits the generated control command to the camera C1 to execute processing for adjusting the zoom magnification of the camera C1.
  • the imaging control unit 12 is configured to be able to implement the function of the distance measuring unit 13.
  • the distance measuring unit 13 performs image analysis on the captured image transmitted from the camera C1 to detect a person's face area.
  • the distance measuring unit 13 estimates the distance between the camera C1 and the captured person based on the size (area, number of pixels) of the person's face region relative to the entire area (total number of pixels) of the captured image, and outputs information about the estimated distance to the imaging control unit 12.
  • the imaging control unit 12 determines whether or not the distance output from the distance measuring unit 13 is equal to or less than a predetermined distance.
  • the predetermined distance referred to here is a distance determined based on the resolution of an image sensor (not shown) provided in the camera C1, and may be set to a distance at which the lip print of the captured person can be obtained.
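  • As a concrete illustration of this distance estimation and threshold check, the following sketch assumes a simple pinhole-camera model in which the apparent face area shrinks with the square of the distance. The patent does not give a formula; the calibration constants and function names here are hypothetical.

```python
import math

# Hypothetical calibration constants, measured once for a given camera:
REF_DISTANCE_M = 0.5   # distance at which a reference face was captured
REF_AREA_RATIO = 0.20  # face area / frame area observed at that distance
MAX_DISTANCE_M = 0.6   # "predetermined distance" at which lip wrinkles still resolve

def estimate_distance(face_pixels: int, frame_pixels: int) -> float:
    """Estimate camera-to-person distance from the face-area fraction.

    Under a pinhole-camera model the apparent area scales with 1/d^2,
    so d = ref_distance * sqrt(ref_ratio / observed_ratio).
    """
    area_ratio = face_pixels / frame_pixels
    return REF_DISTANCE_M * math.sqrt(REF_AREA_RATIO / area_ratio)

def lip_print_obtainable(face_pixels: int, frame_pixels: int) -> bool:
    """True if the person is close enough for a lip print to be extracted."""
    return estimate_distance(face_pixels, frame_pixels) <= MAX_DISTANCE_M
```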
  • When the imaging control unit 12 determines that the distance estimated by the distance measuring unit 13 is equal to or less than the predetermined distance, it outputs the captured image transmitted from the camera C1 to the lip print extraction unit 14.
  • When the imaging control unit 12 determines that the estimated distance is not equal to or less than the predetermined distance, it determines the zoom magnification of the camera C1 based on the difference between the estimated distance and the predetermined distance.
  • The imaging control unit 12 changes the currently set zoom magnification to the determined new zoom magnification, generates a control command for imaging the person again, outputs it to the communication unit 10, and has it transmitted to the camera C1.
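  • The patent does not specify how the new zoom magnification is derived from the difference between the estimated and predetermined distances. One plausible reading, sketched below, picks the magnification that makes the face appear as large as it would at the predetermined distance; the command dictionary is a hypothetical stand-in for the camera's actual protocol.

```python
def next_zoom(current_zoom: float, estimated_m: float, target_m: float,
              max_zoom: float = 10.0) -> float:
    # Linear magnification scales inversely with subject distance, so a
    # person at 1.2 m needs twice the zoom of a person at 0.6 m.
    desired = current_zoom * (estimated_m / target_m)
    return max(1.0, min(desired, max_zoom))  # clamp to the camera's zoom range

def make_recapture_command(current_zoom: float, estimated_m: float,
                           target_m: float) -> dict:
    """Build the control command sent back to the camera for re-imaging."""
    return {"command": "recapture",
            "zoom": next_zoom(current_zoom, estimated_m, target_m)}
```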
  • The lip print extraction unit 14 detects a person's face from the captured image captured by the camera C1, and extracts a face area containing the detected face. Based on the information of the extracted face area, the lip print extraction unit 14 detects the parts constituting the face, such as the eyes, nose, and mouth (lips) of the person to be authenticated, cuts out (extracts) the lip area (that is, the area in which the mouth appears) from the captured image, and generates a lip image.
  • the lip print extraction unit 14 corrects the contrast of the lip image in order to make it easier to detect lip wrinkles (edges) from the generated lip image.
  • Specifically, the lip print extraction unit 14 corrects the contrast of the lip image so that the contrast ratio or contrast difference between the lip-wrinkle portions and the remaining lip portions is equal to or greater than a predetermined value.
  • The lip print extraction unit 14 extracts lip creases (edges) from the contrast-corrected lip image, and performs binarization processing on the contrast-corrected lip image based on the extracted crease (edge) information. As a result, the lip print extraction unit 14 can generate, from the lip image, an image from which a lip print based on lip wrinkles is more easily obtained. The lip print extraction unit 14 extracts a lip print from the binarized lip image, outputs the extracted lip print data to the communication unit 10, and has it transmitted to the server S1.
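  • A minimal sketch of this contrast correction, edge extraction, and binarization pipeline is shown below. The patent names the steps but not the algorithms; CLAHE, Canny, and Otsu thresholding are substitutions chosen here purely for illustration.

```python
import cv2
import numpy as np

def preprocess_lip_image(lip_bgr: np.ndarray) -> np.ndarray:
    """Contrast-correct, edge-extract, and binarize a cropped lip image."""
    gray = cv2.cvtColor(lip_bgr, cv2.COLOR_BGR2GRAY)
    # Contrast correction: raise the wrinkle/non-wrinkle contrast locally.
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    # Extract lip creases (edges) from the contrast-corrected image.
    edges = cv2.Canny(enhanced, threshold1=50, threshold2=150)
    # Binarize, then keep only pixels on or near the detected creases.
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    crease_mask = cv2.dilate(edges, np.ones((3, 3), np.uint8))
    return cv2.bitwise_and(binary, crease_mask)
```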
  • the memory 15 has, for example, a RAM (Random Access Memory) as a work memory used when executing each process of the processor 11, and a ROM (Read Only Memory) for storing programs and data that define the operation of the processor 11.
  • the RAM temporarily stores data or information generated or acquired by the processor 11.
  • a program that defines the operation of the processor 11 is written in the ROM.
  • the display unit 16 is configured using a display such as an LCD (Liquid Crystal Display) or an organic EL (Electroluminescence) display.
  • the display unit 16 displays an imaging support screen (not shown) output from the processor 11, a screen (not shown) for notifying the authentication result, and the like.
  • the server S1 is realized by a general server or a cloud server.
  • The server S1 acquires the lip print data transmitted from the terminal device P1, collates the acquired lip print feature amount with the lip print feature amount of at least one person registered in the database DB, and calculates a matching score.
  • the server S1 identifies a person to be authenticated based on the calculated matching score, generates an authentication result including information about the identified person, information indicating authentication success or authentication failure, etc., and transmits the authentication result to the terminal device P1.
  • the server S1 includes a communication unit 20, a processor 21, a memory 25, and a database DB.
  • the server S1 shown in FIG. 1 includes the database DB, but this is only an example and is not limiting.
  • the database DB may be configured separately from the server S1 and implemented by an external device communicably connected to the server S1. Also, there may be a plurality of databases DB.
  • the communication unit 20 is connected to the communication unit 10 of the terminal device P1 via the network NW so as to be capable of wireless communication or wired communication, and transmits and receives data.
  • the wireless communication here is communication via a wireless LAN such as Wi-Fi (registered trademark).
  • the communication unit 20 outputs to the processor 21 the lip print data transmitted from the communication unit 10 of the terminal device P1. In addition, the communication unit 20 transmits the authentication result information output from the processor 21 to the communication unit 10 of the terminal device P1.
  • the processor 21 is configured using, for example, a CPU or FPGA, and cooperates with the memory 25 to perform various types of processing and control. Specifically, the processor 21 refers to the program held in the memory 25 and implements the functions of each unit such as the matching unit 22 and the score determination unit 24 by executing the program.
  • the matching unit 22 extracts a feature amount indicating the individuality of the person to be authenticated from the lip print data transmitted from the terminal device P1, compares the extracted feature amount of the lip print with the feature amount of the lip print of at least one person registered in the database DB, and calculates a matching score indicating the degree of matching between the extracted feature amount and the feature amount of the lip print registered in the database DB.
  • the matching unit 22 is configured to be able to implement the function of the feature amount extraction unit 23.
  • The matching score here is calculated to be higher as the distance between the feature amount of the lip print of the person to be authenticated and the feature amount of the lip print of a person registered in the database DB becomes smaller (that is, the closer the two are). The distance here may be, for example, the Euclidean distance.
  • In that case, the matching score is calculated such that the smaller the Euclidean distance between the acquired lip print feature amount and a lip print feature amount registered in the database DB, the higher the score, and the larger the Euclidean distance, the lower the score.
  • Alternatively, the matching score may be a score indicating the degree of similarity between the feature amount of the lip print of the person to be authenticated and the feature amount of the lip print of a person registered in the database DB. In such a case, the higher the similarity, the higher the matching score, and the lower the similarity, the lower the matching score.
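  • The mapping from feature distance to a 0-100 matching score is not given in the patent; the sketch below uses an exponential decay as one illustrative choice, with the scale constant picked arbitrarily.

```python
import numpy as np

SCORE_SCALE = 4.0  # hypothetical: how fast the score falls off with distance

def matching_score(probe: np.ndarray, registered: np.ndarray) -> float:
    """Score in [0, 100]; smaller Euclidean distance gives a higher score."""
    distance = float(np.linalg.norm(probe - registered))
    return 100.0 * float(np.exp(-distance / SCORE_SCALE))
```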
  • the feature amount extraction unit 23 extracts the feature amount of the lip print of the person to be authenticated from the lip print data transmitted from the terminal device P1.
  • the feature amount extraction unit 23 outputs the extracted feature amount of the lip print to the matching unit 22.
  • the matching unit 22 collates the feature amount of the lip print output from the feature amount extraction unit 23 with the feature amounts of the lip prints registered in the database DB, and calculates a matching score.
  • the matching unit 22 selects the highest matching score among the calculated matching scores.
  • the matching unit 22 extracts the personal information of the person corresponding to the selected matching score from the database DB, associates the selected matching score with the extracted personal information, and outputs them to the score determination unit 24.
  • Based on the selected matching score, the score determination unit 24 determines whether the person to be authenticated and the person corresponding to this matching score are the same person.
  • If no matching score exceeds the threshold, the score determination unit 24 may determine that the person to be authenticated is not registered in the database DB (that is, that authentication has failed).
  • the score determination unit 24 generates an authentication result based on the determination result of determining whether or not the person to be authenticated and the person corresponding to the matching score are the same person, and transmits the authentication result to the terminal device P1 via the communication unit 20. It should be noted that the authentication result is generated including the specified person information, information indicating authentication success or authentication failure, and the like.
  • For example, when the matching score is calculated as a value from 0 (zero) to 100, the score determination unit 24 determines whether the matching score is 80 or less. When the score determination unit 24 determines that the matching score is not 80 or less (that is, it exceeds 80), it determines that the person to be authenticated and the person corresponding to this matching score are the same person (that is, authentication succeeds).
  • When the score determination unit 24 determines that the matching score is 80 or less, it determines that the person to be authenticated and the person corresponding to this matching score are not the same person (that is, authentication fails). As a result, the score determination unit 24 can more effectively suppress authentication errors that can occur when the matching score is low (that is, when the similarity between the feature amount of the lip print of the person to be authenticated and a feature amount registered in the database DB is low).
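  • Putting the selection and the threshold rule together, the decision described above can be sketched as follows; the record fields are illustrative, not the patent's data layout.

```python
THRESHOLD = 80.0  # matching scores of 80 or less are treated as failures

def authenticate(scored_candidates: list[tuple[float, dict]]) -> dict:
    """Pick the highest-scoring registrant and apply the threshold rule."""
    best_score, person = max(scored_candidates, key=lambda sc: sc[0])
    if best_score > THRESHOLD:
        return {"result": "success", "person": person, "score": best_score}
    return {"result": "failure", "person": None, "score": best_score}
```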
  • the memory 25 has, for example, a RAM as a work memory that is used when executing the processing of the processor 21, and a ROM that stores a program that defines the processing of the processor 21. Data generated or acquired by the processor 21 is temporarily stored in the RAM. A program that defines the processing of the processor 21 is written in the ROM.
  • the database DB is a so-called storage, and is configured using a storage medium such as flash memory, HDD (Hard Disk Drive), or SSD (Solid State Drive).
  • the database DB stores (registers) personal information (for example, name, age, gender, etc.) for each person in association with a lip print or a feature amount of the lip print.
  • the database DB may store (register) each of a plurality of lip prints with different mouth openings for each person.
  • the lip prints registered in the database DB may include, for each person, a lip print with the mouth closed, a lip print while uttering the vowel "a", and a lip print while uttering the vowel "i".
  • In that case, the database DB stores (registers) the personal information, the lip print, and information on how the mouth was opened when the lip print was acquired, or information on the vowel uttered when the lip print was acquired, in association with each other.
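  • One way to picture the stored records is the sketch below; the field names and types are assumptions for illustration, not the patent's schema.

```python
from dataclasses import dataclass, field

@dataclass
class LipPrintRecord:
    condition: str         # e.g. "mouth_closed", "vowel_a", "vowel_i"
    features: list[float]  # feature amount extracted from the lip print

@dataclass
class RegisteredPerson:
    name: str
    age: int
    gender: str
    lip_prints: list[LipPrintRecord] = field(default_factory=list)
```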
  • the network NW connects the terminal device P1 and the server S1 for wired communication or wireless communication, and enables data transmission/reception.
  • Although FIG. 1 shows an example in which the camera C1 and the terminal device P1 are configured as separate devices, the camera C1 may be integrated with the terminal device P1 and configured to be able to implement the various functions of the terminal device P1.
  • FIG. 2 is a flow chart showing an example of the operation procedure of the biometric authentication system 100 according to the first embodiment.
  • FIG. 3 is a diagram illustrating an example of image processing.
  • FIG. 5 is a table showing the correspondence relationship between the operation procedure example of each embodiment shown in FIGS. 2 and 7 and the action subject of the operation procedure.
  • In the operation procedure of FIG. 2, the terminal device P1 executes steps St11 to St17, and the server S1 executes steps St18 to St19.
  • the terminal device P1 acquires the captured image Pt11 including the face of the person to be authenticated transmitted from the camera C1 (St11).
  • the terminal device P1 detects a face area Ar11 including a person's face from the acquired captured image Pt11, and estimates the distance between the subject (that is, the person to be authenticated) and the camera C1 based on the size (area) of the face area Ar11 with respect to the entire captured image (St12).
  • the terminal device P1 determines the zoom magnification of the camera C1 based on the estimated distance and the predetermined distance.
  • the terminal device P1 changes the currently set zoom magnification to the determined new zoom magnification, and generates a control command for re-imaging the person.
  • the terminal device P1 transmits the generated control command to the camera C1 and controls the zoom magnification of the camera C1 (St13).
  • After adjusting the zoom magnification based on the control command transmitted from the terminal device P1, the camera C1 executes the imaging process again and transmits the captured image to the terminal device P1.
  • the terminal device P1 acquires again the captured image Pt12 including the person's face transmitted from the camera C1 (St14).
  • the terminal device P1 detects a face area Ar12 including a person's face from the acquired captured image Pt12, and re-estimates the distance between the subject (person to be authenticated) and the camera C1 based on the size (area) of the face area Ar12 with respect to the entire captured image.
  • If the terminal device P1 determines that the re-estimated distance is a distance at which the person's lip print can be obtained from the captured image, the process proceeds to step St15. On the other hand, if the terminal device P1 determines that the re-estimated distance is not a distance at which the person's lip print can be obtained from the captured image, the process returns to step St13, and the terminal device P1 generates a control command to adjust the zoom magnification of the camera C1 again.
  • the terminal device P1 extracts the face area Ar13 of the person shown in the captured image Pt12 (St15), and further extracts the lip area Pt14 showing the lips (mouth) of the person from the extracted face area Ar13 (St16).
  • the terminal device P1 generates a lip image obtained by cutting out the extracted lip region Pt14, and corrects the contrast so that the lip print can be detected from the wrinkles of the lips reflected in the generated lip image (St17).
  • the terminal device P1 extracts lip creases (edges) from the contrast-corrected lip image, and performs binarization processing on the lip image (St17).
  • the terminal device P1 extracts the lip print of the person to be authenticated from the binarized lip image Pt15, generates lip print data, and transmits the generated lip print data to the server S1.
  • Based on the lip print data transmitted from the terminal device P1, the server S1 extracts a feature amount indicating the individuality of the person.
  • the server S1 collates the extracted feature amount based on the lip print of the person to be authenticated with the feature amount based on the lip print of at least one person registered in the database DB (St18).
  • the server S1 calculates a matching score between the extracted lip print feature amount and the lip print feature amount registered in the database DB.
  • the server S1 selects the highest matching score among the calculated matching scores, and determines (identifies) that the person corresponding to the selected matching score is the same person as the person to be authenticated (St19).
  • the server S1 generates an authentication result including personal information corresponding to the specified person, and transmits it to the terminal device P1.
  • the terminal device P1 generates a screen (not shown) including the authentication result transmitted from the server S1, and outputs and displays it on the display unit 16.
  • the biometric authentication system 100 can more efficiently acquire a captured image from which a lip print used for biometric authentication can be acquired.
  • the biometric authentication system 100 can further improve authentication accuracy by executing biometric authentication using this captured image.
  • the biometric authentication system 100 estimates the distance between the camera C1 and the person to be authenticated based on the size of the person's face in the captured image captured by the camera C1.
  • An example in which the biometric authentication system 100A according to the modification of the first embodiment measures the distance between the camera C1 and the person to be authenticated using the ranging sensor SS1 will be described.
  • the same reference numerals are given to the same configurations as the biometric authentication system 100 according to the first embodiment, and the description thereof will be omitted.
  • FIG. 4 is a diagram showing an overall configuration example of a biometric authentication system 100A according to a modification of the first embodiment.
  • the biometric authentication system 100A includes a camera C1, a ranging sensor SS1, a terminal device P1A, and a server S1.
  • The biometric authentication system 100A uses the ranging sensor SS1 to measure the distance between the camera C1 and the person to be authenticated, and when the terminal device P1A determines that the measured distance allows capturing an image from which a lip print can be obtained, the camera C1 captures an image of the person.
  • the biometric authentication system 100A uses the terminal device P1A to generate a lip image by extracting (cutting out) a lip region in which a person's lips appear from the captured image, and acquires a lip print based on the generated lip image.
  • the biometric authentication system 100A identifies a person to be authenticated and determines whether or not the person to be authenticated is a person registered in the database DB by collating the lip print acquired by the terminal device P1A with the lip print of at least one person registered in the database DB by the server S1.
  • the camera C1 is connected to the terminal device P1A so that data can be transmitted and received.
  • the camera C1 is controlled by the terminal device P1A to capture an image of a person to be authenticated, and transmits the captured image to the terminal device P1A.
  • the ranging sensor SS1 is realized by, for example, a TOF (Time Of Flight) sensor or the like.
  • the ranging sensor SS1 is connected to the terminal device P1A so as to be able to transmit and receive data, measures the distance between the camera C1 and the person to be authenticated, and transmits the measured distance information to the terminal device P1A.
  • the terminal device P1A is implemented using, for example, a PC, notebook PC, tablet terminal, smartphone, or the like.
  • the terminal device P1A is connected to each of the distance measuring sensor SS1, the camera C1, and the server S1 so that data can be transmitted and received.
  • a terminal device P1A in the modification of Embodiment 1 includes a communication unit 10A, a processor 11A, a memory 15, and a display unit 16.
  • the communication unit 10A is connected to each of the ranging sensor SS1 and the camera C1 so that data can be transmitted and received, and is connected to the communication unit 20 of the server S1 through the network NW so as to be able to communicate wirelessly or by wire.
  • the communication unit 10A outputs distance information transmitted from the ranging sensor SS1 to the processor 11A, outputs captured images transmitted from the camera C1 to the processor 11A, and outputs matching results transmitted from the server S1 to the processor 11A. Further, the communication unit 10A transmits various control commands output from the processor 11A (for example, a control command for adjusting the zoom magnification, a control command for executing imaging processing, etc.) to the camera C1, and transmits lip print data output from the processor 11A to the server S1.
  • the processor 11A is configured using, for example, a CPU, DSP or FPGA, and controls the operation of each unit.
  • the processor 11A cooperates with the memory 15 to collectively perform various processes and controls.
  • the processor 11A refers to the programs and data held in the memory 15 and executes the programs, thereby implementing the functions of the imaging control section 12A, the lip print extraction section 14, and the like.
  • The imaging control unit 12A determines whether or not the distance between the camera C1 and the person to be authenticated is equal to or less than a predetermined distance, based on the distance information transmitted from the ranging sensor SS1. When the imaging control unit 12A determines that the distance is equal to or less than the predetermined distance, it generates a control command for capturing an image of the person to be authenticated, and transmits the control command to the camera C1.
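  • In contrast to Embodiment 1, the gating here is driven by a measured distance rather than an estimate. A minimal sketch of this logic, with hypothetical command and callback names, might look like this:

```python
CAPTURE_DISTANCE_M = 0.6  # predetermined distance for lip-print capture

def on_tof_reading(distance_m: float, send_to_camera) -> None:
    """Trigger a capture only when the TOF-measured distance is in range."""
    if distance_m <= CAPTURE_DISTANCE_M:
        send_to_camera({"command": "capture"})
```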
  • The imaging control unit 12A detects a person's face from the captured image transmitted from the camera C1, and extracts a face area containing the detected face. Based on the information of the extracted face area, the lip print extraction unit 14 detects the parts constituting the face, such as the eyes, nose, and mouth (lips) of the person to be authenticated, cuts out (extracts) the lip area (that is, the area in which the mouth appears) from the captured image, and generates a lip image.
  • the camera C1 may be configured integrally with the terminal device P1A and configured to be able to implement various functions of the terminal device P1A.
  • In the modification of Embodiment 1, the ranging sensor SS1 performs the process of step St12, the terminal device P1A performs the processes of steps St13 to St17, and the server S1 performs the processes of steps St18 to St19 (see FIG. 5). The process of step St11 is omitted in this modification.
  • the biometric authentication system 100A can more efficiently acquire a captured image from which a lip print used for biometric authentication can be acquired.
  • the biometric authentication system 100A can further improve authentication accuracy by executing biometric authentication using this captured image.
  • the biometric authentication system 100 estimates the distance between the camera C1 and the person to be authenticated by the terminal device P1, and executes zoom magnification and imaging control of the camera C1.
  • An example will be described in which the biometric authentication system 200 according to the second embodiment estimates the distance between the camera C1B and the person to be authenticated using the ranging sensor SS2, and executes zoom magnification and imaging control of the camera C1B.
  • the same reference numerals are given to the same configurations as the biometric authentication system 100 according to the first embodiment or the biometric authentication system 100A according to the modification of the first embodiment, and the description thereof will be omitted.
  • FIG. 6 is a block diagram showing an internal configuration example of the biometric authentication system 200 according to the second embodiment.
  • the biometric authentication system 200 includes a camera C1B, a ranging sensor SS2, a terminal device P1B, and a server S1.
  • the biometric authentication system 200 measures the distance between the camera C1B and the person to be authenticated by the distance measuring sensor SS2, and if it is determined that the measured distance is the distance at which the captured image from which the lip print can be obtained can be captured, the camera C1B captures the person.
  • the biometric authentication system 200 uses the terminal device P1B to generate a lip image by extracting (cutting out) a lip region in which a person's lips appear from the captured image, and acquires a lip print based on the generated lip image.
  • the biometric authentication system 200 identifies a person to be authenticated and determines whether or not the person to be authenticated is a person registered in the database DB by collating the lip print acquired by the terminal device P1B with the lip print of at least one person registered in the database DB by the server S1.
  • the camera C1B is connected so as to be able to transmit and receive data between the ranging sensor SS2 and the terminal device P1B.
  • the camera C1B is controlled by the ranging sensor SS2 to capture an image of the person to be authenticated, and transmits the captured image to the terminal device P1B.
  • the distance measuring sensor SS2 is implemented by, for example, a computer, device, or the like including a distance measuring unit 30 such as a TOF sensor.
  • the ranging sensor SS2 is connected to the camera C1B so as to be able to transmit and receive data.
  • the ranging sensor SS2 includes a distance measuring unit 30, a communication unit 31, a processor 32, and a memory 33.
  • Note that the distance measuring unit 30 is not essential and may be omitted. In that case, the ranging sensor SS2 estimates the distance between the camera C1B and the person to be authenticated based on the size (area) of the person's face region in the captured image captured by the camera C1B.
  • the distance measurement unit 30 measures the distance between the camera C1B and the person to be authenticated, and outputs the measured distance information to the processor 32.
  • the communication unit 31 is connected to the camera C1B for wireless communication or wired communication, and executes data transmission/reception.
  • the wireless communication here is communication via a wireless LAN such as Wi-Fi (registered trademark).
  • the communication unit 31 transmits various control commands output from the processor 32 (for example, a control command for adjusting the zoom magnification, a control command for executing an imaging process, etc.) to the camera C1B, outputs the captured image transmitted from the camera C1B to the processor 32, and transmits the captured image to the terminal device P1B.
  • the processor 32 is configured using, for example, a CPU or FPGA, and cooperates with the memory 33 to perform various types of processing and control. Specifically, the processor 32 refers to the program held in the memory 33 and implements the function of each part by executing the program.
  • the processor 32 determines whether the distance between the camera C1B and the person to be authenticated is equal to or less than a predetermined distance. When the processor 32 determines that the distance between the camera C1B and the person to be authenticated is equal to or less than a predetermined distance, the processor 32 generates a control command for imaging the person to be authenticated, outputs it to the communication unit 31, and transmits it to the camera C1B.
  • When the processor 32 determines that the distance between the camera C1B and the person to be authenticated is not equal to or less than the predetermined distance, it determines the zoom magnification of the camera C1B based on the difference between the measured distance and the predetermined distance.
  • the processor 32 changes the currently set zoom magnification to the determined new zoom magnification, generates a control command to image the person again, outputs it to the communication unit 31, and transmits it to the camera C1B.
  • the processor 32 acquires the captured image transmitted from the camera C1B and detects the face area of the person who is the authentication target in the acquired captured image.
  • the processor 32 estimates the distance between the camera C1B and the person to be authenticated based on the size (area, number of pixels) of the face area of the person with respect to the entire area (total number of pixels) of the captured image, and determines whether the estimated distance is equal to or less than a predetermined distance.
  • When the processor 32 determines that the estimated distance is equal to or less than the predetermined distance, it transmits the captured image transmitted from the camera C1B to the terminal device P1B.
  • On the other hand, when the processor 32 determines that the estimated distance is not equal to or less than the predetermined distance, it determines the zoom magnification of the camera C1B based on the difference between the estimated distance and the predetermined distance.
  • the processor 32 changes the zoom magnification currently set for the camera C1B to the determined new zoom magnification, and generates a control command to image the person again.
  • the processor 32 outputs the generated control command to the communication unit 31 and causes it to be transmitted to the camera C1B.
  • the memory 33 has, for example, a RAM as a work memory that is used when executing each process of the processor 32, and a ROM that stores programs and data that define the operation of the processor 32. Data or information generated or obtained by the processor 32 is temporarily stored in the RAM. A program that defines the operation of the processor 32 is written in the ROM.
  • a terminal device P1B according to Embodiment 2 includes a communication unit 10B, a processor 11B, a memory 15, and a display unit 16.
  • the communication unit 10B is connected to each of the ranging sensor SS2 and the camera C1B so that data can be transmitted and received, and is connected to the communication unit 20 of the server S1 via the network NW so as to be able to communicate wirelessly or by wire.
  • the communication unit 10B outputs the captured image transmitted from the camera C1B or the ranging sensor SS2 to the processor 11B, and outputs the matching result transmitted from the server S1 to the processor 11B. Further, the communication unit 10B transmits various control commands output from the processor 11B (for example, a control command for adjusting the zoom magnification, a control command for executing imaging processing, etc.) to the camera C1B, and transmits lip print data output from the processor 11B to the server S1.
  • the processor 11B is configured using, for example, a CPU, DSP or FPGA, and controls the operation of each unit.
  • the processor 11B cooperates with the memory 15 to collectively perform various processes and controls.
  • the processor 11B refers to the programs and data held in the memory 15 and executes the programs, thereby implementing the functions of the lip print extraction unit 14 and other units.
  • FIG. 7 is a flow chart showing an example of the operation procedure of the biometric authentication system 200 according to the second embodiment.
  • In the operation procedure of FIG. 7, the ranging sensor SS2 performs steps St21 to St23 and step St25, the terminal device P1B performs steps St24 to St27, and the server S1 performs steps St28 to St29.
  • The operation procedure shown in FIG. 7 assumes a configuration in which the distance measuring unit 30 is omitted from the ranging sensor SS2. When the distance measuring unit 30 is not omitted, the biometric authentication system 200 according to Embodiment 2 transmits the captured image from the ranging sensor SS2 to the terminal device P1B after the process of step St23, and the terminal device P1B executes the processing for extracting a face area from the transmitted captured image.
  • the ranging sensor SS2 acquires the captured image Pt11 (see FIG. 3) including the face of the person who is the authentication target, transmitted from the camera C1B (St21).
  • the ranging sensor SS2 detects a face area Ar11 including the person's face from the captured image Pt11, and estimates the distance between the subject (the person to be authenticated) and the camera C1B based on the size (area) of the face area Ar11 (see FIG. 3) relative to the entire captured image (St22).
  • the ranging sensor SS2 determines whether or not the estimated distance is equal to or less than a predetermined distance (St23).
  • When the ranging sensor SS2 determines in step St23 that the estimated distance is equal to or less than the predetermined distance (St23, YES), it transmits the captured image to the terminal device P1B.
  • When the ranging sensor SS2 determines in step St23 that the estimated distance is not equal to or less than the predetermined distance (St23, NO), it determines the zoom magnification of the camera C1B based on the difference between the estimated distance and the predetermined distance.
  • the distance measuring sensor SS2 generates a control command to change the currently set zoom magnification to the determined new zoom magnification and to image the person again.
  • The ranging sensor SS2 transmits the generated control command to the camera C1B to control the zoom magnification of the camera C1B.
  • In addition, the ranging sensor SS2 generates a control command for displaying an imaging support screen (not shown) asking the person to be authenticated to move toward or away from the camera C1B, and transmits it to the terminal device P1B.
  • the terminal device P1B generates an imaging support screen based on the control command transmitted from the ranging sensor SS2, and outputs and displays the generated imaging support screen on the display unit 16 (St25).
  • After adjusting the zoom magnification based on the control command transmitted from the ranging sensor SS2, the camera C1B executes the imaging process again and transmits the captured image to the ranging sensor SS2.
  • the terminal device P1B acquires the captured image Pt12 (see FIG. 3) including the person's face transmitted from the ranging sensor SS2.
  • the terminal device P1B extracts a face area Ar13 (see FIG. 3) of the person reflected in the acquired captured image Pt12 (see FIG. 3) (St24), and further extracts a lip area Pt14 (see FIG. 3) showing the lips (mouth) of the person from the extracted face area Ar13 (St26).
  • the terminal device P1B generates a lip image obtained by cutting out the extracted lip region Pt14, and corrects the contrast so that the lip print can be detected from the wrinkles of the lips reflected in the generated lip image (St27).
  • the terminal device P1B extracts lip wrinkles (edges) from the contrast-corrected lip image, and performs binarization processing on the lip image (St27).
  • the terminal device P1B extracts the lip print of the person to be authenticated from the binarized lip image Pt15 (see FIG. 3), generates lip print data, and transmits the generated lip print data to the server S1.
  • Based on the lip print data transmitted from the terminal device P1B, the server S1 extracts a feature amount indicating the individuality of the person, and collates the extracted feature amount of the lip print with the feature amounts of the lip prints registered in the database DB (St28).
  • the server S1 calculates a matching score between the extracted lip print feature amount and the lip print feature amount registered in the database DB.
  • the server S1 selects the highest matching score among the calculated matching scores, and determines (identifies) that the person corresponding to the selected matching score is the same person as the person to be authenticated (St29).
  • the server S1 generates an authentication result including personal information corresponding to the specified person, and transmits it to the terminal device P1B.
  • the terminal device P1B generates a screen (not shown) including the authentication result transmitted from the server S1, and outputs and displays it on the display unit 16.
  • the biometric authentication system 200 can more efficiently acquire a captured image from which a lip print used for biometric authentication can be acquired.
  • the biometric authentication system 200 can further improve authentication accuracy by executing biometric authentication using this captured image.
  • the biometric authentication systems 100, 100A, and 200 include the terminal devices P1, P1A, and P1B and the server S1 that can communicate with the terminal devices P1, P1A, and P1B.
  • The terminal devices P1, P1A, and P1B output control commands (examples of imaging commands) to the cameras C1 and C1B that capture images of people, acquire captured images of the people captured by the cameras C1 and C1B, extract the face regions of the people in the captured images, generate control commands based on the size of the face regions, output the generated control commands to the cameras C1 and C1B, extract the lip prints of the people from the captured images, and transmit the lip prints to the server S1.
  • the server S1 identifies a person based on the extracted lip print.
  • the biometric authentication systems 100, 100A, and 200 estimate the distance between the camera C1 and the person based on the size of the face region, and generate a control command when determining that the estimated distance is equal to or less than a predetermined distance, thereby more efficiently acquiring a captured image from which a lip print used for biometric authentication can be acquired.
  • When the terminal device P1 of the biometric authentication system 100 collates the extracted lip print with the registered lip prints of at least one registered person and determines that there is a registered lip print identical or similar to the extracted lip print, it identifies the person corresponding to that identical or similar registered lip print as the person of the extracted lip print.
  • the biometric authentication system 100 according to Embodiment 1 can perform biometric authentication based on the lip print appearing in the captured image captured by the camera C1.
  • the terminal device P1 of the biometric authentication system 100 according to Embodiment 1 estimates the distance between the camera C1 and the person based on the size of the face area, and generates a control command when determining that the estimated distance is equal to or less than the predetermined distance.
  • the biometric authentication system 100 according to Embodiment 1 can determine whether or not the image captured by the camera C1 is an image from which a lip print used for biometric authentication can be obtained.
  • When the terminal device P1 of the biometric authentication system 100 determines that the estimated distance is not equal to or less than the predetermined distance, it generates a control command (an example of a zoom command) for changing the zoom magnification of the camera C1, and outputs the generated control command to the camera C1.
  • When the biometric authentication system 100 according to Embodiment 1 determines that the captured image captured by the camera C1 is not a captured image from which a lip print used for biometric authentication can be obtained, it can change the zoom magnification of the camera C1 and thereby more efficiently acquire a captured image from which such a lip print can be obtained.
  • the terminal device P1 of the biometric authentication system 100 determines the zoom magnification of the camera C1 based on the difference between the estimated distance and the predetermined distance, and generates a control command to change the current zoom magnification of the camera C1 to the determined zoom magnification.
  • the biometric authentication system 100 according to Embodiment 1 can more efficiently acquire a captured image from which a lip print used for biometric authentication can be acquired by changing the zoom magnification of the camera C1.
  • the biometric authentication systems 100A and 200 according to the modification of the first embodiment and the second embodiment acquire the distance between the cameras C1 and C1B and the person, generate a control command based on the acquired distance, and output the generated control command to the camera.
  • the biometric authentication systems 100A and 200 according to the modification of the first embodiment and the second embodiment can determine whether or not the cameras C1 and C1B are capable of capturing a captured image from which a lip print used for biometric authentication can be obtained, based on the distance between the cameras C1 and C1B and the person to be authenticated.
  • the biometric authentication systems 100A and 200 according to the modification of the first embodiment and the second embodiment generate a control command when determining that the acquired distance is equal to or less than the predetermined distance.
  • the biometric authentication systems 100A and 200 according to the modification of the first embodiment and the second embodiment can cause the cameras C1 and C1B to image the person when it is determined that the distance between the cameras C1 and C1B and the person to be authenticated is a distance at which the lip print used for biometric authentication can be obtained. Therefore, the biometric authentication systems 100A and 200 can more efficiently acquire captured images from which lip prints used for biometric authentication can be acquired.
  • the biometric authentication systems 100, 100A, and 200 according to the first embodiment, the modified example of the first embodiment, and the second embodiment generate a lip image obtained by extracting a lip region including a person's lips from a captured image, and extract a person's lip print from the generated lip image.
  • the biometric authentication systems 100, 100A, and 200 according to the first embodiment, the modified example of the first embodiment, and the second embodiment can acquire the lip print of the person to be authenticated from the images captured by the cameras C1 and C1B.
  • the biometric authentication systems 100, 100A, and 200 according to the first embodiment, the modified example of the first embodiment, and the second embodiment binarize the lip image and extract the lip print from the binarized lip image.
  • the biometric authentication systems 100, 100A, and 200 according to the first embodiment, the modified example of the first embodiment, and the second embodiment can acquire a lip print more suitable for biometric authentication.
  • the biometric authentication systems 100, 100A, and 200 according to the first embodiment, the modified example of the first embodiment, and the second embodiment correct the contrast of the lip image, extract the lip edge (that is, the lip wrinkle) from the contrast-corrected lip image, and binarize the lip image based on the extracted lip edge.
  • the biometric authentication systems 100, 100A, and 200 according to the first embodiment, the modified example of the first embodiment, and the second embodiment can acquire a lip print more suitable for biometric authentication.
  • the present disclosure is useful as a biometric authentication device, a biometric authentication system, and a biometric authentication method that more efficiently acquire a captured image from which a lip print used for biometric authentication can be acquired.

Abstract

This biometric authentication system includes a terminal device and a server capable of communicating with the terminal device. The terminal device outputs an imaging instruction to a camera that images a person, acquires from the camera a captured image of the person, extracts a facial region of the person shown in the captured image, generates an imaging instruction on the basis of the size of the facial region, outputs the generated imaging instruction to the camera, extracts a lip print of the person from the captured image, and sends the lip print to the server. The server identifies the person on the basis of the extracted lip print.

Description

Biometric authentication system and biometric authentication method

The present disclosure relates to a biometric authentication system and a biometric authentication method.
Patent Document 1 discloses a face authentication system that performs image processing on a face image of a visitor's face, extracts features, and performs personal authentication by searching and collating registered face images based on the features. The face authentication system normalizes the size of a face image and detects contour lines; it detects general features, consisting of closed regions whose contours are closed, and unique features, consisting of line segments whose contours have open ends and of isolated points; and it recognizes a visitor by pattern-matching, using the general and unique features, a feature-processed face image file registered by overwriting those features on a registrant's face image against a feature-processed image file of the visitor. Here, general features refer to the contours of the face, hair, eyes, eyebrows, nostrils, and mouth. Unique features refer to wrinkles, blemishes, moles, and the like caused by unevenness of the face.

Patent Document 1: Japanese Patent Application Laid-Open No. 2005-242432

However, in Patent Document 1, since the size of the face image is normalized, the image quality may deteriorate. Moreover, when general features and unique features are detected from a face image whose image quality has been degraded by normalization, the face authentication system may be unable to extract the visitor's lip print from the face image.
 本発明は、上述した従来の状況に鑑みてなされたものであり、生体認証に用いられる口唇紋を取得可能な撮像画像をより効率的に取得する生体認証システムおよび生体認証方法を提供することを目的とする。 The present invention has been made in view of the conventional situation described above, and aims to provide a biometric authentication system and a biometric authentication method that more efficiently acquire a captured image from which a lip print used for biometric authentication can be acquired.
 本開示は、端末装置と、前記端末装置との間で通信可能なサーバと、を含んで構成される生体認証システムであって、前記端末装置は、人物を撮像するカメラに撮像指令を出力し、前記カメラから前記人物が撮像された撮像画像を取得し、前記撮像画像に映る前記人物の顔領域を抽出し、前記顔領域の大きさに基づいて、前記撮像指令を生成し、生成された前記撮像指令を前記カメラに出力し、前記撮像画像から前記人物の口唇紋を抽出して、前記サーバに送信し、前記サーバは、抽出された前記口唇紋に基づいて、前記人物を特定する、生体認証システムを提供する。 The present disclosure is a biometric authentication system including a terminal device and a server capable of communicating with the terminal device, wherein the terminal device outputs an image capturing command to a camera that captures an image of a person, acquires a captured image of the person captured by the camera, extracts the face area of the person reflected in the captured image, generates the image capturing command based on the size of the face area, outputs the generated image capturing command to the camera, extracts the lip print of the person from the captured image, and extracts the lip print of the person from the captured image. and the server provides a biometric authentication system that identifies the person based on the extracted lip print.
 また、本開示は、1以上のコンピュータが行う生体認証方法であって、人物を撮像するカメラに撮像指令を出力し、前記カメラから前記人物が撮像された撮像画像を取得し、前記撮像画像に映る前記人物の顔領域を抽出し、前記顔領域の大きさに基づいて、前記撮像指令を生成し、生成された前記撮像指令を前記カメラに出力し、前記撮像画像から前記人物の口唇紋を抽出し、抽出された前記口唇紋に基づいて、前記人物を特定する、生体認証方法を提供する。 Also, the present disclosure is a biometric authentication method performed by one or more computers, which outputs an imaging command to a camera that captures an image of a person, acquires a captured image of the person captured by the camera, extracts a face region of the person in the captured image, generates the imaging command based on the size of the face region, outputs the generated imaging command to the camera, extracts the lip print of the person from the captured image, and identifies the living person based on the extracted lip print. Provide an authentication method.
 本発明によれば、生体認証に用いられる口唇紋を取得可能な撮像画像をより効率的に取得できる。 According to the present invention, it is possible to more efficiently acquire captured images from which lip prints used for biometric authentication can be acquired.
FIG. 1 is a diagram showing an internal configuration example of the biometric authentication system according to Embodiment 1.
FIG. 2 is a flowchart showing an operation procedure example of the biometric authentication system according to Embodiment 1.
FIG. 3 is a diagram explaining an image processing example.
FIG. 4 is a diagram showing an overall configuration example of a biometric authentication system according to a modification of Embodiment 1.
FIG. 5 is a table showing the correspondence between the operation procedure examples of the embodiments shown in FIGS. 2 and 7 and the entities that perform each step.
FIG. 6 is a diagram showing an overall configuration example of the biometric authentication system according to Embodiment 2.
FIG. 7 is a flowchart showing an operation procedure example of the biometric authentication system according to Embodiment 2.
Hereinafter, embodiments specifically disclosing a biometric authentication system and a biometric authentication method according to the present disclosure will be described in detail with reference to the drawings as appropriate. However, unnecessarily detailed description may be omitted. For example, detailed descriptions of well-known matters and redundant descriptions of substantially identical configurations may be omitted. This is to avoid making the following description unnecessarily redundant and to facilitate understanding by those skilled in the art. Note that the accompanying drawings and the following description are provided so that those skilled in the art can fully understand the present disclosure, and are not intended to limit the claimed subject matter.
(Embodiment 1)

First, the internal configuration of the biometric authentication system 100 according to Embodiment 1 will be described with reference to FIG. 1. FIG. 1 is a diagram showing an internal configuration example of the biometric authentication system 100 according to Embodiment 1.
The biometric authentication system 100 includes a camera C1, a terminal device P1, and a server S1. The biometric authentication system 100 images the person to be authenticated with the camera C1. Using the terminal device P1, it estimates the distance between the camera C1 and the person appearing in the captured image, controls the zoom magnification of the camera C1 based on the estimated distance, and captures an image usable for biometric authentication (that is, a captured image from which the lip print of the person to be authenticated can be obtained).

Using the terminal device P1, the biometric authentication system 100 generates an image obtained by extracting (cutting out) from the captured image the lip region in which the person's lips appear (hereinafter referred to as a "lip image"), and acquires a lip print based on the generated lip image. Using the server S1, the biometric authentication system 100 collates the lip print acquired by the terminal device P1 with the lip prints of at least one person registered in the database DB, thereby identifying the person to be authenticated or determining whether the person to be authenticated is registered in the database DB.
The camera C1 is connected to the terminal device P1 so that data can be transmitted and received. Based on the control commands transmitted from the terminal device P1, the camera C1 executes zoom magnification adjustment processing and imaging processing. The camera C1 images the person to be authenticated and transmits the captured image to the terminal device P1.

The terminal device P1 is implemented using, for example, a PC (Personal Computer), a notebook PC, a tablet terminal, or a smartphone. The terminal device P1 acquires the captured image transmitted from the camera C1 and extracts from it the facial region in which the person's face appears. Based on the size (area) of the facial region relative to the whole captured image, the terminal device P1 determines whether the image captured by the camera C1 is one from which a lip print usable for biometric authentication can be obtained.

When the terminal device P1 determines that the image captured by the camera C1 is one from which a lip print usable for biometric authentication can be obtained, it generates a lip image by extracting (cutting out) the lip region from the captured image. The terminal device P1 extracts a lip print from the generated lip image, generates lip print data, and transmits it to the server S1. On the other hand, when the terminal device P1 determines that the captured image is not one from which such a lip print can be obtained, it outputs to the display unit 16 an imaging support screen (not shown) that asks the person to move toward or away from the camera C1, or executes control to adjust the zoom magnification of the camera C1 and have the person imaged again.

The terminal device P1 includes a communication unit 10, a processor 11, a memory 15, and a display unit 16.
The communication unit 10 is connected to the camera C1 so that data can be transmitted and received, and is connected to the communication unit 20 of the server S1 via the network NW so as to enable wireless or wired communication. The wireless communication referred to here is, for example, communication via a wireless LAN (Local Area Network) such as Wi-Fi (registered trademark).

The communication unit 10 outputs the captured image transmitted from the camera C1 to the processor 11, and outputs the collation result transmitted from the server S1 to the processor 11. The communication unit 10 also transmits various control commands output from the processor 11 (for example, a command to adjust the zoom magnification or a command to execute imaging processing) to the camera C1, and transmits the lip print data output from the processor 11 to the server S1.
The processor 11 is configured using, for example, a CPU (Central Processing Unit), a DSP (Digital Signal Processor), or an FPGA (Field Programmable Gate Array), and controls the operation of each unit. The processor 11 cooperates with the memory 15 to comprehensively perform various processes and controls. Specifically, the processor 11 refers to the programs and data held in the memory 15 and executes the programs, thereby implementing the functions of the imaging control unit 12, the lip print extraction unit 14, and other units.

The imaging control unit 12 executes imaging control of the camera C1. The imaging control unit 12 generates a control command for changing the zoom magnification of the camera C1 and transmits it to the camera C1, thereby causing the camera C1 to adjust its zoom magnification. The imaging control unit 12 is also configured to implement the function of the distance measurement unit 13.

The distance measurement unit 13 analyzes the captured image transmitted from the camera C1 and detects the person's facial region. The distance measurement unit 13 estimates the distance between the camera C1 and the imaged person based on the size (area, number of pixels) of the person's facial region relative to the entire captured image (total number of pixels), and outputs the estimated distance information to the imaging control unit 12.
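As an aside on this step, the following is a minimal sketch in Python (not the patent's implementation) of estimating the camera-to-subject distance from the detected face size. It assumes a pinhole-camera model with a hypothetical average face width and focal length; the Haar-cascade detector and both constants are placeholders.

```python
import cv2

# Hypothetical constants; a real system would calibrate these.
ASSUMED_FACE_WIDTH_MM = 150.0   # assumed average face width
ASSUMED_FOCAL_PX = 1000.0       # assumed focal length in pixels

def estimate_distance_mm(frame_bgr, face_cascade):
    """Estimate camera-to-face distance from the detected face size."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face detected
    # Use the largest detected face region.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    # Pinhole model: distance = focal_px * real_width / pixel_width.
    return ASSUMED_FOCAL_PX * ASSUMED_FACE_WIDTH_MM / w
```

A cascade can be loaded with cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml"); any face detector that returns a bounding box would serve equally well.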
The imaging control unit 12 determines whether the distance output from the distance measurement unit 13 is equal to or less than a predetermined distance. The predetermined distance referred to here is determined based on the resolution of the image sensor (not shown) of the camera C1, and may be set to a distance at which the lip print of the imaged person can be obtained.

When the imaging control unit 12 determines that the distance estimated by the distance measurement unit 13 is equal to or less than the predetermined distance, it outputs the captured image transmitted from the camera C1 to the lip print extraction unit 14. On the other hand, when it determines that the estimated distance is not equal to or less than the predetermined distance, the imaging control unit 12 determines the zoom magnification of the camera C1 based on the difference between the estimated distance and the predetermined distance. The imaging control unit 12 generates a control command that changes the currently set zoom magnification to the newly determined zoom magnification and causes the person to be imaged again, outputs it to the communication unit 10, and has it transmitted to the camera C1.
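The patent states only that the new zoom magnification is derived from the difference between the estimated distance and the predetermined distance; the mapping itself is unspecified. The sketch below assumes one plausible policy, scaling the zoom by the distance ratio and clamping it to the lens range; all constants are illustrative.

```python
def decide_zoom(estimated_mm, predetermined_mm, current_zoom,
                min_zoom=1.0, max_zoom=8.0):
    """Return (capture_now, new_zoom) for the loop described above.

    Assumed policy: if the subject is close enough, keep the current
    zoom and capture; otherwise scale the zoom by the distance ratio.
    """
    if estimated_mm <= predetermined_mm:
        return True, current_zoom  # within range: image is usable as-is
    new_zoom = current_zoom * (estimated_mm / predetermined_mm)
    return False, max(min_zoom, min(new_zoom, max_zoom))
```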
The lip print extraction unit 14 detects the person's face from the image captured by the camera C1 and extracts the facial region in which the detected face appears. Based on the information of the extracted facial region, the lip print extraction unit 14 detects the parts constituting the face of the person to be authenticated, such as the eyes, nose, and mouth (lips), and generates a lip image by cutting out (extracting) the lip region (that is, the region in which the mouth appears) from the captured image.
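A minimal sketch of this cropping step follows, under a stated simplification: a real system would place the lip region using facial landmarks for the detected parts, but here the mouth is simply assumed to lie in the lower-middle portion of the face bounding box.

```python
def crop_lip_region(frame_bgr, face_box):
    """Cut out a lip image from a detected face bounding box.

    Assumption: the lip region occupies roughly the central half of the
    face width and the lower third of the face height.
    """
    x, y, w, h = face_box
    lip_x0 = x + int(0.25 * w)
    lip_x1 = x + int(0.75 * w)
    lip_y0 = y + int(0.65 * h)
    return frame_bgr[lip_y0 : y + h, lip_x0 : lip_x1]
```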
The lip print extraction unit 14 corrects the contrast of the lip image so that lip wrinkles (edges) can be detected more easily in the generated lip image. Here, the lip print extraction unit 14 corrects the contrast of the lip image so that, for example, the contrast ratio or the contrast difference between the wrinkled portions of the lips and the other portions becomes equal to or greater than a predetermined value.

The lip print extraction unit 14 extracts the lip wrinkles (edges) from the contrast-corrected lip image. Based on the extracted lip wrinkle (edge) information, the lip print extraction unit 14 then binarizes the contrast-corrected lip image. This allows the lip print extraction unit 14 to generate a lip image from which a lip print based on the lip wrinkles can be obtained more easily. The lip print extraction unit 14 extracts the lip print from the binarized lip image, outputs the extracted lip print data to the communication unit 10, and has it transmitted to the server S1.
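The patent names these processing stages (contrast correction, wrinkle/edge extraction, binarization) but not the operators. The sketch below assumes CLAHE for local contrast correction, Canny for the wrinkle edges, and adaptive thresholding for the binarized image; other operator choices would fit the description equally well.

```python
import cv2

def preprocess_lip_image(lip_bgr):
    """Contrast-correct, edge-extract, and binarize a lip image.

    Returns (edges, binary): an edge map of the lip wrinkles and a
    binarized lip image from which a lip print can be extracted.
    """
    gray = cv2.cvtColor(lip_bgr, cv2.COLOR_BGR2GRAY)
    # Local contrast enhancement so faint wrinkles become separable.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    # Edge map of the lip wrinkles.
    edges = cv2.Canny(enhanced, 50, 150)
    # Binarization guided by local brightness.
    binary = cv2.adaptiveThreshold(
        enhanced, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY, 15, 4)
    return edges, binary
```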
The memory 15 includes, for example, a RAM (Random Access Memory) as a work memory used when each process of the processor 11 is executed, and a ROM (Read Only Memory) that stores programs and data defining the operation of the processor 11. Data or information generated or acquired by the processor 11 is temporarily stored in the RAM. A program that defines the operation of the processor 11 is written in the ROM.

The display unit 16 is configured using a display such as an LCD (Liquid Crystal Display) or an organic EL (Electroluminescence) display. The display unit 16 displays the imaging support screen (not shown) output from the processor 11, a screen (not shown) for notifying the authentication result, and the like.
The server S1 is implemented by a general-purpose server or a cloud server. The server S1 acquires the lip print data transmitted from the terminal device P1, collates the feature amount of the acquired lip print with the lip print feature amounts of at least one person registered in the database DB, and calculates a matching score. Based on the calculated matching score, the server S1 identifies the person to be authenticated, generates an authentication result including the identified person's information and information indicating authentication success or failure, and transmits it to the terminal device P1.

The server S1 includes a communication unit 20, a processor 21, a memory 25, and a database DB. Although the server S1 shown in FIG. 1 includes the database DB, the configuration is not limited to this. For example, the database DB may be configured separately from the server S1 and implemented by an external device communicably connected to the server S1. There may also be a plurality of databases DB.
The communication unit 20 is connected to the communication unit 10 of the terminal device P1 via the network NW so as to enable wireless or wired communication, and transmits and receives data. The wireless communication referred to here is, for example, communication via a wireless LAN such as Wi-Fi (registered trademark).

The communication unit 20 outputs the lip print data transmitted from the communication unit 10 of the terminal device P1 to the processor 21. The communication unit 20 also transmits the authentication result information output from the processor 21 to the communication unit 10 of the terminal device P1.

The processor 21 is configured using, for example, a CPU or an FPGA, and cooperates with the memory 25 to perform various processes and controls. Specifically, the processor 21 refers to the program held in the memory 25 and implements the functions of the collation unit 22, the score determination unit 24, and other units by executing the program.
The collation unit 22 extracts, from the lip print data transmitted from the terminal device P1, a feature amount indicating the individuality of the person to be authenticated, collates the extracted lip print feature amount with the lip print feature amounts of at least one person registered in the database DB, and calculates a matching score indicating the degree of match between the extracted feature amount and the registered lip print feature amounts. The collation unit 22 is configured to implement the function of the feature amount extraction unit 23.

The matching score referred to here is calculated to be higher as the distance between the lip print feature amount of the person to be authenticated and the lip print feature amount of a person registered in the database DB becomes smaller (that is, the more similar they are). The distance referred to here may be the Euclidean distance: the smaller the (Euclidean) distance between the acquired lip print feature amount and a registered lip print feature amount, the higher the matching score, and the larger the distance, the lower the score.

The matching score may instead be a score indicating the degree of similarity between the lip print feature amount of the person to be authenticated and the lip print feature amount of a registered person. In that case, the higher the similarity, the higher the matching score, and the lower the similarity, the lower the matching score.
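As a concrete instance of the monotonic relation described above, the sketch below maps the Euclidean distance between two feature vectors onto a 0-100 score. The patent fixes only the direction of the relation (smaller distance, higher score); the formula here is an assumption.

```python
import numpy as np

def matching_score(query_vec, registered_vec, scale=100.0):
    """Map Euclidean distance to a 0-100 matching score (higher = closer)."""
    d = float(np.linalg.norm(np.asarray(query_vec, dtype=float)
                             - np.asarray(registered_vec, dtype=float)))
    # One possible monotonically decreasing mapping: 100 at d = 0.
    return scale / (1.0 + d)
```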
The feature amount extraction unit 23 extracts the feature amount of the lip print of the person to be authenticated from the lip print data transmitted from the terminal device P1, and outputs the extracted lip print feature amount to the collation unit 22.

The collation unit 22 collates the lip print feature amount output from the feature amount extraction unit 23 with the lip print feature amounts registered in the database DB and calculates matching scores. The collation unit 22 selects the highest of the calculated matching scores.

The collation unit 22 extracts from the database DB the personal information of the person corresponding to the selected matching score, associates the selected matching score with the extracted personal information, and outputs them to the score determination unit 24.

Based on the matching score output from the collation unit 22, the score determination unit 24 determines whether the person to be authenticated and the person corresponding to this matching score are the same person. When the score determination unit 24 determines that the matching score output from the collation unit 22 is equal to or less than a predetermined matching score, it may determine that the person to be authenticated is not registered in the database DB (that is, that authentication has failed). The score determination unit 24 generates an authentication result based on this determination and transmits it to the terminal device P1 via the communication unit 20. The authentication result includes the identified person's information, information indicating authentication success or failure, and the like.
For example, when the matching score is calculated as a value from 0 (zero) to 100, the score determination unit 24 determines whether the matching score is 80 or less. When the score determination unit 24 determines that the matching score is not 80 or less, it determines that the person to be authenticated and the person corresponding to this matching score are the same person (that is, authentication succeeds).

On the other hand, when the score determination unit 24 determines that the matching score is 80 or less, it determines that the person to be authenticated and the person corresponding to this matching score are not the same person (that is, authentication fails). This allows the score determination unit 24 to more effectively suppress the false authentication that can occur when the matching score is low (that is, when the similarity between the lip print feature amount of the person to be authenticated and a lip print feature amount registered in the database DB is low).
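Putting the two preceding paragraphs together, the following sketch selects the best-scoring registrant and applies the example threshold of 80; the dictionary layout (person ID mapped to score) is hypothetical.

```python
def authenticate(scores_by_person, threshold=80.0):
    """Return the matched person ID, or None on authentication failure.

    Scores at or below the threshold are rejected, which suppresses the
    false authentication described above for low-similarity matches.
    """
    if not scores_by_person:
        return None  # empty database: nothing to match against
    person_id, best = max(scores_by_person.items(), key=lambda kv: kv[1])
    return person_id if best > threshold else None
```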
The memory 25 includes, for example, a RAM as a work memory used when the processing of the processor 21 is executed, and a ROM that stores a program defining the processing of the processor 21. Data generated or acquired by the processor 21 is temporarily stored in the RAM. A program that defines the processing of the processor 21 is written in the ROM.

The database DB is so-called storage, and is configured using a storage medium such as a flash memory, an HDD (Hard Disk Drive), or an SSD (Solid State Drive). The database DB stores (registers) personal information for each person (for example, name, age, and gender) in association with a lip print or a lip print feature amount.

The database DB may store (register), for each person, a plurality of lip prints with different mouth openings. For example, the lip prints registered in the database DB may include, for each person, a lip print with the mouth closed, a lip print while uttering the vowel "a", a lip print while uttering the vowel "i", and so on. In such a case, the database DB stores (registers), for each person, the personal information, the lip print, and information on how the mouth was opened when the lip print was acquired, or information on the vowel uttered when the lip print was acquired, in association with one another.
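A hypothetical record layout for such a database is sketched below; the field names, values, and feature-vector length are placeholders for illustration, not a schema given in the disclosure.

```python
# One registrant with several lip prints keyed by mouth state at
# registration time (closed, vowel "a", vowel "i", ...).
db = {
    "person_001": {
        "profile": {"name": "Alice", "age": 30, "gender": "F"},
        "lip_prints": {
            "closed":  [0.12, 0.87, 0.40],   # placeholder feature vectors
            "vowel_a": [0.31, 0.44, 0.58],
            "vowel_i": [0.05, 0.92, 0.13],
        },
    },
}
```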
The network NW connects the terminal device P1 and the server S1 so as to enable wired or wireless communication, making it possible to transmit and receive data.

Although the example shown in FIG. 1 shows the camera C1 and the terminal device P1 configured as separate devices, the camera C1 may be configured integrally with the terminal device P1 and may be configured to implement the various functions of the terminal device P1.
Next, the operation procedure of the biometric authentication system 100 according to Embodiment 1 will be described with reference to FIGS. 2 and 3. FIG. 2 is a flowchart showing an operation procedure example of the biometric authentication system 100 according to Embodiment 1. FIG. 3 is a diagram explaining an image processing example. FIG. 5 is a table showing the correspondence between the operation procedure examples of the embodiments shown in FIGS. 2 and 7 and the entities that perform each step. In the biometric authentication system 100 according to Embodiment 1, as shown in FIG. 5, the terminal device P1 executes the processing of steps St11 to St17, and the server S1 executes the processing of steps St18 to St19.

The terminal device P1 acquires the captured image Pt11 including the face of the person to be authenticated transmitted from the camera C1 (St11). The terminal device P1 detects the facial region Ar11 including the person's face from the acquired captured image Pt11, and estimates the distance between the subject (that is, the person to be authenticated) and the camera C1 based on the size (area) of the facial region Ar11 relative to the whole captured image (St12).

The terminal device P1 determines the zoom magnification of the camera C1 based on the estimated distance and the predetermined distance. The terminal device P1 generates a control command that changes the currently set zoom magnification to the newly determined zoom magnification and causes the person to be imaged again. The terminal device P1 transmits the generated control command to the camera C1 and controls the zoom magnification of the camera C1 (St13).

After adjusting the zoom magnification based on the control command transmitted from the terminal device P1, the camera C1 executes imaging processing again and transmits the captured image to the terminal device P1.
The terminal device P1 again acquires the captured image Pt12 including the person's face transmitted from the camera C1 (St14). The terminal device P1 detects the facial region Ar12 including the person's face from the acquired captured image Pt12, and re-estimates the distance between the subject (the person to be authenticated) and the camera C1 based on the size (area) of the facial region Ar12 relative to the whole captured image.

If the terminal device P1 determines that the re-estimated distance is one at which the person's lip print can be obtained from the captured image, the processing proceeds to step St15. On the other hand, if the terminal device P1 determines that the re-estimated distance is not one at which the person's lip print can be obtained from the captured image, the processing returns to step St13, and the terminal device P1 generates a control command to adjust the zoom magnification of the camera C1 again.

The terminal device P1 extracts the facial region Ar13 of the person shown in the captured image Pt12 (St15), and further extracts from the extracted facial region Ar13 the lip region Pt14 in which the person's lips (mouth) appear (St16). The terminal device P1 generates a lip image by cutting out the extracted lip region Pt14, and corrects the contrast of the generated lip image so that the lip print can be detected from the lip wrinkles (St17). The terminal device P1 extracts the lip wrinkles (edges) from the contrast-corrected lip image and binarizes the lip image (St17). The terminal device P1 extracts the lip print of the person to be authenticated from the binarized lip image Pt15, generates lip print data, and transmits it to the server S1.
Based on the lip print data transmitted from the terminal device P1, the server S1 extracts a feature amount indicating the individuality of the person. The server S1 collates the extracted feature amount based on the lip print of the person to be authenticated with feature amounts based on the lip prints of at least one person registered in the database DB (St18).

The server S1 calculates matching scores between the extracted lip print feature amount and the lip print feature amounts registered in the database DB. The server S1 selects the highest of the calculated matching scores and determines (identifies) that the person corresponding to the selected matching score is the same person as the person to be authenticated (St19). The server S1 generates an authentication result including the personal information corresponding to the identified person and transmits it to the terminal device P1. The terminal device P1 generates a screen (not shown) including the authentication result transmitted from the server S1, and outputs it to the display unit 16 for display.
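Tying the server-side steps St18 and St19 together, the following sketch reuses the matching_score and authenticate functions and the db layout assumed in the earlier sketches, comparing only the "closed" print for brevity; it is illustrative, not the disclosed implementation.

```python
def identify(lip_print_vec, db, threshold=80.0):
    """Score the query against every registrant, keep the best score,
    and decide success or failure by the threshold (St18-St19)."""
    scores = {
        person_id: matching_score(lip_print_vec,
                                  record["lip_prints"]["closed"])
        for person_id, record in db.items()
    }
    person_id = authenticate(scores, threshold)
    return {
        "result": "success" if person_id else "failure",
        "person": db[person_id]["profile"] if person_id else None,
    }
```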
As described above, the biometric authentication system 100 according to Embodiment 1 can more efficiently acquire a captured image from which a lip print used for biometric authentication can be obtained. By executing biometric authentication using this captured image, the biometric authentication system 100 can also further improve authentication accuracy.
(Modification of Embodiment 1)

The biometric authentication system 100 according to Embodiment 1 described above estimates the distance between the camera C1 and the person to be authenticated based on the size of the person's face in the image captured by the camera C1. The biometric authentication system 100A according to the modification of Embodiment 1, described below, measures the distance between the camera C1 and the person to be authenticated using a ranging sensor SS1.
In the description of the biometric authentication system 100A according to the modification of Embodiment 1, the same reference numerals are given to the same configurations as those of the biometric authentication system 100 according to Embodiment 1, and their description is omitted.

First, the overall configuration of the biometric authentication system 100A according to the modification of Embodiment 1 will be described with reference to FIG. 4. FIG. 4 is a diagram showing an overall configuration example of the biometric authentication system 100A according to the modification of Embodiment 1.
The biometric authentication system 100A includes a camera C1, a ranging sensor SS1, a terminal device P1A, and a server S1. The biometric authentication system 100A measures the distance between the person to be authenticated and the camera C1 with the ranging sensor SS1, and when the terminal device P1A determines that the measured distance is one at which a captured image from which a lip print can be obtained can be captured, it images the person with the camera C1.

Using the terminal device P1A, the biometric authentication system 100A generates a lip image by extracting (cutting out) from the captured image the lip region in which the person's lips appear, and acquires a lip print based on the generated lip image. Using the server S1, the biometric authentication system 100A collates the lip print acquired by the terminal device P1A with the lip prints of at least one person registered in the database DB, thereby identifying the person to be authenticated or determining whether the person to be authenticated is registered in the database DB.

The camera C1 is connected to the terminal device P1A so that data can be transmitted and received. The camera C1 is controlled by the terminal device P1A to image the person to be authenticated, and transmits the captured image to the terminal device P1A.

The ranging sensor SS1 is implemented by, for example, a TOF (Time Of Flight) sensor. The ranging sensor SS1 is connected to the terminal device P1A so that data can be transmitted and received, measures the distance between the camera C1 and the person to be authenticated, and transmits the measured distance information to the terminal device P1A.
The terminal device P1A is implemented using, for example, a PC, a notebook PC, a tablet terminal, or a smartphone. The terminal device P1A is connected to the ranging sensor SS1, the camera C1, and the server S1 so that data can be transmitted and received with each of them.

The terminal device P1A in the modification of Embodiment 1 includes a communication unit 10A, a processor 11A, a memory 15, and a display unit 16.

The communication unit 10A is connected to the ranging sensor SS1 and the camera C1 so that data can be transmitted and received with each of them, and is connected to the communication unit 20 of the server S1 via the network NW so as to enable wireless or wired communication.

The communication unit 10A outputs the distance information transmitted from the ranging sensor SS1, the captured image transmitted from the camera C1, and the collation result transmitted from the server S1 to the processor 11A. The communication unit 10A also transmits various control commands output from the processor 11A (for example, a command to adjust the zoom magnification or a command to execute imaging processing) to the camera C1, and transmits the lip print data output from the processor 11A to the server S1.

The processor 11A is configured using, for example, a CPU, a DSP, or an FPGA, and controls the operation of each unit. The processor 11A cooperates with the memory 15 to comprehensively perform various processes and controls. Specifically, the processor 11A refers to the programs and data held in the memory 15 and executes the programs, thereby implementing the functions of the imaging control unit 12A, the lip print extraction unit 14, and other units.
Based on the distance information transmitted from the ranging sensor SS1, the imaging control unit 12A determines whether the distance between the camera C1 and the person to be authenticated is equal to or less than the predetermined distance. When the imaging control unit 12A determines that the distance is equal to or less than the predetermined distance, it generates a control command for imaging the person to be authenticated and transmits it to the camera C1.

The imaging control unit 12A detects the person's face from the captured image transmitted from the camera C1 and extracts the facial region in which the detected face appears. Based on the information of the extracted facial region, the lip print extraction unit 14 detects the parts constituting the face of the person to be authenticated, such as the eyes, nose, and mouth (lips), and generates a lip image by cutting out (extracting) the lip region (that is, the region in which the mouth appears) from the captured image.

Although the example shown in FIG. 4 shows the camera C1 and the terminal device P1A configured as separate devices, the camera C1 may be configured integrally with the terminal device P1A and may be configured to implement the various functions of the terminal device P1A.
In the operation procedure shown in FIG. 2, the biometric authentication system 100A according to the modification of Embodiment 1 has the ranging sensor SS1 execute the processing of step St12, the terminal device P1A execute the processing of steps St13 to St17, and the server S1 execute the processing of steps St18 to St19 (see FIG. 5). In the modification of Embodiment 1, the processing of step St11 is omitted.

As described above, the biometric authentication system 100A according to the modification of Embodiment 1 can more efficiently acquire a captured image from which a lip print used for biometric authentication can be obtained. By executing biometric authentication using this captured image, the biometric authentication system 100A can also further improve authentication accuracy.
(Embodiment 2)

The biometric authentication system 100 according to Embodiment 1 described above estimates the distance between the camera C1 and the person to be authenticated with the terminal device P1, which executes the zoom magnification and imaging control of the camera C1. In the biometric authentication system 200 according to Embodiment 2, described below, a ranging sensor SS2 estimates the distance between the camera C1B and the person to be authenticated and executes the zoom magnification and imaging control of the camera C1B.
In the description of the biometric authentication system 200 according to Embodiment 2, the same reference numerals are given to the same configurations as those of the biometric authentication system 100 according to Embodiment 1 or the biometric authentication system 100A according to the modification of Embodiment 1, and their description is omitted.

The internal configuration of the biometric authentication system 200 according to Embodiment 2 will be described with reference to FIG. 6. FIG. 6 is a block diagram showing an internal configuration example of the biometric authentication system 200 according to Embodiment 2.
The biometric authentication system 200 according to Embodiment 2 includes a camera C1B, a ranging sensor SS2, a terminal device P1B, and a server S1. The biometric authentication system 200 measures the distance between the camera C1B and the person to be authenticated with the ranging sensor SS2, and when it determines that the measured distance is one at which a captured image from which a lip print can be obtained can be captured, it images the person with the camera C1B.

Using the terminal device P1B, the biometric authentication system 200 generates a lip image by extracting (cutting out) from the captured image the lip region in which the person's lips appear, and acquires a lip print based on the generated lip image. Using the server S1, the biometric authentication system 200 collates the lip print acquired by the terminal device P1B with the lip prints of at least one person registered in the database DB, thereby identifying the person to be authenticated or determining whether the person to be authenticated is registered in the database DB.

The camera C1B is connected to the ranging sensor SS2 and the terminal device P1B so that data can be transmitted and received. The camera C1B is controlled by the ranging sensor SS2 to image the person to be authenticated, and transmits the captured image to the terminal device P1B.

The ranging sensor SS2 is implemented by, for example, a computer or device configured to include a distance measurement unit 30 such as a TOF sensor. The ranging sensor SS2 is connected to the camera C1B so that data can be transmitted and received. The ranging sensor SS2 includes the distance measurement unit 30, a communication unit 31, a processor 32, and a memory 33. The distance measurement unit 30 is not essential and may be omitted. In that case, the ranging sensor SS2 estimates the distance between the camera C1B and the person to be authenticated based on the size (area) of the person's facial region in the image captured by the camera C1B.
The distance measurement unit 30 measures the distance between the camera C1B and the person to be authenticated, and outputs the measured distance information to the processor 32.

The communication unit 31 is connected to the camera C1B so as to enable wireless or wired communication, and transmits and receives data. The wireless communication referred to here is, for example, communication via a wireless LAN such as Wi-Fi (registered trademark). The communication unit 31 transmits various control commands output from the processor 32 (for example, a command to adjust the zoom magnification or a command to execute imaging processing) to the camera C1B, outputs the captured image transmitted from the camera C1B to the processor 32, and transmits the captured image to the terminal device P1B.

The processor 32 is configured using, for example, a CPU or an FPGA, and cooperates with the memory 33 to perform various processes and controls. Specifically, the processor 32 refers to the program held in the memory 33 and implements the functions of each unit by executing the program.
Based on the distance information output from the distance measurement unit 30, the processor 32 determines whether the distance between the camera C1B and the person to be authenticated is equal to or less than the predetermined distance. When the processor 32 determines that the distance is equal to or less than the predetermined distance, it generates a control command for imaging the person to be authenticated, outputs it to the communication unit 31, and has it transmitted to the camera C1B.

On the other hand, when the processor 32 determines that the distance between the camera C1B and the person to be authenticated is not equal to or less than the predetermined distance, it determines the zoom magnification of the camera C1B based on the difference between the measured distance and the predetermined distance. The processor 32 generates a control command that changes the currently set zoom magnification to the newly determined zoom magnification and causes the person to be imaged again, outputs it to the communication unit 31, and has it transmitted to the camera C1B.

When the distance measurement unit 30 is omitted, the processor 32 acquires the captured image transmitted from the camera C1B and detects the facial region of the person to be authenticated in the acquired image. The processor 32 estimates the distance between the camera C1B and the person to be authenticated based on the size (area, number of pixels) of the person's facial region relative to the entire captured image (total number of pixels), and determines whether the estimated distance is equal to or less than the predetermined distance.

When the processor 32 determines that the estimated distance is equal to or less than the predetermined distance, it transmits the captured image transmitted from the camera C1B to the terminal device P1B. On the other hand, when the processor 32 determines that the estimated distance is not equal to or less than the predetermined distance, it determines the zoom magnification of the camera C1B based on the difference between the estimated distance and the predetermined distance. The processor 32 generates a control command that changes the zoom magnification currently set in the camera C1B to the newly determined zoom magnification and causes the person to be imaged again, outputs the generated control command to the communication unit 31, and has it transmitted to the camera C1B.
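The sensor-side gating just described can be summarized in a short sketch: prefer the TOF reading when the distance measurement unit 30 is present, fall back to the image-based estimate otherwise, then either forward the frame to the terminal device or request a re-zoomed capture. It reuses estimate_distance_mm and decide_zoom from the earlier sketches; the target distance and action names are assumptions.

```python
def sensor_gate(frame_bgr, face_cascade, tof_distance_mm=None,
                predetermined_mm=400.0, current_zoom=1.0):
    """Decide whether to forward the frame or request a re-zoomed capture."""
    # Prefer the TOF measurement; fall back to the image-based estimate.
    distance = tof_distance_mm
    if distance is None:
        distance = estimate_distance_mm(frame_bgr, face_cascade)
    if distance is None:
        return {"action": "retry"}  # no face detected: capture again
    ok, new_zoom = decide_zoom(distance, predetermined_mm, current_zoom)
    if ok:
        return {"action": "forward_frame"}          # St23 YES
    return {"action": "rezoom", "zoom": new_zoom}   # St23 NO
```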
 メモリ33は、例えばプロセッサ32の各処理を実行する際に用いられるワークメモリとしてのRAMと、プロセッサ32の動作を規定したプログラムおよびデータを格納するROMと、を有する。RAMには、プロセッサ32により生成あるいは取得されたデータもしくは情報が一時的に保存される。ROMには、プロセッサ32の動作を規定するプログラムが書き込まれている。 The memory 33 has, for example, a RAM as a work memory that is used when executing each process of the processor 32, and a ROM that stores programs and data that define the operation of the processor 32. Data or information generated or obtained by the processor 32 is temporarily stored in the RAM. A program that defines the operation of the processor 32 is written in the ROM.
 実施の形態2に係る端末装置P1Bは、通信部10Bと、プロセッサ11Bと、メモリ15と、表示部16とを含んで構成される。 A terminal device P1B according to Embodiment 2 includes a communication unit 10B, a processor 11B, a memory 15, and a display unit 16.
 通信部10Bは、測距センサSS2と、カメラC1Bとの間でそれぞれデータ送受信可能に接続され、サーバS1の通信部20との間でネットワークNWを介して無線通信または有線通信可能に接続される。 The communication unit 10B is connected to enable data transmission/reception between the distance measuring sensor SS2 and the camera C1B, and is connected to the communication unit 20 of the server S1 via the network NW to enable wireless or wired communication.
 通信部10Bは、カメラC1Bあるいは測距センサSS2から送信された撮像画像をプロセッサ11Bに出力したり、サーバS1から送信された照合結果をプロセッサ11Bに出力したりする。また、通信部10Bは、プロセッサ11Bから出力された各種制御指令(例えば、ズーム倍率を調整させる制御指令、撮像処理を実行させる制御指令等)をカメラC1Bに送信したり、プロセッサ11Bから出力された口唇紋データをサーバS1に送信したりする。 The communication unit 10B outputs the captured image transmitted from the camera C1B or the ranging sensor SS2 to the processor 11B, and outputs the matching result transmitted from the server S1 to the processor 11B. Further, the communication unit 10B transmits various control commands output from the processor 11B (for example, a control command for adjusting the zoom magnification, a control command for executing imaging processing, etc.) to the camera C1B, and transmits lip print data output from the processor 11B to the server S1.
The processor 11B is configured using, for example, a CPU, DSP, or FPGA, and controls the operation of each unit. Working with the memory 15, the processor 11B performs the various processes and controls in an integrated manner. Specifically, the processor 11B refers to the programs and data held in the memory 15 and executes the programs, thereby implementing the functions of the units such as the lip print extraction unit 14.
Next, the operation procedure of the biometric authentication system 200 according to Embodiment 2 will be described with reference to FIG. 7. FIG. 7 is a flowchart showing an example of the operation procedure of the biometric authentication system 200 according to Embodiment 2. As shown in FIG. 7, in the biometric authentication system 200 according to Embodiment 2, the ranging sensor SS2 executes steps St21 to St23 and step St25, the terminal device P1B executes steps St24 to St27, and the server S1 executes steps St28 and St29.
Note that the operation procedure shown in FIG. 7 assumes that the distance measurement unit 30 is omitted from the configuration of the ranging sensor SS2. When the distance measurement unit 30 is not omitted, the biometric authentication system 200 according to Embodiment 2 additionally executes, after step St23, processing in which the ranging sensor SS2 transmits the captured image to the terminal device P1B and the terminal device P1B extracts a face area from the transmitted image.
The ranging sensor SS2 acquires the captured image Pt11 (see FIG. 3), including the face of the person to be authenticated, transmitted from the camera C1B (St21). The ranging sensor SS2 detects a face area Ar11 (see FIG. 3) including the person's face in the acquired image Pt11, and estimates the distance between the subject (the person to be authenticated) and the camera C1B based on the size (area) of the face area Ar11 relative to the entire captured image (St22). The ranging sensor SS2 then determines whether the estimated distance is equal to or less than a predetermined distance (St23).
When the ranging sensor SS2 determines in step St23 that the estimated distance is equal to or less than the predetermined distance (St23, YES), it transmits the captured image to the terminal device P1B.
On the other hand, when the ranging sensor SS2 determines in step St23 that the estimated distance is not equal to or less than the predetermined distance (St23, NO), it determines a new zoom magnification for the camera C1B based on the difference between the estimated distance and the predetermined distance. The ranging sensor SS2 generates a control command that changes the currently set zoom magnification to the new value and causes the person to be imaged again, and transmits the generated command to the camera C1B to control its zoom magnification.
The ranging sensor SS2 also generates a control command for displaying an imaging support screen (not shown) that asks the person to be authenticated to move closer to or farther from the camera C1B, and transmits it to the terminal device P1B. Based on the control command transmitted from the ranging sensor SS2, the terminal device P1B generates the imaging support screen and outputs it to the display unit 16 for display (St25).
After adjusting its zoom magnification based on the control command transmitted from the ranging sensor SS2, the camera C1B executes the imaging process again and transmits the newly captured image to the ranging sensor SS2.
The terminal device P1B acquires the captured image Pt12 (see FIG. 3), including the person's face, transmitted from the ranging sensor SS2. The terminal device P1B extracts the face area Ar13 (see FIG. 3) of the person in the acquired image Pt12 (St24), and further extracts, from the extracted face area Ar13, a lip area Pt14 (see FIG. 3) showing the person's lips (mouth) (St26).
The terminal device P1B generates a lip image by cropping the extracted lip area Pt14, and corrects its contrast so that the lip print can be detected from the wrinkles of the lips in the image (St27). The terminal device P1B extracts the lip wrinkles (edges) from the contrast-corrected lip image and binarizes the image (St27). The terminal device P1B then extracts the lip print of the person to be authenticated from the binarized lip image Pt15 (see FIG. 3), generates lip print data, and transmits it to the server S1.
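A minimal sketch of the lip-area preprocessing in steps St26 and St27 is shown below, assuming OpenCV. The disclosure does not name specific operators, so CLAHE for the contrast correction and the Canny detector for extracting the lip wrinkles (edges) are assumptions; the Canny output is already a 0/255 map and therefore serves as the binarized lip image here.

    import cv2
    import numpy as np

    def extract_lip_print(frame_bgr: np.ndarray, lip_box: tuple) -> np.ndarray:
        """Crop the lip area, boost its contrast, and return a binarized edge map
        of the lip wrinkles (a candidate lip print image)."""
        x, y, w, h = lip_box                         # lip area, e.g. from a face landmark detector
        lip = frame_bgr[y:y + h, x:x + w]
        gray = cv2.cvtColor(lip, cv2.COLOR_BGR2GRAY)
        # Contrast correction so that faint wrinkles become detectable.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        boosted = clahe.apply(gray)
        # Edge extraction of the lip wrinkles; Canny yields a binary (0/255) image.
        return cv2.Canny(boosted, threshold1=50, threshold2=150)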
Based on the lip print data transmitted from the terminal device P1B, the server S1 extracts a feature amount representing the individuality of the person. The server S1 collates the extracted lip print feature amount with the lip print feature amounts registered in the database DB (St28).
The server S1 calculates a matching score between the extracted lip print feature amount and each lip print feature amount registered in the database DB. The server S1 selects the highest of the calculated matching scores and determines (identifies) the person corresponding to the selected score as the same person as the person to be authenticated (St29). The server S1 generates an authentication result including the personal information of the identified person and transmits it to the terminal device P1B. The terminal device P1B generates a screen (not shown) including the authentication result transmitted from the server S1 and outputs it to the display unit 16 for display.
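The matching in steps St28 and St29 can likewise be sketched as follows. Cosine similarity as the matching score and the acceptance threshold min_score are assumptions rather than the patented algorithm; the threshold guards against identifying a person whose lip print is not registered.

    import numpy as np
    from typing import Dict, Optional

    def identify(probe: np.ndarray,
                 registry: Dict[str, np.ndarray],
                 min_score: float = 0.8) -> Optional[str]:
        """Return the ID of the registrant whose lip print feature vector scores
        highest against the probe, or None if no score clears the threshold."""
        best_id, best_score = None, -1.0
        for person_id, feat in registry.items():
            # Assumed score: cosine similarity between feature vectors.
            score = float(np.dot(probe, feat) /
                          (np.linalg.norm(probe) * np.linalg.norm(feat)))
            if score > best_score:
                best_id, best_score = person_id, score
        return best_id if best_score >= min_score else None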
As described above, the biometric authentication system 200 according to Embodiment 2 can more efficiently acquire a captured image from which a lip print used for biometric authentication can be obtained. By performing biometric authentication with such an image, the biometric authentication system 200 can further improve authentication accuracy.
As described above, the biometric authentication systems 100, 100A, and 200 according to Embodiment 1, the modification of Embodiment 1, and Embodiment 2 each include a terminal device P1, P1A, or P1B and a server S1 capable of communicating with the terminal device. The terminal devices P1, P1A, and P1B output a control command (an example of an imaging command) to the camera C1 or C1B that images a person, acquire from the camera a captured image in which the person is imaged, extract the face area of the person appearing in the captured image, generate a control command based on the size of the face area, output the generated command to the camera, extract the person's lip print from the captured image, and transmit it to the server S1. The server S1 identifies the person based on the extracted lip print.
Thus, the biometric authentication systems 100, 100A, and 200 according to Embodiment 1, the modification of Embodiment 1, and Embodiment 2 estimate the distance between the camera and the person based on the size of the face area and, when determining that the estimated distance is equal to or less than a predetermined distance, generate the control command, which allows them to more efficiently acquire a captured image from which a lip print used for biometric authentication can be obtained.
Also, as described above, in the biometric authentication system 100 according to Embodiment 1, when the server S1 determines, based on collation of the extracted lip print with the registered lip print of at least one registered person, that there is a registered lip print identical or similar to the extracted lip print, it identifies the person corresponding to the identical or similar registered lip print as the person of the lip print. This allows the biometric authentication system 100 according to Embodiment 1 to perform biometric authentication based on the lip print appearing in the image captured by the camera C1.
Also, as described above, the terminal device P1 of the biometric authentication system 100 according to Embodiment 1 estimates the distance between the camera C1 and the person based on the size of the face area, and generates the control command when determining that the estimated distance is equal to or less than the predetermined distance. This allows the biometric authentication system 100 according to Embodiment 1 to determine whether the image captured by the camera C1 is one from which a lip print used for biometric authentication can be obtained.
Also, as described above, when the terminal device P1 of the biometric authentication system 100 according to Embodiment 1 determines that the estimated distance is not equal to or less than the predetermined distance, it generates a control command (an example of a zoom command) for changing the zoom magnification of the camera C1 and outputs the generated command to the camera C1. Thus, when the biometric authentication system 100 according to Embodiment 1 determines that the image captured by the camera C1 is not one from which a lip print used for biometric authentication can be obtained, it changes the zoom magnification of the camera C1 and can thereby acquire such an image more efficiently.
Also, as described above, the terminal device P1 of the biometric authentication system 100 according to Embodiment 1 determines the zoom magnification of the camera C1 based on the difference between the estimated distance and the predetermined distance, and generates a control command that changes the current zoom magnification of the camera C1 to the determined value. By changing the zoom magnification of the camera C1 in this way, the biometric authentication system 100 according to Embodiment 1 can more efficiently acquire a captured image from which a lip print used for biometric authentication can be obtained.
Also, as described above, the biometric authentication systems 100A and 200 according to the modification of Embodiment 1 and Embodiment 2 acquire the distance between the camera C1 or C1B and the person, generate a control command based on the acquired distance, and output the generated command to the camera. This allows the biometric authentication systems 100A and 200 to determine, based on the distance between the camera C1 or C1B and the person to be authenticated, whether the camera can capture an image from which a lip print used for biometric authentication can be obtained.
Also, as described above, the biometric authentication systems 100A and 200 according to the modification of Embodiment 1 and Embodiment 2 generate the control command when determining that the acquired distance is equal to or less than the predetermined distance. This allows the biometric authentication systems 100A and 200 to have the camera C1 or C1B image the person once the distance between the camera and the person to be authenticated is determined to be one at which a lip print used for biometric authentication can be obtained. The biometric authentication systems 100A and 200 can therefore more efficiently acquire captured images from which lip prints used for biometric authentication can be obtained.
Also, as described above, the biometric authentication systems 100, 100A, and 200 according to Embodiment 1, the modification of Embodiment 1, and Embodiment 2 generate a lip image by extracting, from the captured image, a lip area including the person's lips, and extract the person's lip print from the generated lip image. This allows the biometric authentication systems 100, 100A, and 200 to obtain the lip print of the person to be authenticated from the images captured by the cameras C1 and C1B.
Also, as described above, the biometric authentication systems 100, 100A, and 200 according to Embodiment 1, the modification of Embodiment 1, and Embodiment 2 binarize the lip image and extract the lip print from the binarized lip image. This allows the biometric authentication systems 100, 100A, and 200 to obtain a lip print better suited to biometric authentication.
Also, as described above, the biometric authentication systems 100, 100A, and 200 according to Embodiment 1, the modification of Embodiment 1, and Embodiment 2 correct the contrast of the lip image, extract the lip edges (that is, the lip wrinkles) from the contrast-corrected lip image, and binarize the lip image based on the extracted edges. This also allows the biometric authentication systems 100, 100A, and 200 to obtain a lip print better suited to biometric authentication.
Although various embodiments have been described above with reference to the accompanying drawings, the present disclosure is not limited to these examples. It will be apparent to those skilled in the art that various changes, modifications, substitutions, additions, deletions, and equivalents can be conceived within the scope of the claims, and it is understood that these also belong to the technical scope of the present disclosure. The constituent elements of the various embodiments described above may also be combined arbitrarily without departing from the gist of the invention.
This application is based on Japanese Patent Application No. 2022-006567, filed on January 19, 2022, the contents of which are incorporated herein by reference.
The present disclosure is useful as a biometric authentication device, a biometric authentication system, and a biometric authentication method that more efficiently acquire a captured image from which a lip print used for biometric authentication can be obtained.
10, 10A, 10B, 20, 31 communication unit
11, 11A, 11B, 21, 32 processor
12, 12A imaging control unit
13, 30 distance measurement unit
14 lip print extraction unit
15, 25, 33 memory
22 matching unit
23 feature amount extraction unit
24 score determination unit
100, 100A, 200 biometric authentication system
C1, C1B camera
DB database
P1, P1A, P1B terminal device
S1 server
SS1, SS2 ranging sensor

Claims (11)

  1.  A biometric authentication system comprising:
      a terminal device; and
      a server capable of communicating with the terminal device,
      wherein the terminal device:
      outputs an imaging command to a camera that images a person;
      acquires, from the camera, a captured image in which the person is imaged;
      extracts a face area of the person appearing in the captured image;
      generates the imaging command based on a size of the face area;
      outputs the generated imaging command to the camera; and
      extracts a lip print of the person from the captured image and transmits the lip print to the server, and
      wherein the server:
      identifies the person based on the extracted lip print.
  2.  The biometric authentication system according to claim 1,
      wherein, when the server determines, based on collation of the extracted lip print with a registered lip print of at least one registered person, that there is a registered lip print identical or similar to the extracted lip print, the server identifies a person corresponding to the identical or similar registered lip print as the person of the lip print.
  3.  The biometric authentication system according to claim 2,
      wherein the terminal device:
      estimates a distance between the camera and the person based on the size of the face area; and
      generates the imaging command when determining that the estimated distance is equal to or less than a predetermined distance.
  4.  The biometric authentication system according to claim 3,
      wherein the terminal device:
      generates a zoom command for changing a zoom magnification of the camera when determining that the estimated distance is not equal to or less than the predetermined distance; and
      outputs the generated zoom command to the camera.
  5.  The biometric authentication system according to claim 4,
      wherein the terminal device:
      determines a zoom magnification of the camera based on a difference between the estimated distance and the predetermined distance; and
      generates the zoom command for changing a current zoom magnification of the camera to the determined zoom magnification.
  6.  The biometric authentication system according to claim 1,
      wherein the terminal device:
      acquires a distance between the camera and the person;
      generates the imaging command based on the acquired distance; and
      outputs the generated imaging command to the camera.
  7.  The biometric authentication system according to claim 6,
      wherein the terminal device generates the imaging command when determining that the acquired distance is equal to or less than a predetermined distance.
  8.  The biometric authentication system according to claim 1,
      wherein the terminal device:
      generates a lip image by extracting, from the captured image, a lip area including lips of the person; and
      extracts the lip print from the generated lip image.
  9.  The biometric authentication system according to claim 8,
      wherein the terminal device binarizes the lip image and extracts the lip print from the binarized lip image.
  10.  The biometric authentication system according to claim 9,
      wherein the terminal device:
      corrects a contrast of the lip image and extracts edges of the lips from the contrast-corrected lip image; and
      binarizes the lip image based on the extracted edges of the lips.
  11.  A biometric authentication method performed by one or more computers, the method comprising:
      outputting an imaging command to a camera that images a person;
      acquiring, from the camera, a captured image in which the person is imaged;
      extracting a face area of the person appearing in the captured image;
      generating the imaging command based on a size of the face area;
      outputting the generated imaging command to the camera;
      extracting a lip print of the person from the captured image; and
      identifying the person based on the extracted lip print.
PCT/JP2022/048688 2022-01-19 2022-12-28 Biometric authentication system and biometric authentication method WO2023140099A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022006567A JP2023105624A (en) 2022-01-19 2022-01-19 Biometric authentication system and biometric authentication method
JP2022-006567 2022-01-19

Publications (1)

Publication Number Publication Date
WO2023140099A1 true WO2023140099A1 (en) 2023-07-27

Family

ID=87348659

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/048688 WO2023140099A1 (en) 2022-01-19 2022-12-28 Biometric authentication system and biometric authentication method

Country Status (2)

Country Link
JP (1) JP2023105624A (en)
WO (1) WO2023140099A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0694968A (en) * 1992-09-14 1994-04-08 Nikon Corp Automatic photographing camera
US6219639B1 (en) * 1998-04-28 2001-04-17 International Business Machines Corporation Method and apparatus for recognizing identity of individuals employing synchronized biometrics
JP2001346090A (en) * 2000-06-01 2001-12-14 Olympus Optical Co Ltd Electronic camera system and electronic camera
JP2003075717A (en) * 2001-09-06 2003-03-12 Nikon Corp Distance detecting device
WO2011068395A2 (en) * 2009-12-02 2011-06-09 Mimos Berhad A method for identity recognition based on lip image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
T. Wakasugi, M. Nishiura, K. Fukui, "Robust lip contour extraction using separability of multi-dimensional distributions," Sixth IEEE International Conference on Automatic Face and Gesture Recognition, Piscataway, NJ, USA, 17 May 2004, pp. 415-420, XP010949468, ISBN: 978-0-7695-2122-0 *

Also Published As

Publication number Publication date
JP2023105624A (en) 2023-07-31

Similar Documents

Publication Publication Date Title
US20220343677A1 (en) Image processing device, image processing method, face recognition system, program, and storage medium
WO2019232866A1 (en) Human eye model training method, human eye recognition method, apparatus, device and medium
WO2019232862A1 (en) Mouth model training method and apparatus, mouth recognition method and apparatus, device, and medium
US8929595B2 (en) Dictionary creation using image similarity
US20160210493A1 (en) Mobility empowered biometric appliance a tool for real-time verification of identity through fingerprints
US9792484B2 (en) Biometric information registration apparatus and biometric information registration method
JP2011100229A (en) Image processing apparatus and image processing method
US8238604B2 (en) System and method for validation of face detection in electronic images
JP6071002B2 (en) Reliability acquisition device, reliability acquisition method, and reliability acquisition program
US10423817B2 (en) Latent fingerprint ridge flow map improvement
JP7151875B2 (en) Image processing device, image processing method, and program
US20060120578A1 (en) Minutiae matching
JP2872776B2 (en) Face image matching device
JP2005275935A (en) Terminal device
JP2018151833A (en) Identifier learning device and identifier learning method
JP2008257329A (en) Face recognition apparatus
KR102366777B1 (en) Apparatus and method for domain adaptation-based object recognition
US9846807B1 (en) Detecting eye corners
JP6795243B1 (en) Nose print matching device and method and program
WO2023140099A1 (en) Biometric authentication system and biometric authentication method
JP2015219681A (en) Face image recognition device and face image recognition program
KR20230007250A (en) UBT system using face contour recognition AI and method thereof
WO2023140098A1 (en) Biometric authentication device, biometric authentication method, and biometric authentication system
JP2007213582A (en) Method and device for specifying place of object part in digital image data
KR102497805B1 (en) System and method for companion animal identification based on artificial intelligence

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22922249

Country of ref document: EP

Kind code of ref document: A1