WO2023084667A1 - Authentication device, engine generation device, authentication method, engine generation method, and recording medium - Google Patents
- Publication number
- WO2023084667A1 (PCT/JP2021/041473)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- subject
- image
- thermal
- living body
- person
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
Definitions
- This disclosure relates to the technical field of, for example, an authentication device, an authentication method, and a recording medium capable of authenticating a subject appearing in a person image, and of an engine generation device, an engine generation method, and a recording medium capable of generating a determination engine for determining whether or not the subject appearing in the person image is a living body.
- Patent Document 1 describes an example of an authentication device capable of authenticating a subject appearing in a person image: a device in which the subject is authenticated using a facial image of the subject obtained from a camera, and the temperature distribution of the subject's face obtained from thermography is used to determine whether or not the subject is a living body.
- Patent Documents 2 to 5 are listed as prior art documents related to this disclosure.
- The object of this disclosure is to provide an authentication device, an engine generation device, an authentication method, an engine generation method, and a recording medium that aim to improve upon the techniques described in the prior art documents.
- One aspect of the authentication device includes: authentication means for authenticating a subject using a person image generated by imaging the subject with a visible camera at a first time; and determination means for determining whether or not the subject is a living body using a plurality of thermal images generated by imaging the subject with a thermal camera at a second time closest to the first time and at a third time before or after the second time, among a plurality of times at which the thermal camera images the subject.
- One aspect of the engine generation device is an engine generation device that generates a determination engine for determining whether or not a subject is a living body using a thermal image generated by imaging the subject with a thermal camera. The device includes: extraction means for extracting, as an extracted image, at least one sample image from a learning data set including a plurality of sample images that show the body surface temperature distribution of a sample person and in which a region of interest to be focused on for determining whether or not the sample person is a living body is set; image generation means for generating a learning image by changing, based on an imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and the part of the sample person to be focused on for determining whether or not the sample person is a living body; and engine generation means for generating the determination engine by performing machine learning using the learning image.
- One aspect of the authentication method includes: authenticating a subject using a person image generated by imaging the subject with a visible camera at a first time; and determining whether or not the subject is a living body using a plurality of thermal images generated by imaging the subject with a thermal camera at a second time closest to the first time and at a third time before or after the second time, among a plurality of times at which the thermal camera images the subject.
- One aspect of the engine generation method is an engine generation method for generating a determination engine for determining whether or not a subject is a living body using a thermal image generated by imaging the subject with a thermal camera. The method includes: extracting, as an extracted image, at least one sample image from a learning data set including a plurality of sample images that show the body surface temperature distribution of a sample person and in which a region of interest to be focused on for determining whether or not the sample person is a living body is set; generating a learning image by changing, based on the imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and the part of the sample person to be focused on for determining whether or not the sample person is a living body; and generating the determination engine by performing machine learning using the learning image.
- One aspect of the recording medium is a recording medium storing a computer program for causing a computer to execute an authentication method including: authenticating a subject using a person image generated by imaging the subject with a visible camera at a first time; and determining whether or not the subject is a living body using a plurality of thermal images generated by imaging the subject with a thermal camera at a second time closest to the first time and at a third time before or after the second time, among a plurality of times at which the thermal camera images the subject.
- Another aspect of the recording medium is a recording medium recording a computer program for causing a computer to execute an engine generation method for generating a determination engine for determining whether or not a subject is a living body using a thermal image generated by imaging the subject with a thermal camera, the method including: extracting, as an extracted image, at least one sample image from a learning data set including a plurality of sample images that show the body surface temperature distribution of a sample person and in which a region of interest to be focused on for determining whether or not the sample person is a living body is set; generating a learning image by changing, based on an imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and the part of the sample person to be focused on for determining whether or not the sample person is a living body; and generating the determination engine by performing machine learning using the learning image.
- FIG. 1 is a block diagram showing the configuration of an authentication device according to the first embodiment.
- FIG. 2 is a block diagram showing the configuration of the engine generation device in the second embodiment.
- FIG. 3 is a block diagram showing the configuration of an authentication system according to the third embodiment.
- FIG. 4 is a block diagram showing the configuration of an authentication device according to the third embodiment.
- FIG. 5 is a flow chart showing the flow of the authentication operation performed by the authentication device according to the third embodiment.
- FIG. 6 shows an example of a person image.
- FIG. 7 is a timing chart showing the relationship between the authentication time and the time of interest (in particular, the closest time).
- FIG. 8 shows the relationship between the face area of the person image and the attention area of the thermal image.
- FIG. 9 is a timing chart showing the relationship between the authentication time and the time of interest (especially before and after) in the first modified example.
- FIG. 10 shows the relationship between the face area of the person image and the attention area of the thermal image.
- FIG. 11 is a flow chart showing the flow of the authentication operation in the second modified example.
- FIG. 12 shows how a region of interest moves within a thermal image.
- FIGS. 13A and 13B are graphs showing the temperature distribution in a pixel row of a thermal image.
- FIG. 14 shows a plurality of thermal images respectively corresponding to a plurality of times of interest.
- FIG. 15 is a block diagram showing the configuration of an authentication system according to the fourth embodiment.
- FIG. 16 is a block diagram showing the configuration of the engine generation device in the fourth embodiment.
- FIG. 17 is a flow chart showing the flow of the engine generation operation performed by the engine generation device in the fourth embodiment.
- FIG. 18 shows an example of the data structure of the learning data set.
- FIGS. 19A and 19B show examples of learning images generated from extracted images.
- FIGS. 20A and 20B show examples of learning images generated from extracted images.
- FIG. 21 shows an example of a learning image.
- Embodiments of an authentication device, an engine generation device, an authentication method, an engine generation method, and a recording medium will be described below.
- FIG. 1 is a block diagram showing the configuration of an authentication device 1000 according to the first embodiment.
- the authentication device 1000 includes an authentication unit 1001 and a determination unit 1002.
- the authentication unit 1001 authenticates a target person using a human image generated by capturing an image of the target person with a visible camera at a first time.
- The determination unit 1002 determines whether or not the subject is a living body using a plurality of thermal images generated by a thermal camera imaging the subject at a second time closest to the first time and at a third time before or after the second time, among a plurality of times at which the thermal camera imaged the subject.
- As a result, the authentication device 1000 can determine whether or not the subject is a living body with higher accuracy than an authentication device of a comparative example that determines whether or not the subject is a living body without considering the first time at which the visible camera imaged the subject.
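The selection of the second and third times described above can be sketched in code. This is an illustrative reading of the embodiment, not the patent's implementation; the function name and the list-based representation of imaging times are assumptions.

```python
def select_thermal_times(first_time, thermal_times):
    """Pick the thermal-imaging time closest to the authentication time
    (the second time) and one neighboring time (the third time).

    thermal_times must contain at least two sorted timestamps."""
    # Second time: the thermal-imaging time closest to the first time.
    idx = min(range(len(thermal_times)),
              key=lambda i: abs(thermal_times[i] - first_time))
    second = thermal_times[idx]
    # Third time: an imaging time immediately before or after the second time.
    neighbors = []
    if idx > 0:
        neighbors.append(thermal_times[idx - 1])
    if idx < len(thermal_times) - 1:
        neighbors.append(thermal_times[idx + 1])
    third = min(neighbors, key=lambda t: abs(t - second))
    return second, third
```

The thermal images captured at the two returned times would then be passed to the determination unit.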
- FIG. 2 is a block diagram showing the configuration of the engine generation device 2000 according to the second embodiment.
- the engine generation device 2000 is a device capable of generating a determination engine for determining whether or not a subject is a living body using a thermal image generated by imaging the subject with a thermal camera.
- the determination engine may be used, for example, by an authentication device that uses a thermal image to determine whether a subject is living.
- the engine generation device 2000 includes an extraction unit 2001, an image generation unit 2002, and an engine generation unit 2003, as shown in FIG.
- The extraction unit 2001 extracts, as an extracted image, at least one sample image from a learning data set including a plurality of sample images that show the body surface temperature distribution of a sample person and in which a region of interest to be focused on for determining whether or not the sample person is a living body is set.
- The image generation unit 2002 uses the extracted image to generate a learning image. Specifically, the image generation unit 2002 generates the learning image by changing, based on the imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and the part of the sample person to be focused on for determining whether or not the sample person is a living body.
- the engine generation unit 2003 generates a determination engine by performing machine learning using learning images.
- As a result, the engine generation device 2000 can generate a determination engine capable of determining with high accuracy whether or not the subject is a living body.
- The learning image reflects information about the imaging environment in which the thermal camera images the subject. Therefore, by performing machine learning using learning images that reflect information about the imaging environment, the engine generation device 2000 can generate a determination engine that reflects information about the imaging environment. For example, by performing machine learning using a learning image that reflects information about a specific imaging environment, the engine generation device 2000 can generate a determination engine ENG for determining, using a thermal image generated in that specific imaging environment, whether or not the subject is a living body.
- As a result, compared with a case where a determination engine that does not reflect information about the specific imaging environment is used, an authentication apparatus using this determination engine can determine with high accuracy whether or not the subject is a living body from a thermal image generated by imaging the subject with the thermal camera in that specific imaging environment. In this way, the engine generation device 2000 can generate a determination engine capable of determining with high accuracy whether or not the subject is a living body.
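The positional-relationship modification described above can be illustrated with a minimal sketch: a learning image is derived from an extracted image by translating the thermal content relative to the fixed region of interest, mimicking, for example, an imaging environment in which the camera and the subject are misaligned. The helper below is hypothetical (the patent does not specify the transformation) and assumes the thermal image is a 2-D NumPy array of temperatures.

```python
import numpy as np

def shift_attention_region(extracted_image, dx, dy, fill_temp=20.0):
    """Generate a learning image by translating the thermal content by
    (dx, dy) pixels, changing the positional relationship between the
    fixed region of interest and the sample person's part of interest.
    Pixels shifted in from outside the frame are filled with a
    background temperature (an assumed ambient value)."""
    h, w = extracted_image.shape
    out = np.full((h, w), fill_temp, dtype=extracted_image.dtype)
    # Source and destination windows for the translated content.
    src_y = slice(max(0, -dy), min(h, h - dy))
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_y = slice(max(0, dy), min(h, h + dy))
    dst_x = slice(max(0, dx), min(w, w + dx))
    out[dst_y, dst_x] = extracted_image[src_y, src_x]
    return out
```

Many such shifted variants, together with the original extracted images, would form the training material for the machine learning step.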
- FIG. 3 is a block diagram showing the configuration of the authentication system SYS3 in the third embodiment.
- the authentication system SYS3 includes a visible camera 1, a thermal camera 2, and an authentication device 3.
- the visible camera 1 and the authentication device 3 may be able to communicate with each other via the communication network NW.
- the thermal camera 2 and the authentication device 3 may be able to communicate with each other via the communication network NW.
- the communication network NW may include a wired communication network.
- Communication network NW may include a wireless communication network.
- the visible camera 1 is an imaging device capable of optically imaging a subject positioned within the imaging range of the visible camera 1 .
- the visible camera 1 is an imaging device capable of optically imaging a subject by detecting visible light from the subject.
- the visible camera 1 captures an image of the subject, thereby generating a person image IMG_P representing the subject captured by the visible camera 1 .
- the person image IMG_P representing the subject is typically an image in which the subject P is captured.
- The "person image IMG_P in which the subject is captured" may include an image generated by the visible camera 1 imaging a subject who does not intend for the visible camera 1 to image him or her.
- The "person image IMG_P in which the subject is captured" may also include an image generated by the visible camera 1 imaging a subject who does intend for the visible camera 1 to image him or her. The visible camera 1 transmits the generated person image IMG_P to the authentication device 3 via the communication network NW.
- the thermal camera 2 is an image capturing device capable of capturing an image of a target person located within the image capturing range of the thermal camera 2.
- the thermal camera 2 generates a thermal image IMG_T representing the body surface temperature distribution of the subject captured by the thermal camera 2 by capturing an image of the subject.
- the thermal image IMG_T may be an image showing the subject's body surface temperature distribution in color or gradation.
- The thermal image IMG_T representing the body surface temperature of the subject may typically be an image in which the subject is substantially captured in the form of the subject's body surface temperature distribution.
- The "thermal image IMG_T in which the subject is captured" may include an image generated by the thermal camera 2 imaging a subject who does not intend for the thermal camera 2 to image him or her.
- The "thermal image IMG_T in which the subject is captured" may also include an image generated by the thermal camera 2 imaging a subject who does intend for the thermal camera 2 to image him or her. The thermal camera 2 transmits the generated thermal image IMG_T to the authentication device 3 via the communication network NW.
- the visible camera 1 and the thermal camera 2 are aligned so that the visible camera 1 and the thermal camera 2 can image the same subject. That is, the visible camera 1 and the thermal camera 2 are aligned so that the imaging range of the visible camera 1 and the imaging range of the thermal camera 2 at least partially overlap. For this reason, the subject who appears in the person image IMG_P generated by the visible camera 1 during a certain time period is normally captured in the thermal image IMG_T generated by the thermal camera 2 during the same time period. That is, the person image IMG_P generated by the visible camera 1 and the thermal image IMG_T generated by the thermal camera 2 in a certain time period include the same subject.
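Given the alignment described above, a face area detected in the person image can be mapped to a corresponding attention area in the thermal image (the relationship FIG. 8 depicts). A minimal sketch, assuming the calibration reduces to a per-axis scale and offset between the two image coordinate systems; the function and its parameters are illustrative, not part of the disclosure.

```python
def map_face_area_to_thermal(face_box, scale_x, scale_y, offset_x, offset_y):
    """Map a face bounding box (x, y, w, h) in person-image coordinates to
    an attention area in thermal-image coordinates. Assumes the two aligned
    cameras are related by per-axis scale and offset values obtained from
    a (hypothetical) calibration step."""
    x, y, w, h = face_box
    return (x * scale_x + offset_x,
            y * scale_y + offset_y,
            w * scale_x,
            h * scale_y)
```

A real installation might instead use a full homography between the two views; the per-axis form above is only the simplest possible calibration model.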
- the authentication device 3 acquires the person image IMG_P from the visible camera 1.
- The authentication device 3 uses the acquired person image IMG_P to perform an authentication operation for authenticating the subject appearing in the person image IMG_P. That is, the authentication device 3 uses the acquired person image IMG_P to determine whether or not the subject appearing in the person image IMG_P is the same as a pre-registered person (hereinafter referred to as a "registered person"). If it is determined that the subject appearing in the person image IMG_P is the same as the registered person, the authentication device 3 determines that the subject has been successfully authenticated. On the other hand, if it is determined that the subject appearing in the person image IMG_P is not the same as the registered person, the authentication device 3 determines that authentication of the subject has failed.
- However, the authentication device 3 may determine that the subject has been successfully authenticated even though the registered person is not actually present in front of the visible camera 1, just as in the case where the registered person is present in front of the visible camera 1. In other words, a person with malicious intent may impersonate a registered person. Therefore, as part of the authentication operation, the authentication device 3 determines whether or not the subject appearing in the person image IMG_P is a living body.
- the authentication device 3 acquires the thermal image IMG_T from the thermal camera 2 .
- the authentication device 3 uses the acquired thermal image IMG_T to determine whether or not the subject appearing in the thermal image IMG_T is a living body.
- As described above, the person image IMG_P generated by the visible camera 1 and the thermal image IMG_T generated by the thermal camera 2 during the same time period include the same subject. Therefore, the operation of determining whether or not the subject appearing in the thermal image IMG_T is a living body is equivalent to the operation of determining whether or not the subject appearing in the person image IMG_P is a living body.
- Such an authentication system SYS3 may be used, for example, to manage entry/exit of a subject to/from a restricted area.
- the restricted area is an area in which persons who satisfy predetermined admission conditions are permitted to enter, but persons who do not satisfy predetermined admission conditions are not permitted to enter (that is, prohibited).
- For example, the authentication device 3 may authenticate the subject by determining whether or not the subject appearing in the person image IMG_P is the same as a person permitted to enter the restricted area (for example, a person registered in advance as a person who satisfies the entry conditions).
- When the subject is successfully authenticated, the authentication device 3 may allow the subject to enter the restricted area.
- In this case, the authentication device 3 may set the state of an entrance/exit restriction device (for example, a gate device or a door device) capable of restricting passage of the subject to an open state in which the subject can pass through the entrance/exit restriction device.
- On the other hand, when authentication of the subject fails, the authentication device 3 may prohibit the subject from entering the restricted area.
- In this case, the authentication device 3 may set the state of the entrance/exit restriction device to a closed state in which the subject cannot pass through the entrance/exit restriction device. Furthermore, even if the subject is successfully authenticated, the authentication device 3 may prohibit the subject from entering the restricted area if it is determined that the subject appearing in the person image IMG_P is not a living body.
- each of the visible camera 1 and the thermal camera 2 may capture images of the subject attempting to enter the restricted area.
- each of the visible camera 1 and the thermal camera 2 may be arranged near the entrance/exit restriction device, and may image a subject positioned near the entrance/exit restriction device in order to enter the restricted area.
- each of the visible camera 1 and the thermal camera 2 may capture an image of the subject moving toward the entrance/exit restriction device.
- the visible camera 1 and the thermal camera 2 may capture images of a subject moving toward the visible camera 1 and the thermal camera 2 placed near the entrance/exit restriction device, respectively.
- each of the visible camera 1 and the thermal camera 2 may capture an image of the subject standing still in front of the entrance/exit restriction device.
- Each of the visible camera 1 and the thermal camera 2 may capture an image of a subject standing still in front of the visible camera 1 and the thermal camera 2 arranged near the entrance/exit restriction device.
- FIG. 4 is a block diagram showing the configuration of the authentication device 3.
- As shown in FIG. 4, the authentication device 3 includes an arithmetic device 31, a storage device 32, and a communication device 33. Furthermore, the authentication device 3 may comprise an input device 34 and an output device 35. However, the authentication device 3 does not have to include at least one of the input device 34 and the output device 35. The arithmetic device 31, the storage device 32, the communication device 33, the input device 34, and the output device 35 may be connected via a data bus 36.
- the computing device 31 includes, for example, at least one of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and an FPGA (Field Programmable Gate Array).
- Arithmetic device 31 reads a computer program.
- arithmetic device 31 may read a computer program stored in storage device 32 .
- The computing device 31 may read a computer program stored in a computer-readable non-transitory recording medium using a recording medium reading device (not shown) included in the authentication device 3.
- The computing device 31 may acquire (that is, download or read) a computer program from a device (not shown) arranged outside the authentication device 3 via the communication device 33 (or another communication device).
- Arithmetic device 31 executes the read computer program.
- When the arithmetic device 31 executes the read computer program, a logical functional block for executing the operation that the authentication device 3 should perform (for example, the authentication operation described above) is realized in the arithmetic device 31.
- That is, the computing device 31 can function as a controller for realizing the logical functional blocks for executing the operations (in other words, processing) that the authentication device 3 should perform.
- FIG. 4 shows an example of logical functional blocks implemented within the computing device 31 to perform authentication operations. As shown in FIG. 4 , an authentication unit 311 , a biometric determination unit 312 , and an entrance/exit management unit 313 are realized in the computing device 31 .
- the authentication unit 311 uses the communication device 33 to acquire the person image IMG_P from the visible camera 1 via the communication network NW. Furthermore, using the acquired person image IMG_P, the authentication unit 311 determines whether or not the target person appearing in the person image IMG_P is the same as the registered person. Information about registered persons may be stored in the storage device 32 as a registered person DB 321 .
- The living body determination unit 312 uses the communication device 33 to acquire the thermal image IMG_T from the thermal camera 2 via the communication network NW. Furthermore, using the acquired thermal image IMG_T, the living body determination unit 312 determines whether or not the subject appearing in the thermal image IMG_T (that is, the subject appearing in the person image IMG_P) is a living body. For example, when the degree of similarity between the body surface temperature distribution of the subject appearing in the thermal image IMG_T and a pre-registered body surface temperature distribution (hereinafter referred to as the "registered body surface temperature distribution") is higher than a predetermined threshold, the living body determination unit 312 may determine that the subject appearing in the thermal image IMG_T is a living body. Note that this threshold may be a fixed value. Alternatively, the threshold may be changeable. For example, the threshold may be changeable by the user of the authentication system SYS3.
- Information about the registered body surface temperature distribution may be stored in the storage device 32 as the registered body surface temperature distribution DB 322 .
- For example, the registered body surface temperature distribution information may include information on a general (in particular, human) body surface temperature distribution (for example, an average human body surface temperature distribution).
- Alternatively, the registered body surface temperature distribution information may include information on the body surface temperature distribution of a registered person used for face authentication (that is, a registered person pre-registered in the registered person DB 321), in other words, the body surface temperature distribution of a specific person.
- The entry/exit management unit 313 controls the state of the entrance/exit restriction device, which can restrict passage of a subject attempting to enter the restricted area, based on the determination result of the authentication unit 311 and the determination result of the living body determination unit 312.
- However, the authentication device 3 does not have to include the entry/exit management unit 313.
- the storage device 32 can store desired data.
- the storage device 32 may temporarily store computer programs executed by the arithmetic device 31 .
- the storage device 32 may temporarily store data that is temporarily used by the arithmetic device 31 while the arithmetic device 31 is executing a computer program.
- the storage device 32 may store data that the authentication device 3 saves for a long time.
- The storage device 32 may include at least one of a RAM (Random Access Memory), a ROM (Read Only Memory), a hard disk device, a magneto-optical disk device, an SSD (Solid State Drive), and a disk array device. That is, the storage device 32 may include a non-transitory recording medium.
- In the present embodiment, the storage device 32 mainly stores the registered person DB 321, which the authentication unit 311 refers to in order to authenticate the subject, and the registered body surface temperature distribution DB 322, which the living body determination unit 312 refers to in order to determine whether or not the subject is a living body.
- the communication device 33 can communicate with each of the visible camera 1 and the thermal camera 2 via the communication network NW.
- the communication device 33 receives (that is, acquires) the person image IMG_P from the visible camera 1 via the communication network NW. Further, the communication device 33 receives (that is, acquires) the thermal image IMG_T from the thermal camera 2 via the communication network NW.
- the input device 34 is a device that accepts input of information to the authentication device 3 from the outside of the authentication device 3 .
- the input device 34 may include an operating device (for example, at least one of a keyboard, a mouse and a touch panel) that can be operated by the operator of the authentication device 3 .
- the input device 34 may include a reading device capable of reading information recorded as data on a recording medium that can be externally attached to the authentication device 3 .
- the output device 35 is a device that outputs information to the outside of the authentication device 3 .
- the output device 35 may output information as an image. That is, the output device 35 may include a display device (so-called display) capable of displaying an image showing information to be output.
- the output device 35 may output information as audio.
- the output device 35 may include an audio device capable of outputting audio (so-called speaker).
- the output device 35 may output information on paper.
- the output device 35 may include a printing device (so-called printer) capable of printing desired information on paper.
- FIG. 5 is a flow chart showing the flow of the authentication operation performed by the authentication device 3.
- the communication device 33 acquires the person image IMG_P from the visible camera 1 via the communication network NW (step S10).
- the visible camera 1 normally continues to capture images of the imaging range at a constant imaging rate.
- the visible camera 1 continues imaging the imaging range at an imaging rate of imaging the imaging range N1 times per second (N1 is an integer equal to or greater than 1). Therefore, the communication device 33 may acquire a plurality of person images IMG_P that are time-series data.
- a plurality of person images IMG_P acquired by the communication device 33 may be stored in the storage device 32 .
- the communication device 33 acquires the thermal image IMG_T from the thermal camera 2 via the communication network NW (step S11).
- the thermal camera 2 normally continues to image the imaging range at a constant imaging rate.
- the thermal camera 2 continues imaging the imaging range at an imaging rate of imaging the imaging range N2 times per second (N2 is an integer equal to or greater than 1). Therefore, the communication device 33 may acquire a plurality of thermal images IMG_T that are time-series data.
- a plurality of thermal images IMG_T acquired by the communication device 33 may be stored in the storage device 32 .
- the authentication unit 311 uses the person image IMG_P acquired in step S10 to authenticate the subject appearing in the person image IMG_P (step S12).
- the authentication unit 311 authenticates the target person using the face of the target person. That is, an example in which the authentication unit 311 performs face authentication will be described.
- the authentication unit 311 may authenticate the subject using another authentication method using the person image IMG_P.
- the authentication unit 311 may authenticate the subject using the subject's iris.
- the authentication unit 311 detects a face area FA in which the subject's face appears in the person image IMG_P, as shown in FIG. 6, which shows an example of the person image IMG_P. After that, the authentication unit 311 may extract feature points of the subject's face included in the face area FA, and may calculate the degree of similarity between those feature points and the feature points of the registered person's face. If the degree of similarity between the subject's facial feature points and the registered person's facial feature points is higher than a predetermined authentication threshold, the authentication unit 311 may determine that the subject is the same as the registered person. If the degree of similarity is lower than the predetermined authentication threshold, the authentication unit 311 may determine that the subject is not the same as the registered person.
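The threshold comparison in step S12 can be sketched as follows. This is a minimal illustration rather than the patented method: the cosine metric, the plain-vector representation of the feature points, and the threshold value 0.8 are all assumptions introduced here for clarity.

```python
import math

# Assumed value of the "predetermined authentication threshold"; the patent
# does not fix a concrete number.
AUTH_THRESHOLD = 0.8

def cosine_similarity(a, b):
    """Degree of similarity between two face-feature vectors (a stand-in
    for the feature-point comparison of the authentication unit 311)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_same_as_registered(subject_features, registered_features):
    """True corresponds to step S13: Yes; False to step S13: No."""
    return cosine_similarity(subject_features, registered_features) > AUTH_THRESHOLD
```

Any real face matcher would extract the feature vectors from the face area FA with a trained model; only the final similarity-versus-threshold decision is mirrored here.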
- if the authentication in step S12 is not successful (that is, the subject is determined not to be the same as the registered person) (step S13: No), the entrance/exit management unit 313 prohibits the subject from entering the restricted area (step S19).
- if the authentication in step S12 is successful (that is, the subject is determined to be the same as the registered person) (step S13: Yes), the living body determination unit 312 then determines whether or not the subject determined to be the same as the registered person in step S12 is a living body (steps S14 to S16).
- the biometric determination unit 312 acquires the authentication time ta (step S14).
- the authentication time ta indicates the time at which the one person image IMG_P actually used for authenticating the subject in step S12, among the plurality of person images IMG_P acquired in step S10, was captured. That is, the authentication time ta indicates the capture time of the one person image IMG_P from which were extracted the feature points determined to have a degree of similarity with the registered person's facial feature points higher than the predetermined authentication threshold.
- the living body determination unit 312 acquires, from among the plurality of thermal images IMG_T acquired in step S11, the thermal image IMG_T captured at the time of interest tb determined based on the authentication time ta acquired in step S14 (step S15). In other words, the living body determination unit 312 acquires the thermal image IMG_T captured at the time of interest tb, which is at least one of the multiple times at which the multiple thermal images IMG_T acquired in step S11 were captured (step S15). The plurality of thermal images IMG_T acquired in step S11 are stored in, for example, the storage device 32. In this case, the living body determination unit 312 may acquire the thermal image IMG_T captured at the time of interest tb from the storage device 32.
- the time of interest tb includes the closest time tb1 closest to the authentication time ta among the multiple times at which the multiple thermal images IMG_T acquired in step S11 were captured.
- a specific example of the time of interest tb (in particular, the closest time tb1) determined based on the authentication time ta will be described below with reference to FIG.
- FIG. 7 is a timing chart showing the relationship between the authentication time ta and the attention time tb (in particular, the nearest time tb1).
- the visible camera 1 images the subject at each of time t11, time t12, time t13, time t14 and time t15.
- the thermal camera 2 captures images of the subject at time t21, time t22, time t23, and time t24.
- the visible camera 1 and the thermal camera 2 do not always capture images of the subject at timings synchronized with each other.
- that is, the times t11, t12, t13, t14, and t15 at which the visible camera 1 images the subject are not necessarily synchronized with the times t21, t22, t23, and t24 at which the thermal camera 2 images the subject.
- the authentication unit 311 has authenticated the subject using the person image IMG_P generated by imaging the subject at time t13.
- the authentication time ta is time t13.
- the time t23 closest to the time t13 (that is, the difference from the time t13 is the smallest) becomes the closest time tb1.
- the living body determination unit 312 acquires the thermal image IMG_T captured at time t23, which is the closest time tb1.
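The selection of the closest time tb1 can be sketched as a nearest-timestamp lookup. The timestamps below are hypothetical values in seconds, chosen only to mirror the t13/t23 relationship of FIG. 7; the real capture times would come from the two cameras.

```python
from bisect import bisect_left

def closest_capture_time(auth_time, thermal_times):
    """Return the thermal-capture time whose difference from the
    authentication time ta is smallest (the closest time tb1).
    thermal_times is assumed to be sorted in ascending order."""
    i = bisect_left(thermal_times, auth_time)
    # Only the neighbors around the insertion point can be closest.
    candidates = thermal_times[max(0, i - 1):i + 1]
    return min(candidates, key=lambda t: abs(t - auth_time))

# Illustrative timestamps: authentication at ta = t13 = 1.30 s,
# thermal captures at t21..t24.
thermal_times = [0.40, 0.90, 1.40, 1.90]
tb1 = closest_capture_time(1.30, thermal_times)  # the time playing the role of t23
```

Because the thermal images arrive as time-series data, a binary search over the stored capture times keeps this lookup cheap even at high imaging rates.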
- the living body determination unit 312 uses the thermal image IMG_T acquired in step S15 to determine whether or not the subject appearing in the thermal image IMG_T is a living body (step S16).
- there is a high possibility that the thermal image IMG_T acquired in step S15 includes the subject appearing in the person image IMG_P (that is, the subject determined to be the same as the registered person in step S12).
- this is because the thermal image IMG_T acquired in step S15 is generated by the thermal camera 2 imaging the subject at the time of interest tb (in particular, the closest time tb1) determined based on the authentication time ta. Therefore, the operation of determining whether or not the subject appearing in the thermal image IMG_T acquired in step S15 is a living body is equivalent to the operation of determining whether or not the subject determined to be the same as the registered person in step S12 is a living body.
- in order to determine whether or not the subject is a living body, the living body determination unit 312, as shown in FIG. 8, specifies the area in the thermal image IMG_T corresponding to the face area FA detected for authenticating the subject as the attention area TA to which attention should be paid in order to determine whether or not the subject is a living body.
- visible camera 1 and thermal camera 2 are aligned such that visible camera 1 and thermal camera 2 can image the same subject. That is, the visible camera 1 and the thermal camera 2 are aligned so that the imaging range of the visible camera 1 and the imaging range of the thermal camera 2 at least partially overlap.
- the first area in the person image IMG_P and the second area in the thermal image IMG_T in which the same scene as the first area is captured correspond to each other.
- using a projective transformation matrix or the like based on the positional relationship between the visible camera 1 and the thermal camera 2, the living body determination unit 312 can specify the area in the thermal image IMG_T corresponding to the face area FA of the person image IMG_P (that is, the attention area TA, which is assumed to include the same scene as the scene reflected in the face area FA).
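The mapping from the face area FA to the attention area TA can be sketched with a 3x3 projective transformation. The matrix H below is hypothetical (a half-resolution thermal sensor shifted by a few pixels); in practice it would be obtained from the calibration of the aligned cameras.

```python
def project_point(H, x, y):
    """Apply a 3x3 homography H (nested list) to coordinates (x, y) in
    the person image, returning coordinates in the thermal image."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

def face_area_to_attention_area(H, face_area):
    """Project the corners of the face area FA (x, y, width, height) and
    return their bounding box in the thermal image as the attention area TA."""
    x, y, w, h = face_area
    corners = [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]
    pts = [project_point(H, cx, cy) for cx, cy in corners]
    us = [p[0] for p in pts]
    vs = [p[1] for p in pts]
    return min(us), min(vs), max(us) - min(us), max(vs) - min(vs)

# Hypothetical calibration: thermal image at half the person-image
# resolution, offset by (10, 5) pixels.
H = [[0.5, 0.0, 10.0],
     [0.0, 0.5, 5.0],
     [0.0, 0.0, 1.0]]
ta = face_area_to_attention_area(H, (100, 80, 60, 60))
```

For purely affine calibrations like H above, the projected rectangle stays a rectangle; a general homography would warp it, which is why the bounding box of the four corners is taken.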
- the living body determination unit 312 determines whether or not the subject is a living body based on the temperature distribution within the attention area TA.
- the operation of determining whether or not the subject is a living body based on the temperature distribution in the attention area TA is the body surface temperature distribution of the subject (in particular, determining whether or not the subject is a living body). This is equivalent to the operation of determining whether or not the subject is a living body based on the body surface temperature distribution of the face, which is an example of the attention part of the subject to which attention should be paid.
- if it is determined in step S16 that the subject is not a living body (step S17: No), the entrance/exit management unit 313 prohibits the subject from entering the restricted area (step S19).
- if it is determined in step S16 that the subject is a living body (step S17: Yes), the entrance/exit management unit 313 permits the subject to enter the restricted area (step S18).
- as described above, the authentication device 3 determines whether or not the subject is a living body using the thermal image IMG_T generated by the thermal camera 2 imaging the subject at the time of interest tb (in particular, the closest time tb1) determined based on the authentication time ta. For this reason, compared with an authentication device of a comparative example that determines whether or not the subject is a living body using a thermal image IMG_T generated by the thermal camera 2 imaging the subject at an arbitrary time without considering the authentication time ta, the authentication device 3 can more accurately determine whether or not the subject is a living body.
- the authentication device of the comparative example may determine whether or not the subject is a living body using a thermal image IMG_T generated by the thermal camera 2 imaging the subject at a time significantly different from the authentication time ta.
- in a thermal image IMG_T generated by the thermal camera 2 imaging the subject at a time significantly different from the authentication time ta, the subject's face may not appear in the attention area TA.
- in this case, the authentication device of the comparative example determines whether or not the subject is a living body using the temperature distribution of the attention area TA in a thermal image IMG_T in which the subject is not appropriately captured (that is, a temperature distribution different from the body surface temperature distribution of the subject).
- as a result, the accuracy of determining whether or not the subject is a living body deteriorates.
- in contrast, the authentication device 3 determines whether or not the subject is a living body using the thermal image IMG_T generated by the thermal camera 2 imaging the subject at the closest time tb1, which is closest to the authentication time ta at which the visible camera 1 imaged the subject. That is, the authentication device 3 does not determine whether or not the subject is a living body using a thermal image IMG_T generated by the thermal camera 2 imaging the subject at a time significantly different from the authentication time ta. As a result, the time at which the visible camera 1 images the subject to authenticate the subject (that is, the authentication time ta) and the time at which the thermal camera 2 images the subject to determine whether or not the subject is a living body (that is, the closest time tb1) become closer.
- therefore, the authentication device 3 can appropriately determine whether or not the subject is a living body based on the temperature distribution of the attention area TA in which the subject is appropriately captured in the thermal image IMG_T (that is, the body surface temperature distribution of the subject). As a result, the possibility that the accuracy of determining whether or not the subject is a living body decreases is reduced in the authentication device 3. In other words, the authentication device 3 can determine whether or not the subject is a living body with higher accuracy than the authentication device of the comparative example.
- in the embodiment described above, the closest time tb1 closest to the authentication time ta is used as the time of interest tb.
- in the first modification, at least one time tb2 before or after the closest time tb1 is also used as the time of interest tb. That is, in the first modification, the time of interest tb may include, among the multiple times at which the multiple thermal images IMG_T acquired in step S11 of FIG. 5 were captured, the closest time tb1 and at least one time tb2 before or after the closest time tb1.
- the living body determination unit 312 thus acquires a plurality of thermal images IMG_T, including the thermal image IMG_T captured at the closest time tb1 and the thermal image IMG_T captured at the time tb2.
- here, a time before or after the closest time tb1 means at least one of a time after the closest time tb1 and a time before the closest time tb1. Further, when both the closest time tb1 and the time tb2 are used as the time of interest tb, the closest time tb1 and the at least one time tb2 constitute at least two consecutive times among the multiple times at which the thermal images IMG_T acquired in step S11 were captured. That is, the thermal image IMG_T captured at the closest time tb1 and the at least one thermal image IMG_T captured at the time tb2 constitute at least two consecutive thermal images among the plurality of thermal images IMG_T acquired in step S11 of FIG. 5.
- FIG. 9 is a timing chart showing the relationship between authentication time ta and attention time tb.
- the visible camera 1 images the subject at each of time t11, time t12, time t13, time t14, and time t15. Also, the thermal camera 2 captures images of the subject at time t21, time t22, time t23, and time t24.
- the authentication unit 311 has authenticated the subject using the person image IMG_P generated by imaging the subject at time t13.
- the authentication time ta is time t13.
- the time t23 closest to the time t13 (that is, the difference from the time t13 is the smallest) becomes the closest time tb1.
- the time t22 before the time t23 may be used as a time tb2.
- alternatively or additionally, the time t24 after the time t23 may be used as a time tb2.
- the living body determination unit 312 acquires the thermal image IMG_T captured at time t23, which is the closest time tb1. Further, the living body determination unit 312 acquires at least one of the thermal image IMG_T captured at time t22 and the thermal image IMG_T captured at time t24, each of which is a time tb2 before or after the closest time tb1.
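The acquisition of the closest time tb1 together with its neighboring times tb2 can be sketched as follows. As before, the timestamps are hypothetical values chosen only to mirror the t22/t23/t24 relationship of FIG. 9.

```python
def times_of_interest(auth_time, thermal_times):
    """Return the closest time tb1 together with the capture times
    immediately before and after it (the times tb2), as consecutive
    entries of the sorted capture-time list."""
    i = min(range(len(thermal_times)),
            key=lambda k: abs(thermal_times[k] - auth_time))
    lo, hi = max(0, i - 1), min(len(thermal_times) - 1, i + 1)
    return thermal_times[lo:hi + 1]

# Illustrative timestamps: ta = t13 = 1.30 s, thermal captures at t21..t24.
tb = times_of_interest(1.30, [0.40, 0.90, 1.40, 1.90])  # t22, t23, t24
```

The returned list is always a run of consecutive capture times, matching the requirement that tb1 and the times tb2 constitute consecutive times among the captured thermal images.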
- the living body determination unit 312 may determine whether or not the subject appearing in the thermal images IMG_T is a living body using at least one of the plurality of thermal images IMG_T acquired in step S15.
- as in the case where a single thermal image IMG_T is acquired in step S15, it is highly probable that each of the plurality of thermal images IMG_T acquired in step S15 includes the subject appearing in the person image IMG_P (that is, the subject determined to be the same as the registered person in step S12).
- this is because each thermal image IMG_T acquired in step S15 is generated by the thermal camera 2 imaging the subject at a time of interest tb determined based on the authentication time ta (specifically, the closest time tb1 or a time tb2 before or after it). Therefore, the living body determination unit 312 can appropriately determine whether or not the subject is a living body using at least one of the plurality of thermal images IMG_T acquired in step S15.
- however, not all of the plurality of thermal images IMG_T are necessarily images in which the subject's face is appropriately reflected in the attention area TA.
- for example, in the thermal image IMG_T generated by the thermal camera 2 imaging the subject at time t22, which is a time tb2, the subject's face may be appropriately located near the center of the attention area TA, while in the thermal images IMG_T generated by the thermal camera 2 imaging the subject at time t23, which is the closest time tb1, and at time t24, which is a time tb2, the subject may be out of the attention area TA.
- in this case, the living body determination unit 312 may select, from among the plurality of thermal images IMG_T, at least one thermal image IMG_T in which the subject's face is appropriately reflected near the center of the attention area TA, and may use the selected at least one thermal image IMG_T to determine whether or not the subject is a living body.
- alternatively, the living body determination unit 312 may calculate, for each of the plurality of thermal images IMG_T, the degree of similarity between the body surface temperature distribution of the subject reflected in the thermal image IMG_T and the registered body surface temperature distribution, and may use a statistic of the plurality of calculated similarities to determine whether or not the subject is a living body.
- for example, the living body determination unit 312 may determine that the subject is a living body when the average value, mode, median, or maximum value of the plurality of similarities is higher than a threshold.
- as described above, in the first modification, the authentication device 3 can determine whether or not the subject is a living body using not only the thermal image IMG_T captured at the closest time tb1 but also a thermal image IMG_T captured at a time tb2 before or after it. Therefore, even in a situation where at least part of the subject's face is reflected at a position outside the attention area TA in the thermal image IMG_T captured at the closest time tb1, the authentication device 3 can determine whether or not the subject is a living body with higher accuracy.
- FIG. 11 is a flow chart showing the flow of the authentication operation in the second modified example. It should be noted that the same step numbers are given to the processes that have already been explained, and the detailed explanation thereof will be omitted.
- the authentication device 3 also performs operations from step S10 to step S15 in the second modification.
- the living body determination unit 312 uses the thermal image IMG_T acquired in step S15 to determine whether or not the subject appearing in the thermal image IMG_T is a living body (step S16b). Specifically, first, as described above, the living body determination unit 312 specifies the area in the thermal image IMG_T corresponding to the face area FA detected for authenticating the subject as the attention area TA to be focused on in order to determine whether or not the subject is a living body (step S161b). After that, the living body determination unit 312 adjusts, within the thermal image IMG_T, the position of the attention area TA specified from the position of the face area FA (step S162b).
- after that, the living body determination unit 312 determines whether or not the subject is a living body based on the temperature distribution within the attention area TA whose position has been adjusted (step S163b). Note that the processing of steps S161b and S163b may be the same as the operation of step S16 in FIG. 5 described above.
- the living body determination unit 312 may adjust the position of the attention area TA by moving the attention area TA within the thermal image IMG_T, as shown in FIG.
- the living body determination unit 312 may adjust the position of the attention area TA in the vertical direction by moving the attention area TA along the vertical direction of the thermal image IMG_T.
- similarly, the living body determination unit 312 may adjust the horizontal position of the attention area TA by moving the attention area TA along the horizontal direction of the thermal image IMG_T. Note that FIG. 12 shows an example in which the living body determination unit 312 moves the attention area TA along the horizontal direction of the thermal image IMG_T.
- the living body determination unit 312 may adjust the position of the attention area TA based on the thermal image IMG_T acquired in step S15.
- the living body determination unit 312 may adjust the position of the attention area TA based on the temperature distribution indicated by the thermal image IMG_T acquired in step S15.
- in the thermal image IMG_T, the temperature indicated by an image portion in which the subject is captured differs from the temperature indicated by an image portion in which the subject is not captured (for example, an image portion in which the subject's background is captured).
- typically, the temperature indicated by an image portion in which the subject is captured is higher than the temperature indicated by an image portion in which the subject is not captured.
- that is, the temperature distribution indicated by the thermal image IMG_T indirectly indicates the position at which the subject is captured in the thermal image IMG_T. Therefore, based on the thermal image IMG_T acquired in step S15, the living body determination unit 312 may adjust the position of the attention area TA so that the attention area TA moves toward the position at which the subject is captured in the thermal image IMG_T.
- FIG. 13(a) shows the temperature distribution in a pixel row including a plurality of pixels arranged in the horizontal direction among the plurality of pixels forming the thermal image IMG_T, and FIG. 13(b) shows the temperature distribution in a pixel column including a plurality of pixels arranged in the vertical direction among the plurality of pixels forming the thermal image IMG_T.
- as shown in FIGS. 13(a) and 13(b), the temperature indicated by the image portion in which the subject is captured differs from the temperature indicated by the image portion in which the subject is not captured, so the position of the subject (for example, the center of the face) can be estimated from the temperature distribution of the pixel row or pixel column.
- therefore, the living body determination unit 312 may calculate the temperature distribution of the pixel row or pixel column and, based on that temperature distribution, adjust the position of the attention area TA so that the attention area TA moves toward the subject's face in the thermal image IMG_T.
- the living body determination unit 312 may adjust the position of the attention area TA so that the center of the attention area TA moves toward the center of the subject's face in the thermal image IMG_T.
- the living body determination unit 312 may adjust the position of the attention area TA so that the center of the target person's face and the center of the attention area TA match in the thermal image IMG_T.
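The adjustment of step S162b can be sketched as follows. As a simplification of the row/column temperature profiles of FIG. 13, this sketch re-centers the attention area TA on the centroid of warm pixels; the skin-temperature cutoff of 30 degrees Celsius and the toy image values are assumptions.

```python
def adjust_attention_area(thermal, ta, skin_min=30.0):
    """Shift the attention area TA (x, y, width, height) so that its
    center moves toward the centroid of the warm, face-like pixels in
    the thermal image, given as a 2-D list of temperatures in Celsius."""
    warm = [(x, y)
            for y, row in enumerate(thermal)
            for x, t in enumerate(row) if t >= skin_min]
    if not warm:
        return ta  # no warm region to anchor on; keep the original position
    cx = sum(p[0] for p in warm) / len(warm)
    cy = sum(p[1] for p in warm) / len(warm)
    x, y, w, h = ta
    # Re-center TA on the warm-pixel centroid, keeping its size.
    return cx - w / 2, cy - h / 2, w, h

# Toy 4x4 thermal image: a warm 2x2 face patch in the lower-right corner,
# while the initial TA (from the face area FA) sits at the upper left.
img = [[20, 20, 20, 20],
       [20, 20, 20, 20],
       [20, 20, 34, 34],
       [20, 20, 34, 34]]
ta = adjust_attention_area(img, (0, 0, 2, 2))
```

Summing the warm-pixel coordinates separately per axis is equivalent to locating the peak regions of the horizontal and vertical temperature distributions described above.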
- as a result, the authentication device 3 can appropriately determine whether or not the subject is a living body based on the temperature distribution of the attention area TA whose position has been adjusted (that is, the body surface temperature distribution of the subject).
- the authentication device 3 can more accurately determine whether or not the subject is a living body.
- this is because the temperature distribution indicated by the thermal image IMG_T indirectly indicates the position in the thermal image IMG_T where the subject appears.
- the authentication device 3 may determine whether the subject's face is appropriately reflected in the attention area TA in the thermal image IMG_T used for determining whether or not the subject is a living body, or whether at least part of the subject's face is captured outside the attention area TA.
- when it is determined that the subject's face is appropriately reflected in the attention area TA, the authentication device 3 may use that thermal image IMG_T to determine whether or not the subject is a living body.
- on the other hand, when it is determined that at least part of the subject's face is captured outside the attention area TA, the authentication device 3 may determine whether or not the subject is a living body using another thermal image IMG_T in which the subject's face is reflected at, or at a position relatively close to, the center of the attention area TA.
- for example, the authentication device 3 acquires a plurality of thermal images IMG_T respectively corresponding to a plurality of times of interest tb. In this case, the authentication device 3 may determine, for each of the plurality of thermal images IMG_T, whether the subject's face is appropriately reflected in the attention area TA or whether at least part of the subject's face is out of the attention area TA.
- the authentication device 3 may then select, from among the plurality of thermal images IMG_T, one thermal image IMG_T in which the subject's face is appropriately reflected in the attention area TA, and may use the selected thermal image IMG_T to determine whether or not the subject is a living body.
- for example, in the example shown in FIG. 10, in the thermal image IMG_T generated by the thermal camera 2 imaging the subject at time t22, the subject's face is appropriately captured in the attention area TA, while in the thermal images IMG_T generated by the thermal camera 2 imaging the subject at times t23 and t24, at least part of the subject's face is captured at a position outside the attention area TA.
- in this case, the authentication device 3 may select, as the one thermal image IMG_T in which the subject's face is appropriately captured in the attention area TA, the thermal image IMG_T generated by the thermal camera 2 imaging the subject at time t22.
- as a result, the authentication device 3 can determine whether or not the subject is a living body using one thermal image IMG_T in which the subject's face is appropriately reflected in the attention area TA.
- the authentication device 3 can more accurately determine whether or not the subject is a living body.
- in the above description, the authentication device 3 that authenticates the subject reflected in the person image IMG_P determines whether or not the subject is a living body using the thermal image IMG_T. However, an arbitrary spoofing determination device that does not authenticate the subject reflected in the person image IMG_P may, like the authentication device 3 described above, determine whether or not the subject reflected in the thermal image IMG_T is a living body using the thermal image IMG_T. In other words, any spoofing determination device may determine whether or not a living body is captured in the thermal image IMG_T. Even in this case, the spoofing determination device can relatively accurately determine whether or not the subject is a living body, like the authentication device 3 described above.
- as another example, at a facility where a subject whose body surface temperature is normal is permitted to stay while a subject whose body surface temperature is not normal is prohibited from staying, a thermal camera 2 may be installed to measure the body surface temperature of the subject staying at the facility.
- examples of such facilities include office buildings, public facilities, restaurants, and/or hospitals.
- at such a facility, a stay management device may be installed that uses a thermal image IMG_T generated by the thermal camera 2 imaging a subject who is about to enter the facility to determine whether or not the body surface temperature of the subject staying at the facility is normal, and that requests the subject to leave when the body surface temperature is outside the normal range. This stay management device, like the authentication device 3 described above, may determine whether or not the subject reflected in the thermal image IMG_T is a living body.
- a fourth embodiment of the authentication device, the engine generation device, the authentication method, the engine generation method, and the recording medium will be described.
- the authentication device, the engine generation device, the authentication method, the engine generation method, and the recording medium in the fourth embodiment will be described using an authentication system SYS4 to which they are applied.
- FIG. 15 is a block diagram showing the configuration of the authentication system SYS4 in the fourth embodiment.
- the same reference numerals are assigned to the components that have already been described, and detailed description thereof will be omitted.
- the authentication system SYS4 in the fourth embodiment differs from the authentication system SYS3 in the third embodiment in that the authentication system SYS4 further includes an engine generating device 4.
- Other features of authentication system SYS4 may be identical to other features of authentication system SYS3.
- the engine generation device 4 can perform an engine generation operation for generating a determination engine ENG for determining whether or not the subject is a living body using the thermal image IMG_T.
- the determination engine ENG may be any engine as long as it can determine whether or not the subject is a living body using the thermal image IMG_T.
- the determination engine ENG outputs a determination result as to whether or not the subject is a living body, based on at least a portion of the thermal image IMG_T (for example, an image portion included in the attention area TA of the thermal image IMG_T).
- for example, the determination engine ENG may be an engine that outputs a determination result as to whether or not the subject is a living body when at least a portion of the thermal image IMG_T (for example, an image portion included in the attention area TA of the thermal image IMG_T) is input. For example, the determination engine ENG may be an engine that outputs a determination result as to whether or not the subject is a living body based on a feature amount of at least a portion of the thermal image IMG_T (for example, the image portion included in the attention area TA of the thermal image IMG_T).
- for example, the determination engine ENG may be an engine that outputs a determination result as to whether or not the subject is a living body when the feature amount of at least a portion of the thermal image IMG_T (for example, the image portion included in the attention area TA of the thermal image IMG_T) is input.
- the engine generation device 4 generates the determination engine ENG by performing machine learning using an image showing the body surface temperature distribution of a person, similar to the thermal image IMG_T.
- the determination engine ENG is an engine that can be generated by machine learning (a so-called learnable learning model).
- An example of an engine that can be generated by machine learning is an engine using a neural network (eg, a learning model).
- the engine generation device 4 may transmit the generated determination engine ENG to the authentication device 3 via the communication network NW.
- the authentication device 3 may determine whether or not the subject is a living body using the thermal image IMG_T and the determination engine ENG.
- FIG. 16 is a block diagram showing the configuration of the engine generating device 4 in the fourth embodiment.
- the engine generation device 4 includes an arithmetic device 41, a storage device 42, and a communication device 43. Furthermore, the engine generation device 4 may comprise an input device 44 and an output device 45. However, the engine generation device 4 does not have to include at least one of the input device 44 and the output device 45. The arithmetic device 41, the storage device 42, the communication device 43, the input device 44, and the output device 45 may be connected via a data bus 46.
- the computing device 41 includes, for example, at least one of a CPU, a GPU, and an FPGA. The arithmetic device 41 reads a computer program. For example, the arithmetic device 41 may read a computer program stored in the storage device 42. For example, the computing device 41 may read a computer program stored in a computer-readable non-transitory recording medium, using a recording medium reading device (not shown) included in the engine generation device 4. The computing device 41 may acquire (that is, may download and read) a computer program from a device (not shown) arranged outside the engine generation device 4 via the communication device 43 (or another communication device). The arithmetic device 41 executes the read computer program.
- the arithmetic unit 41 implements logical functional blocks for executing the operations to be performed by the engine generation device 4 (for example, the above-described engine generation operation).
- the arithmetic unit 41 can function as a controller for realizing logical functional blocks for executing operations (in other words, processing) that should be performed by the engine generator 4 .
- FIG. 16 shows an example of logical functional blocks implemented within the arithmetic unit 41 for executing engine generation operations.
- an image extraction unit 411, an image generation unit 412, and an engine generation unit 413 are realized in the arithmetic unit 41.
- the operations of the image extraction unit 411, the image generation unit 412, and the engine generation unit 413 will be described in detail later with reference to FIG.
- the storage device 42 can store desired data.
- the storage device 42 may temporarily store computer programs executed by the arithmetic device 41 .
- the storage device 42 may temporarily store data temporarily used by the arithmetic device 41 while the arithmetic device 41 is executing a computer program.
- the storage device 42 may store data that the engine generation device 4 saves over a long period of time.
- the storage device 42 may include at least one of RAM, ROM, hard disk device, magneto-optical disk device, SSD and disk array device. That is, the storage device 42 may include non-transitory recording media.
- the communication device 43 can communicate with each of the visible camera 1, the thermal camera 2, and the authentication device 3 via the communication network NW. In the fourth embodiment, the communication device 43 transmits the generated determination engine ENG to the authentication device 3 via the communication network NW.
- the input device 44 is a device that receives input of information to the engine generation device 4 from the outside of the engine generation device 4 .
- the input device 44 may include an operation device (for example, at least one of a keyboard, a mouse and a touch panel) operable by an operator of the engine generation device 4 .
- the input device 44 may include a reading device capable of reading information recorded as data on a recording medium that can be externally attached to the engine generation device 4 .
- the output device 45 is a device that outputs information to the outside of the engine generating device 4 .
- the output device 45 may output information as an image.
- the output device 45 may include a display device (so-called display) capable of displaying an image showing information to be output.
- the output device 45 may output the information as voice.
- the output device 45 may include an audio device capable of outputting audio (so-called speaker).
- the output device 45 may output information on paper.
- the output device 45 may include a printing device (so-called printer) capable of printing desired information on paper.
- FIG. 17 is a flow chart showing the flow of the engine generation operation performed by the engine generation device 4.
- the image extraction unit 411 extracts at least one extraction image IMG_E from the learning data set 420 (step S41).
- the learning data set 420 may be stored, for example, in the storage device 42 (see FIG. 16).
- the image extraction unit 411 may acquire the learning data set 420 from a device external to the engine generation device 4 using the communication device 43 .
- the learning data set 420 includes multiple pieces of unit data 421 .
- Each unit data 421 includes a sample image IMG_S, attention area information 422 and correct label 423 .
- the sample image IMG_S is an image showing the body surface temperature distribution of a sample person.
- an image generated by imaging a sample person with the thermal camera 2 or another thermal camera different from the thermal camera 2 may be used as the sample image IMG_S.
- an image simulating an image generated by imaging a sample person with the thermal camera 2 or another thermal camera different from the thermal camera 2 may be used as the sample image IMG_S.
- an attention area TA is set in advance in the sample image IMG_S.
- the attention area TA set in the sample image IMG_S is an area to be focused on in order to determine whether or not the sample person is a living body.
- the attention area TA may be an area in which a part (for example, the above-described face) of the sample person to be focused for determining whether or not the sample person is a living body is reflected.
- Information about the attention area TA preset in the sample image IMG_S is included in the unit data 421 as attention area information 422 .
- the correct label 423 indicates whether or not the sample person appearing in the sample image IMG_S is a living body.
- the learning data set 420 may include a plurality of unit data 421 each including a plurality of sample images IMG_S representing body surface temperature distributions of a plurality of different sample persons.
- the learning data set 420 may include a plurality of unit data 421 each including a plurality of sample images IMG_S representing the body surface temperature distribution of the same sample person.
- the image extraction unit 411 may randomly extract at least one sample image IMG_S from the learning data set 420 as the extracted image IMG_E. In this case, the image extraction unit 411 may extract all of the multiple sample images IMG_S included in the learning data set 420 as extracted images IMG_E. Alternatively, the image extraction unit 411 may extract a part of the multiple sample images IMG_S included in the learning data set 420 as extracted images IMG_E, while not extracting another part of the multiple sample images IMG_S included in the learning data set 420.
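The random extraction above can be sketched as follows. The representation of the learning data set as a plain list of unit data, and the `num_extract` and `seed` parameters, are assumptions for illustration only.

```python
import random

def extract_images(learning_dataset, num_extract=None, seed=0):
    """Extract sample images IMG_S from the learning data set as extracted
    images IMG_E: either all of them, or a randomly chosen part of them."""
    units = list(learning_dataset)
    if num_extract is None or num_extract >= len(units):
        return units                               # extract every sample image
    return random.Random(seed).sample(units, num_extract)  # extract only a part
```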
- the image extraction unit 411 may extract at least one sample image IMG_S that satisfies a predetermined extraction condition from the learning data set 420 as the extracted image IMG_E.
- the extraction conditions may include imaging environment conditions determined based on the imaging environment in which at least one of the visible camera 1 and the thermal camera 2 images the subject. That is, the extraction conditions may include imaging environment conditions that reflect the actual imaging environment in which at least one of the visible camera 1 and the thermal camera 2 images the subject. In this case, the image extraction unit 411 may extract at least one sample image IMG_S that satisfies the imaging environment conditions from the learning data set 420 as the extraction image IMG_E.
- that is, the image extraction unit 411 may extract, from the learning data set 420 as the extracted image IMG_E, a sample image IMG_S having characteristics similar to those of the thermal image IMG_T generated by the thermal camera 2 that images the subject under the predetermined imaging environment indicated by the imaging environment condition.
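Extraction under an imaging environment condition amounts to filtering the learning data set with a predicate. The metadata fields (`subject_moving`, `distance_m`) and the example condition below are hypothetical, since the actual format of the unit data 421 is not specified here.

```python
def extract_by_condition(learning_dataset, condition):
    """Keep only the sample images whose recorded imaging environment
    satisfies the imaging environment condition (given as a predicate)."""
    return [unit for unit in learning_dataset if condition(unit)]

# hypothetical unit data: each entry carries metadata about how the
# corresponding sample image was captured
units = [
    {"id": 1, "subject_moving": True,  "distance_m": 3.0},
    {"id": 2, "subject_moving": False, "distance_m": 0.8},
    {"id": 3, "subject_moving": True,  "distance_m": 2.5},
]

def moving_far(unit):
    """Example condition reflecting a deployment in which the subject walks
    toward the cameras and is imaged from relatively far away."""
    return unit["subject_moving"] and unit["distance_m"] > 2.0
```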
- the imaging environment may include the positional relationship between the visible camera 1 and the thermal camera 2.
- the imaging environment may include the positional relationship between the visible camera 1 and the subject.
- the imaging environment may include the positional relationship between the visible camera 1 and the subject at the timing when the visible camera 1 images the subject.
- the positional relationship between the visible camera 1 and the subject may include the distance between the visible camera 1 and the subject.
- the positional relationship between the visible camera 1 and the subject may include the relationship between the direction in which the visible camera 1 faces (for example, the direction in which the optical axis of an optical system, such as a lens, provided in the visible camera 1 extends) and the direction in which the subject faces (for example, the direction in which the subject's face is facing, or the direction extending in front of the subject).
- the imaging environment may include the positional relationship between the thermal camera 2 and the subject.
- the imaging environment may include the positional relationship between the thermal camera 2 and the subject at the timing when the thermal camera 2 images the subject.
- the positional relationship between the thermal camera 2 and the subject may include the distance between the thermal camera 2 and the subject.
- the positional relationship between the thermal camera 2 and the subject may include the relationship between the direction in which the thermal camera 2 faces (for example, the direction in which the optical axis of an optical system, such as a lens, provided in the thermal camera 2 extends) and the direction in which the subject faces.
- the imaging environment may include optical properties of the visible camera 1 (for example, optical properties of an optical system such as a lens included in the visible camera 1).
- the imaging environment may include optical characteristics of the thermal camera 2 (for example, optical characteristics of an optical system such as a lens included in the thermal camera 2).
- each of the visible camera 1 and the thermal camera 2 may image a subject moving toward the visible camera 1 and the thermal camera 2, or may image a subject standing still in front of the visible camera 1 and the thermal camera 2.
- the imaging environment in which the visible camera 1 and the thermal camera 2 image a moving subject and the imaging environment in which the visible camera 1 and the thermal camera 2 image a stationary subject are generally different. Therefore, at least one of the condition that the visible camera 1 and the thermal camera 2 image a moving subject and the condition that the visible camera 1 and the thermal camera 2 image a stationary subject may be used as an imaging environment condition.
- the state of the subject reflected in the thermal image IMG_T changes according to the imaging environment.
- the state of a subject reflected in a thermal image IMG_T generated by imaging a moving subject is generally different from the state of a subject reflected in a thermal image IMG_T generated by imaging a stationary subject. Therefore, the operation of extracting at least one extracted image IMG_E that satisfies the imaging environment condition may be regarded as equivalent to the operation of extracting, as the extracted image IMG_E, a sample image IMG_S in which the sample person is captured in the same state as the subject appearing in a thermal image IMG_T generated under the predetermined imaging environment indicated by the imaging environment condition.
- when the visible camera 1 and the thermal camera 2 image a moving subject, the visible camera 1 and the thermal camera 2 are relatively likely to capture the subject from an oblique direction. On the other hand, when the visible camera 1 and the thermal camera 2 image a stationary subject, the visible camera 1 and the thermal camera 2 are relatively likely to capture the subject from the front direction.
- therefore, the condition that the visible camera 1 and the thermal camera 2 image the subject from the front direction may be used as the imaging environment condition corresponding to the case where the visible camera 1 and the thermal camera 2 image a stationary subject.
- likewise, the condition that the visible camera 1 and the thermal camera 2 image the subject from an oblique direction may be used as the imaging environment condition corresponding to the case where the visible camera 1 and the thermal camera 2 image a moving subject.
- in this case, when the imaging environment condition indicates that the subject is imaged from the front direction, the image extraction unit 411 may extract, as the extracted image IMG_E, a sample image IMG_S in which a sample person facing the front is captured.
- on the other hand, when the imaging environment condition indicates that the subject is imaged from an oblique direction, the image extraction unit 411 may extract, as the extracted image IMG_E, a sample image IMG_S in which a sample person facing an oblique direction is captured.
- the image generator 412 uses the extracted image IMG_E extracted in step S41 to generate a learning image IMG_L actually used for machine learning (step S42).
- the image generator 412 generates a plurality of learning images IMG_L (step S42).
- the image generating unit 412 changes the positional relationship between the attention area TA set in the extracted image IMG_E and the sample person's face (that is, the attention part) reflected in the extracted image IMG_E, thereby generating a learning image IMG_L, which is an extracted image IMG_E in which the positional relationship between the attention area TA and the face of the sample person has been changed.
- the image generator 412 may change the positional relationship between the attention area TA and the sample person's face in one extracted image IMG_E in a plurality of different modes.
- the image generating unit 412 can generate a plurality of learning images IMG_L each having a different mode of changing the positional relationship between the attention area TA and the sample person's face from one extracted image IMG_E.
- the image generator 412 can further increase the number of learning images IMG_L used for machine learning. This is a great advantage for machine learning, in which learning efficiency improves as the number of data used for samples increases.
- the image generation unit 412 may change the positional relationship between the attention area TA and the sample person's face based on the imaging environment in which at least one of the visible camera 1 and the thermal camera 2 described above images the subject. Specifically, as described above, the state of the subject reflected in the thermal image IMG_T changes according to the imaging environment. In this case, the image generation unit 412 may change the positional relationship between the attention area TA and the sample person's face so as to generate a learning image IMG_L in which the sample person is captured in the same state as the subject appearing in the thermal image IMG_T generated under the actual imaging environment in which at least one of the visible camera 1 and the thermal camera 2 images the subject.
- visible camera 1 and thermal camera 2 may image a subject moving towards visible camera 1 and thermal camera 2, or a subject standing still in front of visible camera 1 and thermal camera 2.
- for example, when the thermal camera 2 images a moving subject, the image generating unit 412 may change the positional relationship between the attention area TA and the sample person's face so as to generate a learning image IMG_L in which the sample person is captured in the same state as the subject captured in a thermal image IMG_T generated by imaging the moving subject with the thermal camera 2.
- on the other hand, when the thermal camera 2 images a stationary subject, the image generating unit 412 may change the positional relationship between the attention area TA and the sample person's face so as to generate a learning image IMG_L in which the sample person is captured in the same state as the subject captured in a thermal image IMG_T generated by imaging the stationary subject with the thermal camera 2.
- in a thermal image IMG_T generated by imaging a moving subject, the amount of deviation between the center of the attention area TA and the center of the subject's face is likely to be larger than in a thermal image IMG_T generated by imaging a stationary subject.
- therefore, when the thermal camera 2 images a moving subject, the image generator 412 may change the positional relationship between the attention area TA and the sample person's face so as to generate a learning image IMG_L in which the amount of deviation between the center of the attention area TA and the center of the face of the sample person is relatively large.
- on the other hand, when the thermal camera 2 images a stationary subject, the image generator 412 may change the positional relationship between the attention area TA and the sample person's face so as to generate a learning image IMG_L in which the amount of deviation between the center of the attention area TA and the center of the face of the sample person is relatively small.
- furthermore, in a thermal image IMG_T generated by imaging a moving subject, the subject's face is likely to be shifted in more directions with respect to the center of the attention area TA than in a thermal image IMG_T generated by imaging a stationary subject. Therefore, when the thermal camera 2 images a moving subject, the image generation unit 412 may change the positional relationship between the attention area TA and the sample person's face so as to generate a plurality of learning images IMG_L in which the number of directions in which the sample person's face is displaced with respect to the center of the attention area TA is relatively large.
- for example, the image generation unit 412 may generate a plurality of learning images IMG_L in which the sample person's face is shifted in four directions (for example, upward, downward, rightward, and leftward) with respect to the center of the attention area TA.
- on the other hand, when the thermal camera 2 images a stationary subject, the image generation unit 412 may change the positional relationship between the attention area TA and the sample person's face so as to generate a plurality of learning images IMG_L in which the number of directions in which the sample person's face is displaced with respect to the center of the attention area TA is relatively small.
- for example, the image generation unit 412 may generate a plurality of learning images IMG_L in which the face of the sample person is shifted only in one direction or two directions (for example, upward and downward) with respect to the center of the attention area TA.
- when the visible camera 1 and the thermal camera 2 image a moving subject, the visible camera 1 and the thermal camera 2 may image the subject from a position relatively far from the subject. On the other hand, when the visible camera 1 and the thermal camera 2 image a stationary subject, the visible camera 1 and the thermal camera 2 may image the subject from a position relatively close to the subject.
- in this case, when the thermal camera 2 images the subject from a position relatively far from the subject, the image generation unit 412 may change the positional relationship between the attention area TA and the face of the sample person so as to generate a learning image IMG_L in which a sample person with a relatively small face is captured. On the other hand, when the thermal camera 2 images the subject from a position relatively close to the subject, the image generation unit 412 may change the positional relationship between the attention area TA and the face of the sample person so as to generate a learning image IMG_L in which a sample person with a relatively large face is captured.
- the image generation unit 412 may change the positional relationship between the attention area TA and the sample person's face by changing the characteristics of the attention area TA in the extracted image IMG_E.
- the characteristics of the attention area TA may include the position of the attention area TA.
- in this case, the image generator 412 may change the positional relationship between the attention area TA and the face of the sample person by changing the position of the attention area TA in the extracted image IMG_E. That is, the image generator 412 may change the positional relationship between the attention area TA and the face of the sample person by moving the attention area TA within the extracted image IMG_E.
- the characteristics of the attention area TA may include the size of the attention area TA.
- in this case, the image generator 412 may change the positional relationship between the attention area TA and the face of the sample person by changing the size of the attention area TA in the extracted image IMG_E. That is, the image generator 412 may change the positional relationship between the attention area TA and the face of the sample person by enlarging or reducing the attention area TA in the extracted image IMG_E.
- the image generation unit 412 may change the positional relationship between the attention area TA and the sample person's face by changing the characteristics of the extracted image IMG_E in which the attention area TA is set.
- the properties of the extracted image IMG_E may include the position of the extracted image IMG_E (for example, the position relative to the attention area TA). In this case, as shown in FIG. 20A, the image generator 412 may change the positional relationship between the attention area TA and the face of the sample person by changing the position of the extracted image IMG_E with respect to the attention area TA.
- the image generation unit 412 may change the positional relationship between the attention area TA and the face of the sample person by moving (for example, translating) the extracted image IMG_E with respect to the attention area TA.
- the characteristics of the extracted image IMG_E may include the size of the extracted image IMG_E.
- the image generator 412 may change the positional relationship between the attention area TA and the face of the sample person by changing the size of the extracted image IMG_E. That is, the image generator 412 may change the positional relationship between the attention area TA and the sample person's face by enlarging or reducing the extracted image IMG_E.
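The positional-relationship changes described above (moving the attention area TA, or shifting the extracted image relative to it) can be sketched as crop-based augmentation: one extracted image yields several learning images, each with the face displaced differently with respect to the attention area. The 2D-list image representation and the offset values below are illustrative assumptions.

```python
def crop_attention_area(image, top, left, height, width):
    """Crop the attention area TA out of an image given as a 2D list."""
    return [row[left:left + width] for row in image[top:top + height]]

def shifted_crops(image, top, left, height, width, offsets):
    """Generate several learning images IMG_L from one extracted image IMG_E:
    each (dy, dx) offset moves the attention area TA, so the face ends up
    displaced differently with respect to the TA center in each crop."""
    return [crop_attention_area(image, top + dy, left + dx, height, width)
            for dy, dx in offsets]

# no shift plus four-direction shifts (up, down, left, right), as might be
# used for a moving subject
FOUR_DIRECTION_OFFSETS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
```

For a stationary subject, a smaller offset list (for example, only upward and downward shifts) would be used instead.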
- the engine generation unit 413 then generates the determination engine ENG using the plurality of learning images IMG_L generated in step S42 (step S43). That is, the engine generator 413 generates the determination engine ENG by performing machine learning using the plurality of learning images IMG_L generated in step S42 (step S43). Specifically, the engine generator 413 inputs each of the plurality of learning images IMG_L generated in step S42 to the determination engine ENG. As a result, the determination engine ENG outputs a determination result as to whether or not the sample person appearing in each learning image IMG_L is a living body.
- the engine generation unit 413 updates the parameters of the determination engine ENG using a loss function based on the error between the determination result of the determination engine ENG and the correct label 423 corresponding to each learning image IMG_L.
- the engine generation unit 413 sets the parameters of the determination engine ENG so that the error between the determination result of the determination engine ENG and the correct label 423 corresponding to each learning image IMG_L is small (preferably minimized). to update.
- when the determination engine ENG performs so-called binary classification, the engine generation unit 413 may update the parameters of the determination engine ENG using at least one of the index values based on the confusion matrix (for example, accuracy, recall, specificity, precision, and F-measure). As a result, the determination engine ENG is generated.
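The confusion-matrix index values named above can be computed as follows, taking "living body" as the positive class. This is a generic sketch of the standard metric definitions, not an implementation of the engine generation unit 413 itself.

```python
def confusion_matrix(predictions, labels):
    """Count TP/FP/FN/TN for binary liveness classification
    (positive = 'living body')."""
    tp = sum(1 for p, l in zip(predictions, labels) if p and l)
    fp = sum(1 for p, l in zip(predictions, labels) if p and not l)
    fn = sum(1 for p, l in zip(predictions, labels) if not p and l)
    tn = sum(1 for p, l in zip(predictions, labels) if not p and not l)
    return tp, fp, fn, tn

def metrics(tp, fp, fn, tn):
    """Index values derived from the confusion matrix."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    recall      = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision   = tp / (tp + fp) if tp + fp else 0.0
    f_measure   = (2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return accuracy, recall, specificity, precision, f_measure
```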
- the engine generation device 4 generates the learning image IMG_L from the extracted image IMG_E based on the imaging environment in which the thermal camera 2 images the subject.
- that is, the learning image IMG_L reflects information about the imaging environment in which the thermal camera 2 images the subject. Therefore, by performing machine learning using the learning image IMG_L in which the information about the imaging environment is reflected, the engine generation device 4 can generate a determination engine ENG in which that information is reflected.
- for example, by performing machine learning using a learning image IMG_L in which information about a specific imaging environment is reflected, the engine generation device 4 can generate a determination engine ENG capable of determining whether or not the subject is a living body using a thermal image IMG_T generated under that specific imaging environment.
- in this case, using the determination engine ENG in which the information about the specific imaging environment is reflected, the authentication device 3 can determine with high accuracy whether or not the subject is a living body from the thermal image IMG_T generated by imaging the subject with the thermal camera 2 under the specific imaging environment.
- the engine generation device 4 can generate the determination engine ENG that can determine whether or not the subject is a living body with high accuracy.
- the engine generation device 4 may generate a plurality of different determination engines ENG, and the authentication device 3 may select one determination engine ENG from among the plurality of determination engines ENG and determine whether or not the subject is a living body using the selected determination engine ENG. In this case, the authentication device 3 may change the determination engine ENG used for determining whether or not the subject is a living body during the authentication period during which the authentication operation is performed.
- as described above, the amount of deviation between the center of the attention area TA in the thermal image IMG_T and the center of the subject's face (hereinafter simply referred to as "the amount of deviation between the attention area TA and the face") and the direction of deviation between the center of the attention area TA in the thermal image IMG_T and the center of the subject's face (hereinafter simply referred to as "the direction of deviation between the attention area TA and the face") may change depending on the imaging environment.
- therefore, the engine generating device 4 may generate a plurality of types of learning images IMG_L that differ in at least one of the amount of deviation between the attention area TA and the face and the direction of deviation between the attention area TA and the face, and may generate a plurality of determination engines ENG using the plurality of types of learning images IMG_L.
- for example, the engine generation device 4 may (i) generate a first learning image IMG_L by changing the positional relationship between the attention area TA and the face in a first modification mode so that the amount of deviation between the attention area TA and the face falls within a first range, and (ii) generate a second learning image IMG_L by changing the positional relationship between the attention area TA and the face in a second modification mode so that the amount of deviation between the attention area TA and the face falls within a second range different from the first range.
- in this case, the engine generation device 4 may generate a first determination engine ENG using the first learning image IMG_L, and generate a second determination engine ENG using the second learning image IMG_L.
- as one example, the engine generation device 4 may generate a learning image IMG_L#1 in which the face is not displaced from the attention area TA, a learning image IMG_L#2 in which the face is displaced upward from the attention area TA, a learning image IMG_L#3 in which the face is displaced downward from the attention area TA, a learning image IMG_L#4 in which the face is displaced leftward from the attention area TA, and a learning image IMG_L#5 in which the face is displaced rightward from the attention area TA.
- after that, the engine generation device 4 may generate a determination engine ENG#1 using the learning image IMG_L#1, a determination engine ENG#2 using the learning image IMG_L#2, a determination engine ENG#3 using the learning image IMG_L#3, a determination engine ENG#4 using the learning image IMG_L#4, and a determination engine ENG#5 using the learning image IMG_L#5.
- compared to the determination engines ENG#2 to ENG#5, the determination engine ENG#1 can more accurately determine whether or not the subject is a living body using a thermal image IMG_T in which the face is not displaced with respect to the attention area TA.
- compared to the determination engines ENG#1 and ENG#3 to ENG#5, the determination engine ENG#2 can more accurately determine whether or not the subject is a living body using a thermal image IMG_T in which the face is displaced upward with respect to the attention area TA.
- compared to the determination engines ENG#1 to ENG#2 and ENG#4 to ENG#5, the determination engine ENG#3 can more accurately determine whether or not the subject is a living body using a thermal image IMG_T in which the face is displaced downward with respect to the attention area TA.
- compared to the determination engines ENG#1 to ENG#3 and ENG#5, the determination engine ENG#4 can more accurately determine whether or not the subject is a living body using a thermal image IMG_T in which the face is displaced leftward with respect to the attention area TA.
- compared to the determination engines ENG#1 to ENG#4, the determination engine ENG#5 can more accurately determine whether or not the subject is a living body using a thermal image IMG_T in which the face is displaced rightward with respect to the attention area TA.
- in this case, the authentication device 3 may estimate, based on the imaging environment during the authentication period, at least one of the amount of deviation between the attention area TA and the face and the direction of deviation between the attention area TA and the face, and may select, from among the plurality of determination engines ENG, one determination engine ENG corresponding to at least one of the estimated deviation amount and deviation direction. That is, the authentication device 3 may select the determination engine ENG generated using the learning image IMG_L corresponding to at least one of the estimated deviation amount and deviation direction. After that, the authentication device 3 may determine whether or not the subject is a living body using the selected determination engine ENG. As a result, the authentication device 3 can more accurately determine whether or not the subject is a living body, compared to the case where the determination engine ENG used by the authentication device 3 is not selectable.
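Selecting one determination engine ENG based on the estimated deviation can be sketched as a lookup keyed by the estimated shift direction. The estimation rule, its thresholds, and the engine labels below are hypothetical assumptions, since the actual mapping from imaging environment to deviation is not specified here.

```python
def estimate_shift(distance_m, subject_moving):
    """Hypothetical rule: estimate the direction in which the subject's face
    is displaced with respect to the attention area TA, from the imaging
    environment during the authentication period."""
    if not subject_moving:
        return "none"
    return "up" if distance_m > 2.0 else "down"

def select_engine(engines, distance_m, subject_moving):
    """Pick the determination engine ENG trained for the estimated deviation."""
    return engines[estimate_shift(distance_m, subject_moving)]

# ENG#1 to ENG#5, keyed by the face displacement each was trained on
ENGINES = {"none": "ENG#1", "up": "ENG#2", "down": "ENG#3",
           "left": "ENG#4", "right": "ENG#5"}
```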
- the imaging environment used for estimating at least one of the amount of deviation between the attention area TA and the face and the direction of deviation between the attention area TA and the face may include, for example, the positional relationship between the visible camera 1 and the subject (typically, the distance between the visible camera 1 and the subject) at the timing when the visible camera 1 images the subject (that is, at the authentication time ta described above).
- the imaging environment used for estimating at least one of the amount of deviation between the attention area TA and the face and the direction of deviation between the attention area TA and the face may include the positional relationship between the visible camera 1 and the thermal camera 2.
- that is, the authentication device 3 may estimate at least one of the amount of deviation between the attention area TA and the face and the direction of deviation between the attention area TA and the face from the positional relationship between the visible camera 1 and the subject and the positional relationship between the visible camera 1 and the thermal camera 2.
- the system SYS4 may include a measuring device for measuring the positional relationship between the visible camera 1 and the subject (typically, the distance between the visible camera 1 and the subject).
- the determination means specifies, based on the person image, a region of interest to be focused on in at least one thermal image among the plurality of thermal images for determining whether the subject is a living body, adjusts the position of the region of interest within the at least one thermal image based on the at least one thermal image, and determines whether the subject is a living body based on the temperature distribution within the position-adjusted region of interest;
- the authentication device according to appendix 1.
- the determination means specifies, based on the person image, a region of interest to be focused on in each of the plurality of thermal images for determining whether the subject is a living body, selects from among the plurality of thermal images, based on the plurality of thermal images, at least one thermal image in which a site of interest of the subject to be focused on for determining whether the subject is a living body appears within the region of interest, and determines whether the subject is a living body based on the at least one selected thermal image;
- the authentication device according to appendix 1 or 2.
- the determination means determines whether or not the subject is a living body using a determination engine capable of determining, from the plurality of thermal images, whether the subject is a living body, and the determination engine is generated by a learning operation including: a first operation of extracting at least one sample image as an extracted image from a learning data set including a plurality of sample images each showing the body surface temperature distribution of a sample person and each having set therein a region of interest to be focused on for determining whether the sample person is a living body; a second operation of generating a learning image by changing, based on an imaging environment in which the visible camera and the thermal camera image the subject, the positional relationship between the region of interest set in the extracted image and a site of interest of the sample person to be focused on for determining whether the sample person is a living body; and a third operation of performing machine learning using the learning image;
- the authentication device according to any one of appendices 1 to 3.
- the second operation changes the positional relationship between the region of interest and the site of interest by changing at least one of the position and size of the region of interest within the extracted image and the position and size of the extracted image.
- the determination means selects, based on the imaging environment, one determination engine from among a plurality of determination engines each generated by one of a plurality of the second operations whose modes of changing the positional relationship differ from one another, and determines whether the subject is a living body using the selected determination engine.
- the imaging environment includes the positional relationship between the subject and the visible camera at the first time and the positional relationship between the visible camera and the thermal camera.
- An engine generation device for generating a determination engine for determining, using a thermal image generated by a thermal camera imaging a subject, whether or not the subject is a living body, the engine generation device comprising:
- extraction means for extracting at least one sample image as an extracted image from a learning data set including a plurality of sample images each showing the body surface temperature distribution of a sample person and each having set therein a region of interest to be focused on for determining whether the sample person is a living body;
- image generation means for generating a learning image by changing, based on an imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and a site of interest of the sample person to be focused on for determining whether the sample person is a living body; and
- engine generation means for generating the determination engine by performing machine learning using the learning image.
- the image generation means changes the positional relationship between the region of interest and the site of interest by changing at least one of the position and size of the region of interest within the extracted image and the position and size of the extracted image;
- the engine generation device according to appendix 8.
- the image generation means generates a first learning image by changing the positional relationship between the region of interest and the site of interest set in the extracted image in a first modification mode, and generates a second learning image by changing that positional relationship in a second modification mode different from the first modification mode;
- the engine generation means generates a first determination engine by performing machine learning using the first learning image, and generates a second determination engine by performing machine learning using the second learning image;
- the engine generation device according to appendix 8 or 9.
- [Appendix 11] An authentication method including: authenticating a subject using a person image generated by a visible camera imaging the subject at a first time; and determining whether or not the subject is a living body using a plurality of thermal images generated by the thermal camera imaging the subject at, among a plurality of times at which the thermal camera imaged the subject, a second time closest to the first time and a third time before or after the second time.
- [Appendix 13] A recording medium on which is recorded a computer program that causes a computer to execute an authentication method including: authenticating a subject using a person image generated by a visible camera imaging the subject at a first time; and determining whether or not the subject is a living body using a plurality of thermal images generated by the thermal camera imaging the subject at, among a plurality of times at which the thermal camera imaged the subject, a second time closest to the first time and a third time before or after the second time.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Security & Cryptography (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Collating Specific Patterns (AREA)
Abstract
Description
First, a first embodiment of an authentication device, an engine generation device, an authentication method, an engine generation method, and a recording medium will be described. Below, with reference to FIG. 1, the authentication device, authentication method, and recording medium of the first embodiment are described using an authentication device 1000 to which they are applied. FIG. 1 is a block diagram showing the configuration of the authentication device 1000 according to the first embodiment.
Next, a second embodiment of the authentication device, the engine generation device, the authentication method, the engine generation method, and the recording medium will be described. Below, with reference to FIG. 2, the engine generation device, engine generation method, and recording medium of the second embodiment are described using an engine generation device 2000 to which they are applied. FIG. 2 is a block diagram showing the configuration of the engine generation device 2000 according to the second embodiment.
Next, a third embodiment of the authentication device, the engine generation device, the authentication method, the engine generation method, and the recording medium will be described. Below, the authentication device, authentication method, and recording medium of the third embodiment are described using an authentication system SYS3 to which they are applied.
First, the configuration of the authentication system SYS3 according to the third embodiment is described with reference to FIG. 3. FIG. 3 is a block diagram showing the configuration of the authentication system SYS3 according to the third embodiment.
Next, the configuration of the authentication device 3 is described with reference to FIG. 4. FIG. 4 is a block diagram showing the configuration of the authentication device 3.
Next, the flow of the authentication operation performed by the authentication device 3 is described with reference to FIG. 5. FIG. 5 is a flowchart showing the flow of the authentication operation performed by the authentication device 3.
As described above, in the third embodiment the authentication device 3 determines whether or not the subject is a living body using the thermal image IMG_T generated by the thermal camera 2 imaging the subject at the attention time tb determined from the authentication time ta (in particular, at the closest time tb1). Consequently, compared with a comparative authentication device that determines whether the subject is a living body using a thermal image IMG_T generated by the thermal camera 2 imaging the subject at an arbitrary time chosen without regard to the authentication time ta, the authentication device 3 can determine with higher accuracy whether or not the subject is a living body.
Next, modifications of the authentication device 3 according to the third embodiment will be described. The authentication device 1000 according to the first embodiment may also adopt the same constituent features as the modifications described below.
In the description above, the closest time tb1, i.e., the time closest to the authentication time ta, is used as the attention time tb. In the first modification, in addition to the closest time tb1, at least one adjacent time tb2 before or after the closest time tb1 is also used as an attention time tb. That is, in the first modification, the attention time tb may include, in addition to the closest time tb1, at least one adjacent time tb2 before or after the closest time tb1 among the plurality of times at which the plurality of thermal images IMG_T acquired in step S11 of FIG. 5 were captured. In this case, in step S15 of FIG. 5, the living body determination unit 312 acquires a plurality of thermal images IMG_T including the thermal image IMG_T captured at the closest time tb1 and the thermal image(s) IMG_T captured at the adjacent time(s) tb2.
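The frame selection just described — the frame closest to the authentication time ta plus its immediate neighbours — can be sketched as follows (the data layout and function name are illustrative assumptions, not taken from the patent):

```python
# Sketch of the attention-time selection: from thermal frames with capture
# timestamps, take the one closest to the authentication time ta (tb1),
# plus its immediate neighbours before and after (tb2).

def select_frames(timestamps, ta):
    """Return indices of the closest frame and its existing neighbours."""
    closest = min(range(len(timestamps)),
                  key=lambda i: abs(timestamps[i] - ta))
    return [i for i in (closest - 1, closest, closest + 1)
            if 0 <= i < len(timestamps)]

ts = [10.0, 10.5, 11.0, 11.5, 12.0]  # thermal capture times (seconds)
print(select_frames(ts, 11.1))  # [1, 2, 3] -> frames at 10.5, 11.0, 11.5
```

Boundary cases are handled by filtering out-of-range indices, so the first and last frames simply yield fewer neighbours.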
In the second modification, the authentication device 3 may adjust, within the thermal image IMG_T, the position of the attention area TA identified from the position of the face area FA of the person image IMG_P. The authentication operation in the second modification is described below with reference to FIG. 11. FIG. 11 is a flowchart showing the flow of the authentication operation in the second modification. Processing that has already been described is given the same step numbers, and its detailed description is omitted.
As described in the second modification, the temperature distribution shown by the thermal image IMG_T indirectly indicates the position at which the subject appears within the thermal image IMG_T. In this case, the authentication device 3 may determine whether, in the thermal image IMG_T used for determining whether the subject is a living body, the subject's face appears properly within the attention area TA, or at least part of the subject's face appears at a position outside the attention area TA. When it is determined that at least part of the subject's face appears outside the attention area TA, the authentication device 3 may determine whether the subject is a living body using another thermal image IMG_T in which the subject's face appears properly within the attention area TA. As one example, the authentication device 3 may determine whether the subject is a living body using another thermal image IMG_T in which the subject's face appears at, or relatively close to, the center of the attention area TA.
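The check described here can be sketched by treating the warm region of each thermal frame as a proxy for the face position (the temperature threshold, array sizes, and function names are illustrative assumptions, not the patent's implementation):

```python
# Sketch: keep the thermal frame whose warm-region centre lies closest to
# the centre of the attention area TA. The 30-degree threshold and the tiny
# 8x8 "thermal" arrays are illustrative assumptions.
import numpy as np

def warm_centroid(frame, thresh=30.0):
    """Centroid (x, y) of pixels warmer than thresh, or None if none."""
    ys, xs = np.nonzero(frame > thresh)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

def best_frame(frames, ta_center):
    """Index of the frame whose warm centroid is nearest ta_center."""
    def dist(f):
        c = warm_centroid(f)
        return float("inf") if c is None else np.hypot(c[0] - ta_center[0],
                                                       c[1] - ta_center[1])
    return min(range(len(frames)), key=lambda i: dist(frames[i]))

# Two 8x8 frames: a face blob centred in TA vs. shifted into a corner.
centered = np.zeros((8, 8)); centered[3:5, 3:5] = 36.0
shifted = np.zeros((8, 8)); shifted[0:2, 0:2] = 36.0
print(best_frame([shifted, centered], ta_center=(3.5, 3.5)))  # 1
```

A real implementation would restrict the warm-pixel search to the neighbourhood of the attention area and account for warm background objects; this sketch only illustrates the selection criterion.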
In the description above, the authentication device 3, which authenticates the subject appearing in the person image IMG_P, determines whether the subject is a living body using the thermal image IMG_T. However, an arbitrary spoofing determination device that does not authenticate the subject appearing in the person image IMG_P may, like the authentication device 3 described above, use the thermal image IMG_T to determine whether the subject appearing in the thermal image IMG_T is a living body. In other words, an arbitrary spoofing determination device may determine whether a living body appears in the thermal image IMG_T. In this case as well, the spoofing determination device, like the authentication device 3 described above, can determine with relatively high accuracy whether the subject is a living body.
Next, a fourth embodiment of the authentication device, the engine generation device, the authentication method, the engine generation method, and the recording medium will be described. Below, these are described using an authentication system SYS4 to which the authentication device, engine generation device, authentication method, engine generation method, and recording medium of the fourth embodiment are applied.
First, the configuration of the authentication system SYS4 according to the fourth embodiment is described with reference to FIG. 15. FIG. 15 is a block diagram showing the configuration of the authentication system SYS4 according to the fourth embodiment. Constituent elements that have already been described are given the same reference signs, and their detailed description is omitted.
Next, the configuration of the engine generation device 4 according to the fourth embodiment is described with reference to FIG. 16. FIG. 16 is a block diagram showing the configuration of the engine generation device 4 according to the fourth embodiment.
Next, the flow of the engine generation operation performed by the engine generation device 4 is described with reference to FIG. 17. FIG. 17 is a flowchart showing the flow of the engine generation operation performed by the engine generation device 4.
As described above, in the fourth embodiment the engine generation device 4 generates the learning image IMG_L from the extracted image IMG_E based on the imaging environment in which the thermal camera 2 images the subject. In this case, information on the imaging environment in which the thermal camera 2 images the subject is reflected in the learning image IMG_L. Therefore, by performing machine learning using the learning image IMG_L in which the imaging-environment information is reflected, the engine generation device 4 can generate a determination engine ENG in which that information is reflected. For example, by performing machine learning using a learning image IMG_L reflecting information on a certain specific imaging environment, the engine generation device 4 can generate a determination engine ENG for determining whether the subject is a living body using a thermal image IMG_T generated by the thermal camera 2 imaging the subject under that specific imaging environment. As a result, by using a determination engine ENG reflecting information on the specific imaging environment, the authentication device 3 can determine with higher accuracy whether the subject is a living body from a thermal image IMG_T generated under that environment, compared with the case of using a determination engine ENG in which no such information is reflected. In other words, the engine generation device 4 can generate a determination engine ENG capable of determining with high accuracy whether the subject is a living body.
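The augmentation at the heart of this embodiment — shifting the attention area of an extracted sample image according to the expected deviation for a given imaging environment — can be sketched as follows (the shift values, data layout, and function name are illustrative assumptions):

```python
# Sketch of the second operation: derive a learning image from an extracted
# sample image by moving the attention area by an environment-dependent
# offset. ROI format (x, y, w, h) and the shift values are assumptions.
import numpy as np

def make_learning_image(extracted, roi, shift):
    """roi: (x, y, w, h) attention area; shift: (dx, dy) pixel offset
    derived from the imaging environment. Returns the (unchanged) image
    paired with the shifted, image-bounds-clipped attention area."""
    x, y, w, h = roi
    dx, dy = shift
    H, W = extracted.shape
    nx = int(np.clip(x + dx, 0, W - w))
    ny = int(np.clip(y + dy, 0, H - h))
    return extracted, (nx, ny, w, h)

img = np.zeros((64, 64))  # stand-in for a body-surface-temperature map
_, shifted_roi = make_learning_image(img, roi=(20, 20, 16, 16), shift=(6, -4))
print(shifted_roi)  # (26, 16, 16, 16)
```

Training on images whose attention areas were shifted in a given mode yields an engine matched to environments that produce that deviation; training with several distinct shift modes yields the plurality of engines the authentication device 3 later selects among.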
The engine generation device 4 may generate a plurality of different determination engines ENG, and the authentication device 3 may select one of the plurality of determination engines ENG and determine whether the subject is a living body using the selected determination engine ENG. In this case, the authentication device 3 may change, during the authentication period in which it performs the authentication operation, the determination engine ENG used for determining whether the subject is a living body.
Regarding the embodiments described above, the following appendices are further disclosed.
[Appendix 1]
An authentication device comprising:
authentication means for authenticating a subject using a person image generated by a visible camera imaging the subject at a first time; and
determination means for determining whether or not the subject is a living body using a plurality of thermal images generated by a thermal camera imaging the subject at, among a plurality of times at which the thermal camera imaged the subject, a second time closest to the first time and a third time before or after the second time.
[Appendix 2]
The authentication device according to appendix 1, wherein the determination means specifies, based on the person image, a region of interest to be focused on in at least one thermal image among the plurality of thermal images for determining whether the subject is a living body, adjusts the position of the region of interest within the at least one thermal image based on the at least one thermal image, and determines whether the subject is a living body based on the temperature distribution within the position-adjusted region of interest.
[Appendix 3]
The authentication device according to appendix 1 or 2, wherein the determination means specifies, based on the person image, a region of interest to be focused on in each of the plurality of thermal images for determining whether the subject is a living body, selects from among the plurality of thermal images, based on the plurality of thermal images, at least one thermal image in which a site of interest of the subject to be focused on for determining whether the subject is a living body appears within the region of interest, and determines whether the subject is a living body based on the at least one selected thermal image.
[Appendix 4]
The authentication device according to any one of appendices 1 to 3, wherein the determination means determines whether the subject is a living body using a determination engine capable of determining, from the plurality of thermal images, whether the subject is a living body, and
the determination engine is generated by a learning operation including: a first operation of extracting at least one sample image as an extracted image from a learning data set including a plurality of sample images each showing the body surface temperature distribution of a sample person and each having set therein a region of interest to be focused on for determining whether the sample person is a living body; a second operation of generating a learning image by changing, based on an imaging environment in which the visible camera and the thermal camera image the subject, the positional relationship between the region of interest set in the extracted image and a site of interest of the sample person to be focused on for determining whether the sample person is a living body; and a third operation of performing machine learning using the learning image.
[Appendix 5]
The authentication device according to appendix 4, wherein the second operation changes the positional relationship between the region of interest and the site of interest by changing at least one of the position and size of the region of interest within the extracted image and the position and size of the extracted image.
[Appendix 6]
The authentication device according to appendix 4 or 5, wherein the determination means selects, based on the imaging environment, one determination engine from among a plurality of determination engines each generated by one of a plurality of the second operations whose modes of changing the positional relationship differ from one another, and determines whether the subject is a living body using the selected determination engine.
[Appendix 7]
The authentication device according to appendix 6, wherein the imaging environment includes the positional relationship between the subject and the visible camera at the first time and the positional relationship between the visible camera and the thermal camera.
[Appendix 8]
An engine generation device for generating a determination engine for determining, using a thermal image generated by a thermal camera imaging a subject, whether or not the subject is a living body, the engine generation device comprising:
extraction means for extracting at least one sample image as an extracted image from a learning data set including a plurality of sample images each showing the body surface temperature distribution of a sample person and each having set therein a region of interest to be focused on for determining whether the sample person is a living body;
image generation means for generating a learning image by changing, based on an imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and a site of interest of the sample person to be focused on for determining whether the sample person is a living body; and
engine generation means for generating the determination engine by performing machine learning using the learning image.
[Appendix 9]
The engine generation device according to appendix 8, wherein the image generation means changes the positional relationship between the region of interest and the site of interest by changing at least one of the position and size of the region of interest within the extracted image and the position and size of the extracted image.
[Appendix 10]
The engine generation device according to appendix 8 or 9, wherein the image generation means generates a first learning image by changing the positional relationship between the region of interest and the site of interest set in the extracted image in a first modification mode and generates a second learning image by changing that positional relationship in a second modification mode different from the first modification mode, and
the engine generation means generates a first determination engine by performing machine learning using the first learning image and generates a second determination engine by performing machine learning using the second learning image.
[Appendix 11]
An authentication method including:
authenticating a subject using a person image generated by a visible camera imaging the subject at a first time; and
determining whether or not the subject is a living body using a plurality of thermal images generated by a thermal camera imaging the subject at, among a plurality of times at which the thermal camera imaged the subject, a second time closest to the first time and a third time before or after the second time.
[Appendix 12]
An engine generation method for generating a determination engine for determining, using a thermal image generated by a thermal camera imaging a subject, whether or not the subject is a living body, the method including:
extracting at least one sample image as an extracted image from a learning data set including a plurality of sample images each showing the body surface temperature distribution of a sample person and each having set therein a region of interest to be focused on for determining whether the sample person is a living body;
generating a learning image by changing, based on an imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and a site of interest of the sample person to be focused on for determining whether the sample person is a living body; and
generating the determination engine by performing machine learning using the learning image.
[Appendix 13]
A recording medium on which is recorded a computer program that causes a computer to execute an authentication method including:
authenticating a subject using a person image generated by a visible camera imaging the subject at a first time; and
determining whether or not the subject is a living body using a plurality of thermal images generated by a thermal camera imaging the subject at, among a plurality of times at which the thermal camera imaged the subject, a second time closest to the first time and a third time before or after the second time.
[Appendix 14]
A recording medium on which is recorded a computer program that causes a computer to execute an engine generation method for generating a determination engine for determining, using a thermal image generated by a thermal camera imaging a subject, whether or not the subject is a living body, the method including:
extracting at least one sample image as an extracted image from a learning data set including a plurality of sample images each showing the body surface temperature distribution of a sample person and each having set therein a region of interest to be focused on for determining whether the sample person is a living body;
generating a learning image by changing, based on an imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and a site of interest of the sample person to be focused on for determining whether the sample person is a living body; and
generating the determination engine by performing machine learning using the learning image.
1 visible camera
2 thermal camera
3 authentication device
31 arithmetic device
311 authentication unit
312 living body determination unit
313 entry/exit management unit
32 storage device
321 registered person DB
322 registered body surface temperature distribution DB
4 engine generation device
41 arithmetic device
411 image extraction unit
412 image generation unit
413 engine generation unit
42 storage device
420 learning data set
421 unit data
422 region-of-interest information
423 correct answer label
1000 authentication device
1001 authentication unit
1002 determination unit
2000 engine generation device
2001 extraction unit
2002 image generation unit
2003 engine generation unit
IMG_P person image
IMG_T thermal image
IMG_S sample image
IMG_E extracted image
IMG_L learning image
FA face area
TA attention area (region of interest)
ENG determination engine
Claims (14)
- An authentication device comprising: authentication means for authenticating a subject using a person image generated by a visible camera imaging the subject at a first time; and determination means for determining whether or not the subject is a living body using a plurality of thermal images generated by a thermal camera imaging the subject at, among a plurality of times at which the thermal camera imaged the subject, a second time closest to the first time and a third time before or after the second time.
- The authentication device according to claim 1, wherein the determination means specifies, based on the person image, a region of interest to be focused on in at least one thermal image among the plurality of thermal images for determining whether the subject is a living body, adjusts the position of the region of interest within the at least one thermal image based on the at least one thermal image, and determines whether the subject is a living body based on the temperature distribution within the position-adjusted region of interest.
- The authentication device according to claim 1 or 2, wherein the determination means specifies, based on the person image, a region of interest to be focused on in each of the plurality of thermal images for determining whether the subject is a living body, selects from among the plurality of thermal images, based on the plurality of thermal images, at least one thermal image in which a site of interest of the subject to be focused on for determining whether the subject is a living body appears within the region of interest, and determines whether the subject is a living body based on the at least one selected thermal image.
- The authentication device according to any one of claims 1 to 3, wherein the determination means determines whether the subject is a living body using a determination engine capable of determining, from the plurality of thermal images, whether the subject is a living body, and the determination engine is generated by a learning operation including: a first operation of extracting at least one sample image as an extracted image from a learning data set including a plurality of sample images each showing the body surface temperature distribution of a sample person and each having set therein a region of interest to be focused on for determining whether the sample person is a living body; a second operation of generating a learning image by changing, based on an imaging environment in which the visible camera and the thermal camera image the subject, the positional relationship between the region of interest set in the extracted image and a site of interest of the sample person to be focused on for determining whether the sample person is a living body; and a third operation of performing machine learning using the learning image.
- The authentication device according to claim 4, wherein the second operation changes the positional relationship between the region of interest and the site of interest by changing at least one of the position and size of the region of interest within the extracted image and the position and size of the extracted image.
- The authentication device according to claim 4 or 5, wherein the determination means selects, based on the imaging environment, one determination engine from among a plurality of determination engines each generated by one of a plurality of the second operations whose modes of changing the positional relationship differ from one another, and determines whether the subject is a living body using the selected determination engine.
- The authentication device according to claim 6, wherein the imaging environment includes the positional relationship between the subject and the visible camera at the first time and the positional relationship between the visible camera and the thermal camera.
- An engine generation device for generating a determination engine for determining, using a thermal image generated by a thermal camera imaging a subject, whether or not the subject is a living body, the engine generation device comprising: extraction means for extracting at least one sample image as an extracted image from a learning data set including a plurality of sample images each showing the body surface temperature distribution of a sample person and each having set therein a region of interest to be focused on for determining whether the sample person is a living body; image generation means for generating a learning image by changing, based on an imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and a site of interest of the sample person to be focused on for determining whether the sample person is a living body; and engine generation means for generating the determination engine by performing machine learning using the learning image.
- The engine generation device according to claim 8, wherein the image generation means changes the positional relationship between the region of interest and the site of interest by changing at least one of the position and size of the region of interest within the extracted image and the position and size of the extracted image.
- The engine generation device according to claim 8 or 9, wherein the image generation means generates a first learning image by changing the positional relationship between the region of interest and the site of interest set in the extracted image in a first modification mode and generates a second learning image by changing that positional relationship in a second modification mode different from the first modification mode, and the engine generation means generates a first determination engine by performing machine learning using the first learning image and generates a second determination engine by performing machine learning using the second learning image.
- An authentication method including: authenticating a subject using a person image generated by a visible camera imaging the subject at a first time; and determining whether or not the subject is a living body using a plurality of thermal images generated by a thermal camera imaging the subject at, among a plurality of times at which the thermal camera imaged the subject, a second time closest to the first time and a third time before or after the second time.
- An engine generation method for generating a determination engine for determining, using a thermal image generated by a thermal camera imaging a subject, whether or not the subject is a living body, the method including: extracting at least one sample image as an extracted image from a learning data set including a plurality of sample images each showing the body surface temperature distribution of a sample person and each having set therein a region of interest to be focused on for determining whether the sample person is a living body; generating a learning image by changing, based on an imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and a site of interest of the sample person to be focused on for determining whether the sample person is a living body; and generating the determination engine by performing machine learning using the learning image.
- A recording medium on which is recorded a computer program that causes a computer to execute an authentication method including: authenticating a subject using a person image generated by a visible camera imaging the subject at a first time; and determining whether or not the subject is a living body using a plurality of thermal images generated by a thermal camera imaging the subject at, among a plurality of times at which the thermal camera imaged the subject, a second time closest to the first time and a third time before or after the second time.
- A recording medium on which is recorded a computer program that causes a computer to execute an engine generation method for generating a determination engine for determining, using a thermal image generated by a thermal camera imaging a subject, whether or not the subject is a living body, the method including: extracting at least one sample image as an extracted image from a learning data set including a plurality of sample images each showing the body surface temperature distribution of a sample person and each having set therein a region of interest to be focused on for determining whether the sample person is a living body; generating a learning image by changing, based on an imaging environment in which the thermal camera images the subject, the positional relationship between the region of interest set in the extracted image and a site of interest of the sample person to be focused on for determining whether the sample person is a living body; and generating the determination engine by performing machine learning using the learning image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/041473 WO2023084667A1 (ja) | 2021-11-11 | 2021-11-11 | 認証装置、エンジン生成装置、認証方法、エンジン生成方法、及び、記録媒体 |
EP21964022.4A EP4432142A1 (en) | 2021-11-11 | 2021-11-11 | Authentication device, engine generation device, authentication method, engine generation method, and recording medium |
JP2023559283A JPWO2023084667A1 (ja) | 2021-11-11 | 2021-11-11 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2021/041473 WO2023084667A1 (ja) | 2021-11-11 | 2021-11-11 | 認証装置、エンジン生成装置、認証方法、エンジン生成方法、及び、記録媒体 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023084667A1 true WO2023084667A1 (ja) | 2023-05-19 |
Family
ID=86335380
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2021/041473 WO2023084667A1 (ja) | 2021-11-11 | 2021-11-11 | 認証装置、エンジン生成装置、認証方法、エンジン生成方法、及び、記録媒体 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4432142A1 (ja) |
JP (1) | JPWO2023084667A1 (ja) |
WO (1) | WO2023084667A1 (ja) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005115460A (ja) | 2003-10-03 | 2005-04-28 | Toshiba Corp | 顔照合装置および通行制御装置 |
JP2005259049A (ja) | 2004-03-15 | 2005-09-22 | Omron Corp | 顔面照合装置 |
WO2009107237A1 (ja) | 2008-02-29 | 2009-09-03 | グローリー株式会社 | 生体認証装置 |
JP2011067371A (ja) | 2009-09-25 | 2011-04-07 | Glory Ltd | 体温検査装置、体温検査システムおよび体温検査方法 |
JP2014078052A (ja) | 2012-10-09 | 2014-05-01 | Sony Corp | 認証装置および方法、並びにプログラム |
US20210117529A1 (en) * | 2018-06-13 | 2021-04-22 | Veridas Digital Authentication Solutions, S.L. | Authenticating an identity of a person |
JP2021135679A (ja) * | 2020-02-26 | 2021-09-13 | コニカミノルタ株式会社 | 加工機状態推定システム、および加工機状態推定プログラム |
WO2021220423A1 (ja) * | 2020-04-28 | 2021-11-04 | 日本電気株式会社 | 認証装置、認証システム、認証方法および認証プログラム |
Also Published As
Publication number | Publication date |
---|---|
EP4432142A1 (en) | 2024-09-18 |
JPWO2023084667A1 (ja) | 2023-05-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10482230B2 (en) | Face-controlled liveness verification | |
US10977356B2 (en) | Authentication using facial image comparison | |
CN109948408B (zh) | 活性测试方法和设备 | |
US10268910B1 (en) | Authentication based on heartbeat detection and facial recognition in video data | |
CN107995979B (zh) | 用于对用户进行认证的系统、方法和机器可读介质 | |
Peixoto et al. | Face liveness detection under bad illumination conditions | |
EP3862897B1 (en) | Facial recognition for user authentication | |
US11489866B2 (en) | Systems and methods for private authentication with helper networks | |
US20220277065A1 (en) | Authentication using stored authentication image data | |
US20230368582A1 (en) | Authentication device, authentication method, and recording medium | |
JP5850138B2 (ja) | 生体認証装置、生体認証方法、および生体認証プログラム | |
US20230306792A1 (en) | Spoof Detection Based on Challenge Response Analysis | |
KR20220136960A (ko) | 부정 행위를 방지하는 안면윤곽선 인식 인공지능을 사용한 온라인 시험 시스템 및 그 방법 | |
WO2023024734A1 (zh) | 人脸活体检测方法及装置 | |
CN110516426A (zh) | 身份认证方法、认证终端、装置及可读存储介质 | |
WO2023084667A1 (ja) | 認証装置、エンジン生成装置、認証方法、エンジン生成方法、及び、記録媒体 | |
KR20210136771A (ko) | 안면윤곽선 인식 인공지능을 사용한 ubt 시스템 및 그 방법 | |
JP7400924B2 (ja) | 情報提供装置、情報提供方法、およびプログラム | |
KR100653416B1 (ko) | 독립형 얼굴인식 시스템 | |
JP2020135666A (ja) | 認証装置、認証用端末、認証方法、プログラム及び記録媒体 | |
WO2024105778A1 (ja) | 情報処理装置、情報処理方法、及び、記録媒体 | |
KR102318051B1 (ko) | 사용자 인증 시스템에서 여백을 포함한 얼굴영역 이미지를 이용한 라이브니스 검사방법 | |
JP2023079045A (ja) | 画像処理装置、画像処理方法、およびプログラム | |
JP2024130115A (ja) | 情報処理装置、情報処理方法、及びプログラム | |
JP2024080541A (ja) | 顔認証の方法、プログラム、および、コンピューター・システム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21964022 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18705055 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2023559283 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2021964022 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2021964022 Country of ref document: EP Effective date: 20240611 |