WO2018086543A1 - Living body identification method, identity authentication method, terminal, server and storage medium - Google Patents


Info

Publication number
WO2018086543A1
WO2018086543A1 PCT/CN2017/109989
Authority
WO
WIPO (PCT)
Prior art keywords
face
region
pulse
characteristic curve
sub
Prior art date
Application number
PCT/CN2017/109989
Other languages
English (en)
Chinese (zh)
Inventor
赵凌
李季檩
Original Assignee
腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Company Limited)
Publication of WO2018086543A1 publication Critical patent/WO2018086543A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/40 Spoof detection, e.g. liveness detection
    • G06V 40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/15 Biometric patterns based on physiological signals, e.g. heartbeat, blood flow

Definitions

  • the present application relates to the field of computer technology, and in particular, to a living body discrimination method, an identity authentication method, a terminal, a server, and a storage medium.
  • Face recognition has been widely used in fields such as face-based access control, finance, and identity verification, to prevent users from using illegitimate photos to fraudulently register bank accounts through face recognition systems or machines.
  • Traditional living body discrimination technology usually needs to combine certain interactions in practical application scenarios, such as head shaking and blinking, distinguishing real people from photos by the positional movement of certain points on the face. Traditional optical plethysmography uses an additional close-contact instrument to detect the blood volume change at an extremity of the human body, estimates the person's pulse, and distinguishes real people from photos according to the pulse variation.
  • According to various embodiments of the present application, a living body discrimination method, an identity authentication method, a terminal, a server, and a storage medium are provided.
  • a living body discrimination method comprising:
  • the terminal collects a multi-frame face image
  • the terminal extracts a face region for each frame of the face image
  • the terminal acquires the illumination intensity of the face region, and calculates a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region;
  • the terminal establishes a pulse characteristic curve according to the pulse feature corresponding to the face image of each frame.
  • The terminal compares the pulse characteristic curve with a pre-stored standard non-living pulse characteristic curve; if the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than a preset feature threshold, it is determined that a living face image has been collected; otherwise, it is determined that a non-living face image has been collected.
  • An identity authentication method comprising:
  • the server receives a user identity authentication request sent by the terminal, where the user identity authentication request carries the user identifier;
  • the server extracts a face region for each frame of the face image
  • the server acquires the illumination intensity of the face region, and calculates a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region;
  • the server establishes a pulse characteristic curve according to the pulse feature corresponding to each frame of the face image
  • The server compares the pulse characteristic curve with a pre-stored standard pulse characteristic curve corresponding to the user identifier; if the difference between the characteristic value of the pulse characteristic curve and the characteristic value of the standard pulse characteristic curve is within a preset range, the user identity authentication passes; otherwise, the user identity authentication fails.
  • A terminal includes a memory and a processor, wherein the memory stores computer readable instructions that, when executed by the processor, cause the processor to perform the following steps:
  • a server comprising a memory and a processor, the memory storing computer readable instructions, the computer readable instructions being executed by the processor such that the processor performs the following steps:
  • The user identity authentication is performed according to the comparison result: if the difference between the characteristic value of the pulse characteristic curve and the characteristic value of the standard pulse characteristic curve is within a preset range, the user identity authentication passes; otherwise, the user identity authentication fails.
  • One or more non-volatile readable storage media storing computer readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • One or more non-transitory readable storage mediums storing computer readable instructions, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • 1 is an application environment diagram of a living body discrimination method and an identity authentication method in an embodiment
  • 2A is a diagram showing the internal structure of a server in an embodiment
  • 2B is an internal structural diagram of a terminal in an embodiment
  • FIG. 3 is a flow chart of a living body discrimination method in an embodiment
  • FIG. 4 is a diagram showing a result of face sub-region segmentation in the method of FIG. 3 in an embodiment
  • Figure 5 is a flow chart of the pulse characteristic calculation method of Figure 3 in one embodiment
  • FIG. 6 is a flow chart of an identity authentication method in an embodiment
  • FIG. 7 is a flow chart of the pulse characteristic calculation method of FIG. 6 in one embodiment
  • FIG. 9 is a structural block diagram of a terminal in an embodiment
  • Figure 10 is a block diagram showing the structure of a server in an embodiment
  • Figure 11 is a block diagram showing the structure of a server in another embodiment.
  • the living body discrimination method provided by the embodiment of the present application can be applied to the environment as shown in FIG. 1.
  • The server 102 can receive and process the multiple frames of face images collected by the terminal 104. Specifically, the server 102 communicates with the terminal 104 through the network, receives the collected multi-frame face images sent by the terminal 104, extracts the face region from each frame of the face image, acquires the illumination intensity of the face region, calculates the pulse feature corresponding to each frame of the face image, establishes a pulse characteristic curve, compares the established pulse characteristic curve with the standard non-living pulse characteristic curve, performs the living body discrimination according to the comparison result, and sends the determination result to the terminal 104.
  • Terminals herein include, but are not limited to, various personal computers, notebook computers, smart phones, tablets, portable wearable devices, and the like. It should be noted that, in other embodiments, the terminal may itself collect the multi-frame face images, process them directly, and determine whether a living body is present.
  • the identity authentication method provided by the embodiment of the present application is also applicable to the environment shown in FIG. 1.
  • the server 102 can receive the user identity authentication request sent by the terminal 104, and can also return the user identity authentication result to the terminal 104.
  • Specifically, the server 102 communicates with the terminal 104 through the network, receives the user identity authentication request sent by the terminal 104 and the multi-frame face images of the user collected by the terminal 104 according to the request, extracts the face region from each frame of the face image, obtains the illumination intensity of the face region, calculates the pulse feature corresponding to each frame of the face image, establishes a pulse characteristic curve, compares the established pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, performs identity authentication according to the comparison result, and transmits the identity authentication result to the terminal 104.
  • Terminals herein include, but are not limited to, various personal computers, notebook computers, smart phones, tablets, portable wearable devices, and the like.
  • In other embodiments, the terminal 104 obtains the user identity authentication request and can also verify the user identity directly. Specifically, the terminal 104 can directly process the collected multi-frame face images of the user to obtain the user's pulse characteristic curve, and compare it with the user's standard pulse characteristic curve to obtain the identity verification result.
  • A server in one embodiment, as shown in FIG. 2A, includes a processor, a non-volatile storage medium, an internal memory, and a network interface coupled through a system bus.
  • An operating system and computer readable instructions are stored in the non-volatile storage medium, the computer readable instructions implementing a live discriminating method when executed, or the computer readable instructions implementing an identity authentication method when executed.
  • The processor provides computing and control capabilities to support the operation of the entire server.
  • The internal memory provides an operating environment for the living body discriminating device or the identity authentication device in the non-volatile storage medium, and can store computer readable instructions that, when executed by the processor, cause the processor to perform a living body discrimination method or an identity authentication method.
  • The network interface is used for network communication with the terminal, receiving or transmitting data; for example, receiving a face image sent by the terminal and transmitting a living body discrimination result to the terminal, or receiving an identity authentication request and the collected face images sent by the terminal and sending the identity authentication result to the terminal.
  • A terminal in one embodiment, as shown in FIG. 2B, includes a processor, a non-volatile storage medium, an internal memory, a network interface, and a display screen coupled through a system bus.
  • An operating system and computer readable instructions are stored in the non-volatile storage medium; when executed, the computer readable instructions implement a living body discrimination method or an identity authentication method.
  • The processor provides computing and control capabilities to support the operation of the entire terminal.
  • The internal memory is configured to provide an environment for the operation of the living body discriminating device in the non-volatile storage medium; the internal memory can store computer readable instructions that, when executed by the processor, cause the processor to perform a living body discrimination method.
  • the network interface is used for network communication with the server, receiving or transmitting data, for example, receiving a pulse characteristic comparison result sent by the server.
  • the display screen of the terminal may be a liquid crystal display or an electronic ink display screen
  • The input device of the terminal may be a touch layer covering the display screen; a button, trackball or touchpad provided on the terminal housing; or an externally connected keyboard, trackpad or mouse.
  • The terminal can be a personal computer, a mobile terminal or a wearable device, such as a mobile phone, a tablet or a personal digital assistant. It will be understood by those skilled in the art that the structure shown in FIG. 2 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the terminal to which the solution of the present application is applied; a specific terminal may include more or fewer components than shown in FIG. 2, combine some components, or have a different arrangement of components.
  • In one embodiment, a living body discriminating method is provided, which is illustrated as applied to the server shown in FIG. 2A or the terminal shown in FIG. 2B, and includes:
  • Step 302 The terminal collects a multi-frame face image.
  • Specifically, the terminal may record a video containing face images within a preset time, or collect a face image at preset intervals, to obtain the multi-frame face images to be processed.
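The frame-collection step can be sketched in Python; this is illustrative only (the patent prescribes no implementation), and the `sample_frames` helper, the toy frames, and the sampling interval are assumptions:

```python
import numpy as np

def sample_frames(video_frames, interval):
    """Pick one frame every `interval` frames from a captured video,
    simulating 'collect a face image at preset intervals'."""
    return [video_frames[i] for i in range(0, len(video_frames), interval)]

# A toy "video": 30 frames of 4x4 grayscale images.
video = [np.full((4, 4), float(i)) for i in range(30)]
frames = sample_frames(video, interval=5)
print(len(frames))  # 6 frames sampled from 30
```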
  • Step 304 The terminal extracts a face region for each frame of the face image.
  • the terminal extracts a face region from each frame of the face image.
  • Specifically, the terminal may use an image integral map and the Adaboost method to extract the face region from each frame of the face image. The terminal quickly obtains Haar face features by calculating the image integral map, trains classifiers on the training samples with the Adaboost classification algorithm according to the Haar features, and classifies test samples with the resulting final classifier, thereby extracting the face region.
  • The Haar feature is a feature description operator commonly used in the field of computer vision; the Haar feature value reflects the gray-level changes of an image. Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (strong classifier).
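To illustrate why the integral map makes Haar features fast (a generic computer-vision sketch, not the patent's actual detector; the function names are hypothetical), the sum of any rectangle costs four table lookups regardless of rectangle size:

```python
import numpy as np

def integral_image(img):
    # Summed-area table with a zero top row / left column for easy indexing.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    # Sum of img[r:r+h, c:c+w] using four table lookups.
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect(ii, r, c, h, w):
    # A two-rectangle Haar-like feature: left half minus right half.
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 0, 0, 2, 2))      # top-left 2x2 block: 0+1+4+5 = 10.0
print(haar_two_rect(ii, 0, 0, 4, 4))
```

A two-rectangle feature is then just two such sums, which is what lets an Adaboost cascade evaluate many candidate features per window cheaply.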
  • Step 306 The terminal acquires the illumination intensity of the face region, and calculates a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region.
  • Specifically, the face region is first divided into a plurality of face sub-regions. A face registration algorithm is used to obtain face key points, such as the forehead, left eye, right eye, left cheek, and right cheek; according to these key points, the face region is divided into sub-regions such as the forehead, left eye, right eye, left cheek, and right cheek. The illumination intensity of all pixels in each face sub-region is then obtained; if the difference in illumination intensity between any two pixels in a face sub-region exceeds the preset light intensity threshold, that sub-region is divided further, until the difference in illumination intensity between any two pixels in each resulting sub-region does not exceed the preset light intensity threshold. The face region segmentation thereby becomes finer, for example dividing the forehead region into left, middle and right parts, as shown in FIG. 4.
  • In order to reduce the influence of ambient light on face skin color changes, and to accurately reflect the skin color changes caused by blood oxygen saturation and blood volume variations in each face sub-region, the face region is finely divided into a plurality of face sub-regions, so that the illumination intensity at all positions in each sub-region can be approximated as a constant, and that constant is taken as the illumination intensity of the face sub-region.
  • The illumination intensity of each face sub-region and the weight corresponding to each face sub-region are then obtained; the illumination intensities of the face sub-regions are weighted and summed according to their corresponding weights, and the summation result is the pulse feature corresponding to that frame of the face image.
  • Step 308 The terminal establishes a pulse characteristic curve according to a pulse feature corresponding to each frame of the face image.
  • The pulse feature corresponding to a single frame of the face image is a static value, and a single static value cannot distinguish a living body from a non-living body; therefore the pulse features corresponding to all the collected face images are connected into a line to establish the pulse characteristic curve. Some properties of the pulse characteristic curve, such as whether it changes periodically and, if so, its change period and maximum amplitude, are then analyzed to perform the subsequent living body discrimination.
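One plausible way to extract the curve properties mentioned above (change period, maximum amplitude) is a Fourier analysis of the per-frame pulse features; the 30 fps sampling rate and the 1.2 Hz (72 bpm) synthetic signal below are assumptions for illustration:

```python
import numpy as np

# Per-frame pulse features sampled at 30 fps; a living face is assumed to
# yield a roughly periodic signal around a heart rate (here 1.2 Hz = 72 bpm).
fps = 30
t = np.arange(150) / fps  # 5 seconds of frames
curve = 100 + 2.0 * np.sin(2 * np.pi * 1.2 * t)

# Dominant frequency and amplitude from the FFT (DC bin excluded).
spectrum = np.abs(np.fft.rfft(curve - curve.mean()))
freqs = np.fft.rfftfreq(len(curve), d=1 / fps)
peak = np.argmax(spectrum[1:]) + 1
print(round(freqs[peak], 2))                # dominant frequency, Hz
print(round(curve.max() - curve.min(), 1))  # peak-to-peak amplitude
```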
  • Step 310 The terminal compares the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve. If the characteristic value of the pulse characteristic curve differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, it is determined that a living face image has been collected; otherwise, it is determined that a non-living face image has been collected.
  • Non-living objects here are objects without life features, such as a photo on paper.
  • Specifically, a multi-frame non-living image is obtained by recording a video containing the non-living object within a preset time period, or by capturing an image of the non-living object at preset intervals. Since a non-living object has no skin color change caused by variations in blood volume and oxygen saturation, the light intensity at all positions on the non-living object is constant. If the ambient light intensity is constant, the pre-stored standard non-living pulse characteristic curve is therefore a straight line whose pulse characteristic value approximates the ambient light intensity; if the ambient light intensity changes, the pre-stored standard non-living pulse characteristic curve is a curve whose pulse characteristic value follows the ambient light intensity.
  • For a non-living object, the blood oxygen saturation and blood volume are fixed and the corresponding skin color does not change, so the light intensity in the non-living "face" region is constant across adjacent frames and the pre-stored standard non-living pulse characteristic curve is flat and unchanging. If the characteristic value of the pulse characteristic curve obtained in the previous step differs from the characteristic value of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, the obtained pulse characteristic curve is changing, and it is determined that the face images corresponding to the pulse characteristic curve are living face images; conversely, if the pulse characteristic curve is a flat constant, the images are determined to be non-living face images.
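The liveness decision described above can be sketched as follows; the choice of peak-to-peak amplitude and standard deviation as "characteristic values", and the threshold value, are assumptions, since this text leaves them unspecified:

```python
import numpy as np

def feature_values(curve):
    # Characteristic values of a pulse curve: peak-to-peak amplitude and
    # standard deviation (a flat non-living curve has both near zero).
    return np.ptp(curve), np.std(curve)

def is_live(curve, nonliving_curve, feature_threshold=1.0):
    # Liveness holds when the curve's characteristic values differ from the
    # stored non-living baseline by more than the preset feature threshold.
    amp, std = feature_values(curve)
    base_amp, base_std = feature_values(nonliving_curve)
    return bool(abs(amp - base_amp) > feature_threshold
                or abs(std - base_std) > feature_threshold)

ambient = 100.0
nonliving = np.full(150, ambient)  # constant ambient light: a photo
t = np.arange(150) / 30
living = ambient + 2.0 * np.sin(2 * np.pi * 1.2 * t)  # periodic skin-color change

print(is_live(living, nonliving))     # True
print(is_live(nonliving, nonliving))  # False
```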
  • In this embodiment, the pulse features corresponding to the face images are calculated from the illumination intensity of the face region, and the living body determination is realized by comparing the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve; no additional equipment is required, which saves hardware cost, and no user interaction is required to complete the determination, which improves the detection rate of living body discrimination.
  • the foregoing step 306 includes:
  • Step 316 The terminal divides the face area to obtain a face sub-area.
  • Specifically, a region segmentation algorithm can be used to segment the face region. Some face key points, such as points on the forehead, left eye, right eye, left cheek, and right cheek, are usually obtained by a face registration algorithm; according to these key points, the face region is divided into sub-regions such as the forehead, left eye, right eye, left cheek, and right cheek.
  • Then the terminal applies the region segmentation algorithm to the face region and acquires the illumination intensity of all pixels in each face sub-region. If the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, the face sub-region is segmented further, until the difference in illumination intensity of any two pixels in each resulting face sub-region does not exceed the preset light intensity threshold.
  • The terminal controls the degree of subdivision of the face sub-regions by determining whether the difference in illumination intensity of any two pixels in a sub-region is within the preset light intensity threshold, so that the illumination intensity at all positions in any face sub-region can be approximated as a constant; this reduces the effect of light on skin color changes, so that the skin color changes caused by blood flow are accurately reflected.
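The refine-until-uniform rule can be sketched with a recursive quad-split (an assumption: this text only requires that splitting continue until every sub-region's pixel-intensity range is within the threshold, not any particular split shape):

```python
import numpy as np

def split_until_uniform(region, threshold):
    """Recursively quarter a sub-region until the illumination range inside
    each piece (max pixel minus min pixel) is within the preset threshold."""
    if np.ptp(region) <= threshold or min(region.shape) < 2:
        return [region]
    h, w = region.shape[0] // 2, region.shape[1] // 2
    quarters = [region[:h, :w], region[:h, w:], region[h:, :w], region[h:, w:]]
    out = []
    for q in quarters:
        out.extend(split_until_uniform(q, threshold))
    return out

# A toy "forehead" patch with a horizontal illumination gradient.
patch = np.tile(np.linspace(100, 112, 8), (8, 1))
parts = split_until_uniform(patch, threshold=4.0)
print(len(parts))
print(all(np.ptp(p) <= 4.0 for p in parts))  # every piece is near-uniform
```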
  • Step 336 The terminal acquires the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculates the pulse feature corresponding to each frame of the face image according to the illumination intensity and the weight corresponding to each face sub-region.
  • Generally, skin color changes differ across face regions: in regions where blood vessels are concentrated, the skin color change is relatively obvious and the weight corresponding to the region is relatively large; conversely, the weight corresponding to the region is relatively small.
  • Specifically, the acquired illumination intensities corresponding to the face sub-regions are weighted and summed according to the weights corresponding to the sub-regions, yielding the pulse feature of each frame of the face image. The pulse feature can be calculated according to a formula of the form s = Σ (i = 1 to n) G_i · I_i · L_i, where n is the total number of sub-regions, L_i is the illumination intensity of region i, and G_i is the weight corresponding to region i. The function I_i is an indicator function: within a certain time range, when the difference between the maximum intensity and the minimum intensity in region i exceeds a certain threshold, I_i is 0, so the region is ignored and does not participate in the calculation of the pulse feature.
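The weighted summation with the indicator function can be sketched in Python; the symbol names and the per-region intensity history below are illustrative assumptions:

```python
def pulse_feature(intensities, weights, intensity_history, ignore_threshold):
    """Weighted sum of sub-region illumination intensities; a sub-region whose
    intensity range over the recent time window exceeds `ignore_threshold`
    is assumed to be dominated by ambient light and is skipped (I_i = 0)."""
    feature = 0.0
    for i, (l, g) in enumerate(zip(intensities, weights)):
        history = intensity_history[i]
        if max(history) - min(history) > ignore_threshold:
            continue  # indicator function excludes this region
        feature += g * l
    return feature

# Three sub-regions, e.g. forehead, left cheek, right cheek (weights sum to 1).
intensities = [101.0, 99.0, 150.0]
weights = [0.5, 0.3, 0.2]
history = [[100, 101, 102], [99, 99, 100], [90, 150, 200]]  # region 2 is unstable
feat = pulse_feature(intensities, weights, history, ignore_threshold=10.0)
print(feat)  # only the two stable regions contribute: 0.5*101 + 0.3*99
```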
  • In this embodiment, the face sub-regions are obtained by segmenting the face region, and the pulse feature corresponding to each frame of the face image is calculated according to the illumination intensity and the weight corresponding to each face sub-region; the weighted summation yields an estimate of the pulse feature and improves the accuracy of the pulse feature calculation.
  • an identity authentication method is provided, which is applied to a server as shown in FIG. 1, and the method includes:
  • Step 602 The server receives a user identity authentication request sent by the terminal, where the user identity authentication request carries the user identifier.
  • The user identity authentication request is a request for verifying identity sent to the server by the terminal where the user is located. The user identifier identifies each user uniquely and may be any of the user's ID number, instant messaging number, social account, email address, or mobile number.
  • For example, the user places an ID card in the designated scannable area; the user's terminal scans the ID card to obtain the user identifier, i.e., the ID number, and after successful acquisition sends an identity authentication request to the server, so that the server can subsequently find the standard pulse characteristic curve corresponding to the user identifier in the database.
  • Step 604 The server acquires a multi-frame face image of the user collected by the terminal according to the user identity authentication request.
  • Specifically, the server may compare the user identifier carried in the user identity authentication request with the user identifiers pre-stored in the server. If they match, the server holds user information corresponding to the user identifier, such as the user's standard pulse characteristic curve used later, and then acquires the multi-frame face images of the user collected by the terminal.
  • The terminal may collect the multi-frame face images of the user by recording a video containing face images within a preset time, or by collecting a face image at preset intervals to obtain the multi-frame face images.
  • the server extracts a face region for each frame of the face image.
  • the server needs to extract the face region from each frame of the face image.
  • the server may use an image integration map and an Adaboost method to extract a face region for each frame of the face image.
  • The server quickly obtains Haar face features by calculating the image integral map, trains classifiers on the training samples with the Adaboost classification algorithm according to the Haar features, and classifies test samples with the resulting final classifier, thereby extracting the face region.
  • The Haar feature is a feature description operator commonly used in the field of computer vision; the Haar feature value reflects the gray-level changes of an image. Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (strong classifier).
  • Step 608 The server acquires the illumination intensity of the face region, and calculates a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region.
  • Specifically, the server first divides the face region into a plurality of face sub-regions: a face registration algorithm is used to obtain face key points, such as the forehead, left eye, right eye, left cheek, and right cheek, and according to these key points the face region is divided into sub-regions such as the forehead, left eye, right eye, left cheek, and right cheek. The illumination intensity of all pixels in each face sub-region is obtained; if the difference in illumination intensity between any two pixels exceeds the preset light intensity threshold, the sub-region is divided further, until the difference in each resulting sub-region does not exceed the preset light intensity threshold, making the face region segmentation finer, for example dividing the forehead region into left, middle and right parts, as shown in FIG. 4.
  • The face region is finely divided into a plurality of face sub-regions so that the illumination intensity at all positions in each face sub-region can be approximated as a constant, and that constant is taken as the illumination intensity of the face sub-region.
  • Then the server obtains the illumination intensity of each face sub-region and the weight corresponding to each face sub-region, weights and sums the illumination intensities of the face sub-regions according to their corresponding weights, and the summation result is the pulse feature corresponding to the face image.
  • If, within a certain time range, the difference between the maximum intensity and the minimum intensity in a face sub-region exceeds a certain threshold, the region is greatly affected by ambient light; the region is then ignored, that is, it does not participate in the pulse feature calculation.
  • Step 610 The server establishes a pulse characteristic curve according to a pulse feature corresponding to each frame of the face image.
  • The pulse feature corresponding to a single frame of the face image is a static value, and it is impossible to distinguish a real person from a picture by looking at a single static value; therefore the pulse features corresponding to all the collected face images are connected into a line to establish the pulse characteristic curve. By analyzing some properties of the pulse characteristic curve, such as whether it changes periodically and, if so, its change period and maximum amplitude, the subsequent living body discrimination is performed.
  • Step 612 The server compares the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier. If the difference between the characteristic value of the pulse characteristic curve and the characteristic value of the standard pulse characteristic curve is within a preset range, the user identity authentication passes; otherwise, the user identity authentication fails.
  • The standard pulse characteristic curve corresponding to the user identifier is pre-stored in the database: the multi-frame face images of the user may be pre-acquired, the face region extracted from each frame, the illumination intensity of the face region obtained, the pulse feature corresponding to each frame calculated from that illumination intensity, and a pulse characteristic curve established from the per-frame pulse features; this curve is the user's standard pulse characteristic curve and is stored in the database in correspondence with the user identifier, to be compared when authentication is required.
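Enrollment of a standard pulse characteristic curve might look as follows; the in-memory dictionary standing in for the database, the choice of stored characteristic values, and the synthetic curve are all assumptions for illustration:

```python
import numpy as np

# Hypothetical in-memory "database" mapping user identifier -> characteristic
# values (change period, maximum amplitude) of the enrolled standard curve.
standard_curves = {}

def enroll(user_id, pulse_curve, fps=30):
    # Estimate the dominant frequency of the curve via the FFT (DC removed).
    spectrum = np.abs(np.fft.rfft(pulse_curve - np.mean(pulse_curve)))
    freqs = np.fft.rfftfreq(len(pulse_curve), d=1 / fps)
    peak = np.argmax(spectrum[1:]) + 1
    standard_curves[user_id] = {
        "period": 1.0 / freqs[peak],              # change period in seconds
        "amplitude": float(np.ptp(pulse_curve)),  # peak-to-peak amplitude
    }

t = np.arange(150) / 30
enroll("id_123456", 100 + 2.0 * np.sin(2 * np.pi * 1.2 * t))
print(standard_curves["id_123456"]["period"])
```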
  • Specifically, the server compares the pulse characteristic curve obtained during living body discrimination with the standard pulse characteristic curve corresponding to the pre-stored user identifier. As long as the difference between their characteristic values (such as the change period and the maximum amplitude) is within the acceptable range, the face images collected by the terminal are living face images, and the user corresponding to those face images is the same person as the user corresponding to the user identifier in the server database, that is, the identity verification passes.
  • In this embodiment, the pulse characteristic curve is compared with the pre-stored standard pulse characteristic curve corresponding to the user identifier; the user does not need to perform additional interactions and no additional instrument is required, which improves the efficiency of identity authentication and saves hardware cost.
  • Before the server compares the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier, the method further includes: the server compares the pulse characteristic curve with a pre-stored standard non-living pulse characteristic curve; if the characteristic value of the pulse characteristic curve differs from that of the pre-stored standard non-living pulse characteristic curve by more than a threshold value, the step of comparing the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier is performed.
  • Identity authentication proceeds only after living-body discrimination has excluded the possibility of image impersonation, and is then performed by comparing the currently collected pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier. This eliminates the possibility of impersonation, so the identity authentication method is highly secure.
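The comparison in step 612 can be illustrated with a short sketch. This is a hypothetical Python illustration, not the disclosed implementation: the feature names (`period_s`, `max_amplitude`), the `features_match` helper, and the tolerance values are all assumptions standing in for the "characteristic values within a preset range" test.

```python
# Hypothetical sketch of step 612: authentication passes when every feature of
# the measured pulse curve lies within a preset range of the stored standard
# curve's feature. All names and tolerances here are illustrative assumptions.

def features_match(measured: dict, standard: dict, tolerances: dict) -> bool:
    """Return True when every listed feature differs by at most its tolerance."""
    for name, tol in tolerances.items():
        if abs(measured[name] - standard[name]) > tol:
            return False
    return True

measured = {"period_s": 0.82, "max_amplitude": 11.5}
standard = {"period_s": 0.85, "max_amplitude": 12.0}
tolerances = {"period_s": 0.15, "max_amplitude": 2.0}
print(features_match(measured, standard, tolerances))  # True: within range
```

A mismatch in any single feature (for example, a change period off by more than the tolerance) would cause the check, and hence the authentication, to fail.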
  • the foregoing step 608 includes:
  • Step 618: The server segments the face region to obtain face sub-regions.
  • A segmentation algorithm may be used to segment the face region.
  • Face key points, such as the forehead, the left eye, the right eye, the left cheek, and the right cheek, are usually obtained by a face registration algorithm. The face region is then divided according to these key points, yielding sub-regions such as the forehead, the left eye, the right eye, the left side of the cheek, and the right side of the cheek.
  • The illumination intensity of all pixels in a face sub-region is acquired. If the difference between the illumination intensities of any two pixels exceeds a preset light intensity threshold, segmentation of the face sub-region continues, until the difference in illumination intensity between any two pixels in each resulting face sub-region does not exceed the preset light intensity threshold.
  • Determining whether the difference in illumination intensity between any two pixels in a face sub-region is within the preset light intensity threshold controls the degree of subdivision, so that the illumination intensity at all positions within any one face sub-region can be approximated as a constant. This reduces the effect of lighting on skin color changes, so that the skin color changes caused by blood flow are accurately reflected.
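The subdivision criterion above (keep splitting until the intensity spread within every sub-region is below the preset light intensity threshold) can be sketched as a recursive split. This is an illustrative Python sketch under assumed inputs; a real implementation would split along face key points rather than the fixed quadtree used here.

```python
import numpy as np

def subdivide(region, threshold):
    """Recursively quadtree-split a 2-D intensity array until each sub-region's
    intensity spread (max - min) is within the threshold. Splitting stops once
    a region becomes a single pixel along either axis."""
    h, w = region.shape
    if region.max() - region.min() <= threshold or min(h, w) < 2:
        return [region]
    parts = []
    for rows in (region[: h // 2], region[h // 2 :]):
        for block in (rows[:, : w // 2], rows[:, w // 2 :]):
            parts.extend(subdivide(block, threshold))
    return parts

# assumed toy "face region": two halves with very different light levels
face = np.array([[10, 10, 50, 50],
                 [10, 10, 50, 50],
                 [12, 12, 52, 52],
                 [12, 12, 52, 52]], dtype=float)
parts = subdivide(face, threshold=5.0)
print(len(parts))  # 4 quadrants, each with spread <= 5
```

After one split, each quadrant's spread is within the threshold, so its intensity can be treated as approximately constant, as the description requires.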
  • Step 638: The server acquires the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculates the pulse feature corresponding to each frame of the face image according to the illumination intensity and the weight corresponding to each face sub-region.
  • The server performs a weighted summation of the acquired illumination intensities of the face sub-regions according to their corresponding weights, obtaining the pulse feature corresponding to each frame of the face image.
  • The server obtains face sub-regions by segmenting the face region, and then calculates the pulse feature corresponding to each frame of the face image from the illumination intensity and the weight corresponding to each face sub-region. In this way, the estimated pulse feature is obtained by weighted summation, which improves the accuracy of the pulse feature calculation.
  • Another identity authentication method includes the following steps:
  • Step 802 The server receives a user identity authentication request sent by the terminal, where the user identity authentication request carries the user identifier.
  • The user identity authentication request is a request for identity verification sent to the server by the terminal where the user is located; the user identifier is a unique credential that distinguishes each user, such as the user's phone number, social account, or email address.
  • Step 804 The server acquires a multi-frame face image of the user collected by the terminal according to the user identity authentication request.
  • This embodiment performs identity authentication based on face images. Living-body discrimination must be performed first, and it requires feature analysis over multiple images, so multiple frames of face images are collected first.
  • Step 806 The server extracts a face region for each frame of the face image.
  • The server needs to extract the face region from each frame of the face image. The server may use an image integral map together with the Adaboost method to extract the face region from each frame: Haar facial features are computed quickly from the image integral map, training samples are classified with the Adaboost classification algorithm, and the resulting final classifier is applied to test samples, thereby extracting the face region.
  • The Haar feature is a commonly used feature descriptor in the field of computer vision; a Haar eigenvalue reflects the gray-level variation of an image. Adaboost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (strong classifier).
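The integral-image (summed-area table) trick mentioned above, which lets the sum of any rectangle, and hence any Haar feature, be computed from a handful of lookups, can be sketched as follows. The helper names are illustrative; only the summed-area construction itself is the standard technique.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, top, left, h, w):
    """Sum of any rectangle in O(1) using four corner lookups, which is what
    makes Haar feature evaluation fast regardless of rectangle size."""
    return int(ii[top + h, left + w] - ii[top, left + w]
               - ii[top + h, left] + ii[top, left])

img = np.arange(16).reshape(4, 4)   # toy 4x4 "image"
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))     # sum of img[1:3, 1:3] = 5+6+9+10 = 30
```

A Haar feature is then just a signed combination of such rectangle sums (for example, a light rectangle minus an adjacent dark one), so each feature costs only a few lookups.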
  • Step 808: The server segments the face region to obtain face sub-regions.
  • A segmentation algorithm may be used to segment the face region.
  • Face key points, such as the forehead, the left eye, the right eye, the left cheek, and the right cheek, are usually obtained by a face registration algorithm. The face region is then divided according to these key points, yielding sub-regions such as the forehead, the left eye, the right eye, the left side of the cheek, and the right side of the cheek.
  • Step 810: The server acquires the illumination intensity of all pixels in each face sub-region. If the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, segmentation of the face sub-region continues; otherwise, segmentation of the face sub-region stops.
  • This controls the degree of subdivision of the face sub-regions, so that the illumination intensity at all positions within any one face sub-region can be approximated as a constant. This reduces the effect of lighting on skin color changes, so that the skin color changes caused by blood flow are accurately reflected.
  • Step 812 The server obtains the light intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculates a pulse feature corresponding to each frame face image according to the illumination intensity corresponding to the face sub-region and the weight corresponding to the face sub-region.
  • Skin color changes differ across regions of the face: where the skin color change is relatively obvious, the corresponding weight is relatively large; conversely, the corresponding weight is relatively small.
  • The acquired illumination intensity of each face sub-region is weighted and summed according to the weight corresponding to that face sub-region, yielding the pulse feature corresponding to each frame of the face image. The specific calculation formula takes the form:
  • F = Σ_{i=1}^{n} G_i · I_i · L_i
  • where n is the total number of regions, L_i is the illumination intensity of region i, and G_i is the weight corresponding to each region.
  • The function I is an indicator function: within a certain time range, when the difference between the maximum and minimum intensity in region i exceeds a certain threshold, I_i is 0, so that region is ignored and does not participate in the calculation of the pulse feature.
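The weighted summation with the indicator function can be sketched as below. This is a hypothetical Python reading of the calculation: `spread_threshold` and the per-region sample data are assumed stand-ins for the patent's threshold and measured intensities.

```python
def pulse_feature(intensities, weights, spread_threshold):
    """Weighted sum of per-region mean light intensity; the indicator drops any
    region whose intensity spread (max - min) exceeds the threshold."""
    total = 0.0
    for region, weight in zip(intensities, weights):
        if max(region) - min(region) > spread_threshold:
            continue  # indicator I = 0: unstable region ignored
        total += weight * (sum(region) / len(region))
    return total

# assumed per-region intensity samples; the second region is unstable
regions = [[100.0, 101.0], [90.0, 130.0], [80.0, 81.0]]
weights = [0.5, 0.3, 0.2]
print(round(pulse_feature(regions, weights, spread_threshold=10.0), 2))  # 66.35
```

The second region's spread (40) exceeds the threshold, so it contributes nothing; the result is 0.5 × 100.5 + 0.2 × 80.5 = 66.35.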
  • Step 814 The server establishes a pulse characteristic curve according to a pulse feature corresponding to each frame of the face image.
  • The pulse feature corresponding to a single frame of the face image is a static value, and a static value cannot distinguish a real person from a picture. Therefore, the pulse features corresponding to all the collected face images are connected into a line to establish the pulse characteristic curve. Properties of this curve, such as whether it changes periodically and, if so, its change period and maximum amplitude, are then analyzed for the subsequent living-body discrimination.
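Extracting a change period and maximum amplitude from a sampled pulse characteristic curve can be sketched with a Fourier-based estimate. This is an illustrative approach, not the method specified by the patent; the FFT peak picking and the synthetic 1.25 Hz test signal are assumptions.

```python
import numpy as np

def curve_features(samples, dt):
    """Estimate the dominant change period (via the FFT magnitude peak) and the
    maximum amplitude of a mean-removed pulse characteristic curve."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()                       # remove the static (DC) level
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=dt)
    dominant = freqs[1:][np.argmax(spectrum[1:])]  # skip the DC bin
    return 1.0 / dominant, float(x.max())

# synthetic curve: 1.25 Hz oscillation sampled at 20 Hz for 4 s
t = np.arange(0, 4, 0.05)
samples = 10 + 2 * np.sin(2 * np.pi * 1.25 * t)
period, amplitude = curve_features(samples, dt=0.05)
print(round(period, 2), round(amplitude, 2))  # 0.8 2.0
```

A periodic curve yields a clear spectral peak (here at 1.25 Hz, i.e. a 0.8 s period), while a flat non-living curve has no dominant oscillation to extract.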
  • Step 816: The server compares the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve. If the characteristic value of the pulse characteristic curve differs from that of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, it is determined that a living face image has been collected.
  • Non-living objects here are objects without life features, such as sheets of paper.
  • Multiple frames of non-living images are obtained by collecting a video containing a non-living body within a preset time period. Since a non-living body exhibits no skin color change caused by changes in blood volume and oxygen saturation, the light intensity at all positions of the non-living region is constant. If the ambient light intensity is constant, the pre-stored standard non-living pulse characteristic curve is therefore a straight line whose pulse characteristic value is close to the ambient light intensity; if the ambient light intensity changes, the pre-stored standard non-living pulse characteristic curve is a curve whose pulse characteristic value approximates the ambient light intensity.
  • For a non-living body, the oxygen saturation and blood volume in the blood vessels are fixed and the corresponding skin color does not change, so the light intensity of the non-living face region is constant across adjacent frames, and the pre-stored standard non-living pulse characteristic curve shows no change. If the characteristic value of the pulse characteristic curve obtained in the previous step differs from that of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, that is, if the pulse characteristic curve obtained in the previous step varies, the face image corresponding to that pulse characteristic curve is a living face image.
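The liveness decision described above (a non-living curve stays effectively constant near the ambient light level, while a live curve varies beyond the preset feature threshold) might be sketched as follows. The reference value, threshold, and sample data are illustrative assumptions.

```python
def is_live(curve_values, flat_reference_value, feature_threshold):
    """Liveness sketch: the stored non-living 'curve' is effectively constant,
    so a live face is declared when the measured curve's excursion from that
    constant exceeds the preset feature threshold."""
    max_diff = max(abs(v - flat_reference_value) for v in curve_values)
    return max_diff > feature_threshold

ambient = 100.0                      # flat non-living reference near ambient light
paper = [100.1, 99.9, 100.0, 100.2]  # printed photo: nearly constant curve
face = [100.0, 104.0, 97.0, 103.5]   # live skin: blood-flow-driven excursions
print(is_live(paper, ambient, feature_threshold=2.0))  # False
print(is_live(face, ambient, feature_threshold=2.0))   # True
```

In practice the comparison could also use derived feature values (change period, maximum amplitude) rather than raw curve samples, as the description suggests.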
  • Step 818: The server compares the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier. If the difference between the characteristic values of the two curves is within a preset range, the user identity authentication passes; otherwise, the user identity authentication fails.
  • The standard pulse characteristic curve corresponding to the user identifier is pre-stored in the database. The pulse characteristic curve obtained during living-body discrimination is compared with it; as long as the difference between their characteristic values (such as the change period and the maximum amplitude) is within an acceptable range, the user corresponding to the collected image is the same person as the user corresponding to the user identifier in the database, that is, the authentication passes.
  • the database here can be an online database or a local database.
  • Identity authentication proceeds only after living-body discrimination has excluded the possibility of image impersonation, and is then performed by comparing the currently collected pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier. This eliminates the possibility of impersonation, so the identity authentication method is highly secure.
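The overall two-stage flow (living-body discrimination first, then comparison against the user's stored standard curve) might be sketched as follows. The feature names, thresholds, and return strings are illustrative assumptions, not the disclosed implementation.

```python
def authenticate(measured, stored_standard, nonliving_threshold, match_range):
    """Two-stage sketch: first liveness (the curve must differ from a flat
    non-living curve, i.e. have non-trivial amplitude), then identity (its
    features must fall within a preset range of the user's stored curve)."""
    # Stage 1: liveness -- a non-living curve has (near-)zero amplitude.
    if measured["max_amplitude"] <= nonliving_threshold:
        return "rejected: non-living face image"
    # Stage 2: identity -- compare against the pre-stored standard curve.
    for name, tol in match_range.items():
        if abs(measured[name] - stored_standard[name]) > tol:
            return "rejected: identity mismatch"
    return "authenticated"

measured = {"period_s": 0.8, "max_amplitude": 3.2}
standard = {"period_s": 0.85, "max_amplitude": 3.0}
print(authenticate(measured, standard,
                   nonliving_threshold=0.5,
                   match_range={"period_s": 0.1, "max_amplitude": 1.0}))
```

Ordering the stages this way mirrors the security argument above: a flat curve (a photo) is rejected before any identity comparison takes place.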
  • a terminal is also provided, the internal structure of which can be as shown in FIG. 2B, and each of the following modules can be implemented in whole or in part by software, hardware or a combination thereof.
  • the terminal includes a living body discriminating device, and the device includes:
  • the face image collecting module 902 is configured to collect a multi-frame face image.
  • the face region extraction module 904 is configured to extract a face region for each frame of the face image.
  • the pulse feature calculation module 906 is configured to acquire the illumination intensity of the face region, and calculate a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region.
  • the pulse characteristic curve establishing module 908 is configured to establish a pulse characteristic curve according to a pulse feature corresponding to each frame of the face image.
  • the living body discriminating module 910 is configured to compare the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve. If the characteristic value of the pulse characteristic curve differs from that of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, it is determined that a living face image is collected; otherwise, it is determined that a non-living face image is collected.
  • the pulse feature calculation module 906 is configured to segment the face region to obtain face sub-regions; and to acquire the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculate the pulse feature corresponding to each frame of the face image according to the illumination intensity and the weight corresponding to each face sub-region.
  • the pulse feature calculation module 906 is configured to acquire the illumination intensity of all pixels in each face sub-region. If the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, segmentation of the face sub-region continues; otherwise, segmentation of the face sub-region stops.
  • a server is provided, the internal structure of which may be as shown in FIG. 2A, and each of the following modules may be implemented in whole or in part by software, hardware or a combination thereof.
  • the server includes an identity authentication device, the device comprising:
  • the identity authentication request receiving module 1002 is configured to receive a user identity authentication request sent by the terminal, the user identity authentication request carrying the user identifier.
  • the face image collecting module 1004 is configured to receive a multi-frame face image of the user collected by the terminal according to the user identity authentication request.
  • the face region extraction module 1006 is configured to extract a face region for each frame of the face image.
  • the pulse feature calculation module 1008 is configured to acquire the illumination intensity of the face region, and calculate a pulse feature corresponding to the face image of each frame according to the illumination intensity of the face region.
  • the pulse characteristic curve establishing module 1010 is configured to establish a pulse characteristic curve according to a pulse feature corresponding to each frame of the face image.
  • the identity authentication module 1012 is configured to compare the pulse characteristic curve with the pre-stored standard pulse characteristic curve corresponding to the user identifier. If the difference between the characteristic values of the two curves is within a preset range, the user identity authentication passes; otherwise, the user identity authentication fails.
  • the pulse feature calculation module 1008 is configured to segment the face region to obtain face sub-regions; and to acquire the illumination intensity corresponding to each face sub-region and the weight corresponding to each face sub-region, and calculate the pulse feature corresponding to each frame of the face image according to the illumination intensity and the weight corresponding to each face sub-region.
  • the identity authentication device in the server further includes a living body discriminating module 1011, configured to compare the pulse characteristic curve with the pre-stored standard non-living pulse characteristic curve; if the characteristic value of the pulse characteristic curve differs from that of the pre-stored standard non-living pulse characteristic curve by more than the preset feature threshold, it is determined that a living face image is collected.
  • the pulse feature calculation module 1008 is configured to acquire the illumination intensity of all pixels in each face sub-region. If the difference between the illumination intensities of any two pixels exceeds the preset light intensity threshold, segmentation of the face sub-region continues until the difference in illumination intensity between any two pixels in each resulting face sub-region does not exceed the preset light intensity threshold.
  • the storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention relates to a living-body discrimination method, comprising: collecting multiple frames of face images (302); extracting a face region from each frame of face image (304); obtaining the illumination intensity of the face region, and calculating a pulse feature corresponding to each frame of face image according to the illumination intensity of the face region (306); establishing a pulse characteristic curve according to the pulse feature corresponding to each frame of face image (308); and comparing the pulse characteristic curve with a pre-stored standard non-living pulse characteristic curve, and if the difference between a characteristic value of the pulse characteristic curve and a characteristic value of the pre-stored standard non-living pulse characteristic curve exceeds a preset feature threshold, determining that a living face image is collected; otherwise, determining that a non-living face image is collected.
PCT/CN2017/109989 2016-11-10 2017-11-08 Procédé d'identification de corps vivant, procédé d'authentification d'identité, terminal, serveur et support d'information WO2018086543A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610992121.8A CN106570489A (zh) 2016-11-10 2016-11-10 活体判别方法和装置、身份认证方法和装置
CN201610992121.8 2016-11-10

Publications (1)

Publication Number Publication Date
WO2018086543A1 true WO2018086543A1 (fr) 2018-05-17

Family

ID=58541303

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/109989 WO2018086543A1 (fr) 2016-11-10 2017-11-08 Procédé d'identification de corps vivant, procédé d'authentification d'identité, terminal, serveur et support d'information

Country Status (2)

Country Link
CN (1) CN106570489A (fr)
WO (1) WO2018086543A1 (fr)


Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106570489A (zh) * 2016-11-10 2017-04-19 腾讯科技(深圳)有限公司 活体判别方法和装置、身份认证方法和装置
CN107038428B (zh) * 2017-04-28 2020-04-07 北京小米移动软件有限公司 活体识别方法和装置
CN107392606B (zh) * 2017-06-28 2020-08-04 阿里巴巴集团控股有限公司 账户管理方法及装置
CN107506713A (zh) * 2017-08-15 2017-12-22 哈尔滨工业大学深圳研究生院 活体人脸检测方法及存储设备
CN108875333B (zh) * 2017-09-22 2023-05-16 北京旷视科技有限公司 终端解锁方法、终端和计算机可读存储介质
CN108197279B (zh) * 2018-01-09 2020-08-07 北京旷视科技有限公司 攻击数据生成方法、装置、系统及计算机可读存储介质
CN110141246A (zh) * 2018-02-10 2019-08-20 上海聚虹光电科技有限公司 基于肤色变化的活体检测方法
CN110473311B (zh) * 2018-05-09 2021-07-23 杭州海康威视数字技术股份有限公司 防范非法攻击方法、装置及电子设备
CN109446981B (zh) * 2018-10-25 2023-03-24 腾讯科技(深圳)有限公司 一种脸部活体检测、身份认证方法及装置
CN109858375B (zh) * 2018-12-29 2023-09-26 简图创智(深圳)科技有限公司 活体人脸检测方法、终端及计算机可读存储介质
CN109766849B (zh) * 2019-01-15 2023-06-20 深圳市凯广荣科技发展有限公司 一种活体检测方法、检测装置及自助终端设备
CN110335216B (zh) * 2019-07-09 2021-11-30 Oppo广东移动通信有限公司 图像处理方法、图像处理装置、终端设备及可读存储介质
CN111464519B (zh) * 2020-03-26 2023-06-20 支付宝(杭州)信息技术有限公司 基于语音交互的账号注册的方法和系统
CN111523438B (zh) * 2020-04-20 2024-02-23 支付宝实验室(新加坡)有限公司 一种活体识别方法、终端设备和电子设备
CN111931153B (zh) * 2020-10-16 2021-02-19 腾讯科技(深圳)有限公司 基于人工智能的身份验证方法、装置和计算机设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102113883A (zh) * 2010-01-05 2011-07-06 精工爱普生株式会社 活体信息检测器和活体信息测定装置
US20110190646A1 (en) * 2010-02-01 2011-08-04 Seiko Epson Corporation Biological information measuring device
CN103761465A (zh) * 2014-02-14 2014-04-30 上海云亨科技有限公司 一种身份验证的方法及装置
CN106570489A (zh) * 2016-11-10 2017-04-19 腾讯科技(深圳)有限公司 活体判别方法和装置、身份认证方法和装置

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101369315A (zh) * 2007-08-17 2009-02-18 上海银晨智能识别科技有限公司 人脸检测方法
JP5780053B2 (ja) * 2011-08-22 2015-09-16 富士通株式会社 生体認証装置、生体認証方法、及びプログラム
US9734418B2 (en) * 2014-01-17 2017-08-15 Htc Corporation Methods for identity authentication and handheld electronic devices utilizing the same
CN105844206A (zh) * 2015-01-15 2016-08-10 北京市商汤科技开发有限公司 身份认证方法及设备


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108470169A (zh) * 2018-05-23 2018-08-31 国政通科技股份有限公司 人脸识别系统及方法
CN111666786A (zh) * 2019-03-06 2020-09-15 杭州海康威视数字技术股份有限公司 图像处理方法、装置、电子设备及存储介质
CN111666786B (zh) * 2019-03-06 2024-05-03 杭州海康威视数字技术股份有限公司 图像处理方法、装置、电子设备及存储介质
CN112784661A (zh) * 2019-11-01 2021-05-11 宏碁股份有限公司 真实人脸的识别方法与真实人脸的识别装置
CN112784661B (zh) * 2019-11-01 2024-01-19 宏碁股份有限公司 真实人脸的识别方法与真实人脸的识别装置
CN112016482A (zh) * 2020-08-31 2020-12-01 成都新潮传媒集团有限公司 一种判别虚假人脸的方法、装置及计算机设备
US11443527B2 (en) 2021-01-13 2022-09-13 Ford Global Technologies, Llc Material spectroscopy
US11657589B2 (en) 2021-01-13 2023-05-23 Ford Global Technologies, Llc Material spectroscopy
US11741747B2 (en) 2021-01-13 2023-08-29 Ford Global Technologies, Llc Material spectroscopy
CN116389647A (zh) * 2023-06-02 2023-07-04 深圳市尚哲医健科技有限责任公司 急诊急救一体化平台
CN116389647B (zh) * 2023-06-02 2023-08-08 深圳市尚哲医健科技有限责任公司 急诊急救一体化平台

Also Published As

Publication number Publication date
CN106570489A (zh) 2017-04-19

Similar Documents

Publication Publication Date Title
WO2018086543A1 (fr) Procédé d'identification de corps vivant, procédé d'authentification d'identité, terminal, serveur et support d'information
US10664581B2 (en) Biometric-based authentication method, apparatus and system
CN108985134B (zh) 基于双目摄像机的人脸活体检测及刷脸交易方法及系统
US20190034702A1 (en) Living body detecting method and apparatus, device and storage medium
Chakraborty et al. An overview of face liveness detection
Marciniak et al. Influence of low resolution of images on reliability of face detection and recognition
US20180034852A1 (en) Anti-spoofing system and methods useful in conjunction therewith
WO2020034733A1 (fr) Procédé et appareil d'authentification d'identité, dispositif électronique, et support de stockage
CN106778450B (zh) 一种面部识别方法和装置
WO2019011165A1 (fr) Procédé et appareil de reconnaissance faciale, dispositif électronique et support de stockage
CN109410026A (zh) 基于人脸识别的身份认证方法、装置、设备和存储介质
WO2018082011A1 (fr) Procédé et dispositif de reconnaissance d'empreintes digitales vivantes
US11367310B2 (en) Method and apparatus for identity verification, electronic device, computer program, and storage medium
CN104143086A (zh) 人像比对在移动终端操作系统上的应用技术
Tiwari et al. A touch-less fingerphoto recognition system for mobile hand-held devices
US11670069B2 (en) System and method for face spoofing attack detection
US20190286927A1 (en) Methods and systems for detecting user liveness
CN109657627A (zh) 身份验证方法、装置及电子设备
KR101455666B1 (ko) 인증 장치 및 그 인증 방법
Rahouma et al. Design and implementation of a face recognition system based on API mobile vision and normalized features of still images
WO2020007191A1 (fr) Procédé et appareil de reconnaissance et de détection de corps vivant, support et dispositif électronique
Wojciechowska et al. The overview of trends and challenges in mobile biometrics
CN108596127B (zh) 一种指纹识别方法、身份验证方法及装置和身份核验机
Hassan et al. Facial image detection based on the Viola-Jones algorithm for gender recognition
US11842573B1 (en) Methods and systems for enhancing liveness detection of image data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17870097

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17870097

Country of ref document: EP

Kind code of ref document: A1