WO2022099989A1 - Living body recognition and access control device control methods, apparatus, electronic device, storage medium, and computer program - Google Patents


Info

Publication number
WO2022099989A1
WO2022099989A1 (application PCT/CN2021/086219, CN2021086219W)
Authority
WO
WIPO (PCT)
Prior art keywords
living body
image
sample image
recognition
face
Prior art date
Application number
PCT/CN2021/086219
Other languages
English (en)
Chinese (zh)
Inventor
滕家宁
黄耿石
邵婧
Original Assignee
深圳市商汤科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市商汤科技有限公司
Publication of WO2022099989A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40: Spoof detection, e.g. liveness detection
    • G06V40/45: Detection of the body part being alive
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/243: Classification techniques relating to the number of classes
    • G06F18/2431: Multiple classes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00: Machine learning
    • G06N20/20: Ensemble learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Definitions

  • The present disclosure relates to the field of computer technologies, and in particular to a living body recognition method and apparatus, an access control device control method and apparatus, an electronic device, a storage medium, and a computer program.
  • Living body recognition determines whether a face detected by an image acquisition device (e.g., a camera or mobile phone) comes from a real face or from some form of attack or camouflage.
  • The main attack forms include photos, videos, masks, face models, and the like. Liveness detection can be applied to security prevention and control in unattended scenarios, so improving the accuracy of living body recognition plays a vital role in such security applications.
  • The present disclosure proposes a technical solution covering a living body recognition method, an access control device control method and apparatus, an electronic device, a storage medium, and a computer program.
  • A living body recognition method includes: performing first living body recognition on a target image corresponding to an object to be recognized to obtain a first recognition result, the first living body recognition being used to identify whether the object to be recognized is a living body or a 2D non-living body; and, when the first recognition result indicates that the object to be recognized is a living body, performing second living body recognition on the target image to obtain a second recognition result, the second living body recognition being used to identify whether the object to be recognized is a living body or a 3D non-living body.
  • When the first recognition result obtained from the first living body recognition indicates that the object to be recognized is a living body, the second-stage living body recognition is performed on the target image to identify whether the object is a living body or a 3D non-living body, so that an accurate second recognition result can be obtained.
  • This two-stage living body recognition approach can effectively improve recognition accuracy.
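The two-stage decision flow above can be sketched as follows. This is a minimal illustration, not the disclosure's implementation: the network interfaces (`first_net`, `second_net`) and the label strings are assumptions made for the sketch.

```python
def two_stage_liveness(target_image, first_net, second_net):
    """Two-stage living body recognition: stage one screens out 2D
    non-living bodies (photos, videos); stage two screens out 3D
    non-living bodies (masks, face models)."""
    # Stage 1: living body vs. 2D non-living body
    if first_net(target_image) == "2d_non_living":
        return False  # reject immediately; stage 2 is skipped
    # Stage 2: living body vs. 3D non-living body
    return second_net(target_image) == "living"
```

A photo or video attack is rejected by the cheaper first stage, so the second network only ever sees candidates that already passed the 2D check.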
  • Performing the first living body recognition on the target image corresponding to the object to be recognized to obtain the first recognition result includes: performing the first living body recognition on the target image through a first living body recognition network, the first living body recognition network being trained on first sample images corresponding to living bodies and second sample images corresponding to 2D non-living bodies.
  • A first living body recognition network trained in this way has improved recognition accuracy for 2D non-living bodies, so that after it performs the first living body recognition on the target image corresponding to the object to be recognized, a first recognition result with higher accuracy can be obtained.
  • The first sample image includes a first label, which indicates that the first sample image corresponds to a living body.
  • The method further includes: classifying the first sample image and the second sample image through a first initial network to obtain a first classification result; determining, according to the first label included in the first sample image and the first classification result, the first classification loss corresponding to the first initial network; and training the first initial network according to the first classification loss to obtain the trained first living body recognition network.
  • The first initial network is trained using the first sample image corresponding to a living body and the second sample image corresponding to a 2D non-living body. Since the first sample image includes the first label, the classification accuracy of the first initial network, i.e., the first classification loss, can be determined from the first label and the first classification result, so that the first initial network can be trained effectively and a first living body recognition network with high recognition accuracy for 2D non-living bodies can be obtained.
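One common way to realize the first classification loss is a per-sample cross-entropy between the network's predicted probability and the target implied by the label (the disclosure later names cross-entropy as one option for the first loss function). A minimal sketch under that assumption, with illustrative names:

```python
import math

def first_classification_loss(prob_living, has_first_label):
    """Cross-entropy for one sample. First sample images carry the
    first label (living body, target 1.0); second sample images carry
    no label and correspond to 2D non-living bodies (target 0.0)."""
    target = 1.0 if has_first_label else 0.0
    eps = 1e-7  # clamp the probability to avoid log(0)
    p = min(max(prob_living, eps), 1.0 - eps)
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))
```

The loss is small when a labelled (living) sample gets a high living-body probability, or an unlabelled (2D non-living) sample gets a low one, so minimizing it pushes the network toward the correct classification.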
  • Classifying the first sample image and the second sample image through the first initial network to obtain the first classification result includes: performing face detection on the first sample image to obtain a first face frame, and performing face detection on the second sample image to obtain a second face frame; cropping the first sample image according to the first face frame to obtain a first face image, and cropping the second sample image according to the second face frame to obtain a second face image; and classifying the first face image and the second face image to obtain the first classification result.
  • Classifying the cropped first and second face images can effectively improve classification efficiency.
  • Cropping the first sample image according to the first face frame to obtain the first face image, and cropping the second sample image according to the second face frame to obtain the second face image, includes: adjusting the size of the first face frame to obtain a third face frame, and adjusting the size of the second face frame to obtain a fourth face frame; then cropping the first sample image according to the third face frame to obtain the first face image, and cropping the second sample image according to the fourth face frame to obtain the second face image.
  • Cropping the first and second sample images with the adjusted third and fourth face frames yields first and second face images that contain more effective information, so the subsequent classification of the cropped face images can be performed more efficiently.
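The frame-adjustment and cropping step can be sketched as below. The disclosure does not fix how the frame is adjusted; the centre-scaling enlargement, the scale factor, and the `(x, y, w, h)` frame representation are assumptions made for illustration.

```python
def adjust_face_frame(frame, scale=1.2, img_w=10**9, img_h=10**9):
    """Enlarge a detected face frame (x, y, w, h) about its centre so
    the crop keeps some context around the face, clipped to the image."""
    x, y, w, h = frame
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * scale, h * scale
    nx, ny = max(0, cx - nw / 2), max(0, cy - nh / 2)
    nw, nh = min(nw, img_w - nx), min(nh, img_h - ny)
    return (nx, ny, nw, nh)

def crop(image_rows, frame):
    """Crop a row-major image (nested lists) by an integer-rounded frame."""
    x, y, w, h = (int(round(v)) for v in frame)
    return [row[x:x + w] for row in image_rows[y:y + h]]
```

Feeding the classifier the adjusted crop rather than the full sample image keeps the face region plus a margin of context, which is one way to read "more effective information" in the passage above.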
  • Performing the second living body recognition on the target image to obtain the second recognition result includes: performing the second living body recognition on the target image through a second living body recognition network, the second living body recognition network being trained on third sample images corresponding to living bodies and fourth sample images corresponding to 3D non-living bodies.
  • A second living body recognition network trained in this way has improved recognition accuracy for 3D non-living bodies, so that after it performs the second living body recognition on the target image, a second recognition result with higher accuracy can be obtained.
  • The third sample image includes a second label, which indicates that the third sample image corresponds to a living body.
  • The method further includes: classifying the third sample image and the fourth sample image through a second initial network to obtain a second classification result; determining, according to the second label included in the third sample image and the second classification result, the second classification loss corresponding to the second initial network; and training the second initial network according to the second classification loss to obtain the trained second living body recognition network.
  • The second initial network is trained using the third sample image corresponding to a living body and the fourth sample image corresponding to a 3D non-living body. Since the third sample image includes the second label, the classification accuracy of the second initial network, i.e., the second classification loss, can be determined from the second label and the second classification result, so that the second initial network can be trained effectively and a second living body recognition network with high recognition accuracy for 3D non-living bodies can be obtained.
  • Classifying the third sample image and the fourth sample image through the second initial network to obtain the second classification result includes: performing face detection on the third sample image to obtain a fifth face frame, and performing face detection on the fourth sample image to obtain a sixth face frame; cropping the third sample image according to the fifth face frame to obtain a third face image, and cropping the fourth sample image according to the sixth face frame to obtain a fourth face image; and classifying the third face image and the fourth face image to obtain the second classification result.
  • Classifying the cropped third and fourth face images can effectively improve classification efficiency.
  • Cropping the third sample image according to the fifth face frame to obtain the third face image, and cropping the fourth sample image according to the sixth face frame to obtain the fourth face image, includes: adjusting the size of the fifth face frame to obtain a seventh face frame, and adjusting the size of the sixth face frame to obtain an eighth face frame; then cropping the third sample image according to the seventh face frame to obtain the third face image, and cropping the fourth sample image according to the eighth face frame to obtain the fourth face image.
  • Cropping the third and fourth sample images in this way yields third and fourth face images that contain more effective information, so the subsequent classification of the cropped face images can be performed more efficiently.
  • A method for controlling an access control device includes: collecting a target image corresponding to an object to be recognized that needs to pass through the access control device; performing living body recognition on the target image using the above living body recognition method to obtain a living body recognition result; and, when the living body recognition result indicates that the object to be recognized is a living body, controlling the access control device to open.
  • Living body recognition is performed on the object that needs to pass through the access control device, so that both 2D non-living bodies (e.g., photos and images) and 3D non-living bodies can be effectively identified. The access control device is controlled to open only when the living body recognition result indicates that the object to be recognized is a living body, which effectively improves the security of the access control device.
  • Collecting the target image corresponding to the object to be recognized that needs to pass through the access control device includes: collecting the target image using a dual infrared camera module.
  • A clear target image of the object to be recognized is collected by the dual infrared camera module, and the target image is then recognized by the two-stage living body recognition method described above. This improves the accuracy of living body recognition in dark scenes and thereby effectively improves the security of the access control device.
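One pass of the control flow above can be sketched as follows. The callables `capture_dual_ir_image`, `recognize_liveness`, and `open_gate` are hypothetical stand-ins for the dual infrared camera module, the two-stage living body recognizer, and the lock actuator; none of these names come from the disclosure.

```python
def access_control_step(capture_dual_ir_image, recognize_liveness, open_gate):
    """One pass of the access control loop: capture a target image,
    run living body recognition on it, and open the gate only when the
    result indicates a living body."""
    target_image = capture_dual_ir_image()
    if recognize_liveness(target_image):
        open_gate()
        return True   # access granted
    return False      # access denied (2D or 3D non-living body)
```

Note that the gate actuator is only ever invoked after a positive recognition result, which is the security property the passage above describes.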
  • A living body recognition apparatus includes: a first recognition module configured to perform first living body recognition on a target image corresponding to an object to be recognized to obtain a first recognition result, the first living body recognition being used to identify whether the object to be recognized is a living body or a 2D non-living body; and a second recognition module configured to perform second living body recognition on the target image when the first recognition result indicates that the object to be recognized is a living body, to obtain a second recognition result, the second living body recognition being used to identify whether the object to be recognized is a living body or a 3D non-living body.
  • An access control device control apparatus includes: an image acquisition module configured to collect a target image corresponding to an object to be recognized that needs to pass through the access control device; a living body recognition module configured to perform living body recognition on the target image using the above living body recognition method to obtain a living body recognition result; and a control module configured to control the access control device to open when the living body recognition result indicates that the object to be recognized is a living body.
  • An electronic device includes: a processor; and a memory for storing instructions executable by the processor; the processor is configured to invoke the instructions stored in the memory to execute the above living body recognition method or the above access control device control method.
  • A computer-readable storage medium stores computer program instructions that, when executed by a processor, implement the above living body recognition method or the above access control device control method.
  • A computer program includes computer-readable code that, when run in an electronic device, causes a processor in the electronic device to execute the above living body recognition method or the above access control device control method.
  • FIG. 1 shows a schematic interaction diagram of a living body recognition method according to an embodiment of the present disclosure
  • FIG. 2 shows a flowchart of a method for identifying a living body according to an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of a living body recognition network according to an embodiment of the present disclosure
  • FIG. 4 shows a flowchart of a method for controlling an access control device according to an embodiment of the present disclosure
  • FIG. 5 shows a block diagram of a living body recognition apparatus according to an embodiment of the present disclosure
  • FIG. 6 shows a block diagram of an access control device control apparatus according to an embodiment of the present disclosure
  • FIG. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • FIG. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • Face recognition is a biometric recognition technology based on human facial feature information.
  • living body recognition technology has gradually become the core technology of face recognition systems.
  • The living body recognition method of the embodiments of the present disclosure can be applied to scenarios that require identity verification, such as security, finance, and e-commerce, for example access control equipment and remote transactions.
  • Through living body recognition it can be determined whether the object to be recognized is a living body rather than a non-living body such as a photo, video, mask, or face model, so that malicious attacks can be effectively reduced.
  • FIG. 1 shows a schematic interaction diagram of a living body recognition method according to an embodiment of the present disclosure.
  • the electronic device 11 is used to execute the living body recognition method.
  • The electronic device 11 may be an access control device (e.g., the door lock 13 or the gate 14), a user device (e.g., the mobile phone 15) used for remote transactions, or another device that needs to perform identity verification through living body recognition; this is not specifically limited in the present disclosure.
  • An image acquisition component included in the electronic device 11 collects a target image of the object 12 to be recognized.
  • The electronic device 11 performs the first-stage living body recognition on the target image to identify whether the object 12 to be recognized is a living body or a 2D non-living body. If the first recognition result indicates that the object 12 is a 2D non-living body, the living body recognition process ends, the first recognition result is output, and the object 12 is prompted that it has failed the living body recognition of the electronic device 11.
  • Otherwise, the electronic device 11 performs the second-stage living body recognition on the target image.
  • The second living body recognition is used to identify whether the object 12 to be recognized is a living body or a 3D non-living body.
  • If the second recognition result indicates that the object 12 is a 3D non-living body, the living body recognition process ends, the second recognition result is output, and the object 12 is prompted that it has failed the living body recognition of the electronic device 11.
  • If the second recognition result indicates that the object 12 is a living body, the living body recognition process ends, the second recognition result is output, and the object 12 is prompted that it has passed the living body recognition of the electronic device 11.
  • After the object 12 to be recognized passes the living body recognition of the electronic device 11, the electronic device 11 can perform corresponding operations. For example, when the electronic device 11 is an access control device (e.g., a door lock or gate), it opens to allow the object 12 to enter or pass; when the electronic device 11 is a user device for conducting a remote transaction, the remote transaction is performed.
  • The embodiments of the present disclosure adopt a two-stage living body recognition method, which can effectively improve the recognition accuracy of living bodies and thereby improve security defense performance.
  • FIG. 2 shows a flowchart of a method for identifying a living body according to an embodiment of the present disclosure.
  • The method can be executed by an electronic device such as a terminal device or a server. The terminal device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
  • the method can be implemented by the processor calling the computer-readable instructions stored in the memory.
  • the method may be performed by a server, and the server may be a local server, a cloud server, or the like.
  • the method includes:
  • In step S21, first living body recognition is performed on the target image corresponding to the object to be recognized to obtain a first recognition result; the first living body recognition is used to identify whether the object to be recognized is a living body or a 2D non-living body.
  • The first living body recognition can be performed on the target image corresponding to the object to be recognized, to identify whether the object is a living body or a 2D non-living body, and the first recognition result is obtained.
  • When the first recognition result indicates that the object to be recognized is a 2D non-living body, the living body recognition process can be ended directly (for example, the door lock is controlled not to open); when the first recognition result indicates that the object is a living body, the following step S22 is performed on the target image for further recognition.
  • In step S22, when the first recognition result indicates that the object to be recognized is a living body, second living body recognition is performed on the target image to obtain a second recognition result; the second living body recognition is used to identify whether the object to be recognized is a living body or a 3D non-living body.
  • Second living body recognition is performed on the target image corresponding to the object to be recognized, to identify whether the object is a living body or a 3D non-living body, and the second recognition result is obtained. A corresponding operation can then be performed according to the second recognition result.
  • For example, when the object to be recognized is an object requesting unlocking: if the second recognition result indicates that the object is a 3D non-living body, the door lock is controlled not to open; if the second recognition result indicates that the object is a living body, the door lock is controlled to open.
  • First-stage living body recognition is performed on the target image corresponding to the object to be recognized, to identify whether the object is a living body or a 2D non-living body. When the first recognition result obtained from the first living body recognition indicates that the object is a living body, second-stage living body recognition is performed on the target image to identify whether the object is a living body or a 3D non-living body, and an accurate second recognition result can be obtained.
  • Performing the first living body recognition on the target image corresponding to the object to be recognized to obtain the first recognition result includes: performing the first living body recognition on the target image through the first living body recognition network, the first living body recognition network being trained on first sample images corresponding to living bodies and second sample images corresponding to 2D non-living bodies.
  • The first living body recognition network trained in this way has improved recognition accuracy for 2D non-living bodies, so that after it performs the first living body recognition on the target image corresponding to the object to be recognized, a first recognition result with higher accuracy can be obtained.
  • The function of the first living body recognition network is to identify whether an image input to the network corresponds to a living body or to a 2D non-living body.
  • The first sample image includes a first label, which indicates that the first sample image corresponds to a living body.
  • The method further includes: classifying the first sample image and the second sample image through the first initial network to obtain a first classification result; determining, according to the first label included in the first sample image and the first classification result, the first classification loss corresponding to the first initial network; and training the first initial network according to the first classification loss to obtain the trained first living body recognition network.
  • The first initial network is trained using the first sample image corresponding to a living body and the second sample image corresponding to a 2D non-living body. Since the first sample image includes the first label, the classification accuracy of the first initial network, i.e., the first classification loss, can be determined from the first label and the first classification result, so that the first initial network can be trained effectively and a first living body recognition network with high recognition accuracy for 2D non-living bodies can be obtained.
  • Before the first sample image and the second sample image are classified through the first initial network, the method further includes: training a first original network using a public image data set to obtain the trained first initial network.
  • The public image data set can be ImageNet.
  • ImageNet is an image classification data set containing about 15 million images in more than 1,000 categories. Training the first original network on ImageNet yields a network with basic classification capability; the first sample images corresponding to living bodies and the second sample images corresponding to 2D non-living bodies can then be used to train this first initial network to obtain the trained first living body recognition network.
  • The public image data set may also be another public image data set used for classification training, which is not specifically limited in the present disclosure.
  • In the process of training the first initial network to obtain the first living body recognition network, a first training sample set is first constructed from the first sample images corresponding to living bodies (which include the first label) and the second sample images corresponding to 2D non-living bodies (which do not include the first label), and the first training sample set is input into the first initial network.
  • The first initial network identifies whether the first sample image corresponds to a living body or to a 2D non-living body and classifies it accordingly to obtain the first classification result corresponding to the first sample image. For example, when the recognition result is that the first sample image corresponds to a living body, the first classification result is the living body category; when the recognition result is that it corresponds to a 2D non-living body, the first classification result is the 2D non-living body category.
  • Likewise, the first initial network identifies whether the second sample image corresponds to a living body or to a 2D non-living body and classifies it to obtain the first classification result corresponding to the second sample image.
  • Based on these classification results, the first classification loss corresponding to the first initial network can be determined.
  • When the first classification result corresponding to a first sample image (which includes the first label) is the living body category, the first initial network has classified it successfully; when that result is the 2D non-living body category, the classification has failed. Conversely, when the first classification result corresponding to a second sample image (which does not include the first label) is the 2D non-living body category, the classification has succeeded; when it is the living body category, it has failed.
  • From these classification successes and failures, the first classification loss corresponding to the first initial network can be determined, and the first initial network can then be trained according to this loss to obtain the trained first living body recognition network.
• in a possible implementation, training the first initial network according to the first classification loss to obtain the trained first living body recognition network includes: constructing a first loss function according to the first classification loss; and training the first initial network according to the first loss function and a first recognition threshold to obtain the trained first living body recognition network.
• the network parameters corresponding to the first initial network are adjusted according to the first loss function to obtain an intermediate network, and the intermediate network is iteratively trained in the same way as the first initial network until the recognition accuracy of the network is greater than the first recognition threshold, at which point it is determined that a trained first living body recognition network meeting the condition is obtained.
  • the first loss function may be a cross-entropy loss function or other loss functions, and the specific value of the first identification threshold may be determined according to the actual situation, which is not specifically limited in the present disclosure.
• in a possible implementation, training the first initial network according to the first classification loss to obtain the trained first living body recognition network includes: constructing a first loss function according to the first classification loss; and training the first initial network according to the first loss function and a first iteration count to obtain the trained first living body recognition network.
• the network parameters corresponding to the first initial network are adjusted according to the first loss function to obtain an intermediate network, and the intermediate network is iteratively trained in the same way as the first initial network until the number of training iterations reaches the first iteration count, at which point it is determined that a trained first living body recognition network meeting the condition is obtained.
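The two iterative training schemes above (stop once recognition accuracy exceeds the recognition threshold, or stop after a fixed iteration count) can be sketched as follows. This is a minimal illustration under assumptions, not the disclosed implementation: `train_step` and `eval_accuracy` are hypothetical callables standing in for one parameter update and one validation pass, and `cross_entropy` shows the cross-entropy form mentioned above as one possible first loss function.

```python
import numpy as np

def cross_entropy(probs, labels):
    # probs: predicted probability of the "living body" class;
    # labels: 1 = living body, 0 = 2D non-living body.
    eps = 1e-12
    probs = np.clip(probs, eps, 1 - eps)
    return float(-np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs)))

def train_until_threshold(train_step, eval_accuracy, recognition_threshold, max_iters=1000):
    # Variant 1: keep adjusting network parameters until the recognition
    # accuracy exceeds the recognition threshold (with a safety budget).
    for i in range(1, max_iters + 1):
        train_step()
        if eval_accuracy() > recognition_threshold:
            return i  # trained network meeting the condition is obtained
    return max_iters

def train_for_iterations(train_step, iteration_count):
    # Variant 2: train for a fixed iteration count.
    for _ in range(iteration_count):
        train_step()
    return iteration_count
```

The same sketch applies to the second initial network described later, with its own loss function, recognition threshold, or iteration count.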
• in a possible implementation, classifying the first sample image and the second sample image through the first initial network to obtain the first classification result includes: performing face detection on the first sample image to obtain a first face frame, and performing face detection on the second sample image to obtain a second face frame; cropping the first sample image according to the first face frame to obtain a first face image, and cropping the second sample image according to the second face frame to obtain a second face image; and classifying the first face image and the second face image through the first initial network to obtain the first classification result.
• in a possible implementation, cropping the first sample image according to the first face frame to obtain the first face image, and cropping the second sample image according to the second face frame to obtain the second face image, includes: adjusting the size of the first face frame to obtain a third face frame, and adjusting the size of the second face frame to obtain a fourth face frame; and cropping the first sample image according to the third face frame to obtain the first face image, and cropping the second sample image according to the fourth face frame to obtain the second face image.
• by cropping the first sample image and the second sample image according to the adjusted third face frame and fourth face frame, a first face image and a second face image containing more effective information can be obtained, so that classification efficiency can be effectively improved when the cropped first face image and second face image are subsequently classified.
• for example, the first face frame is expanded outward by a first preset ratio threshold (for example, 0.2 times upward, leftward, and rightward, and 0.4 times downward) to obtain the third face frame, and the second face frame is expanded outward by a second preset ratio threshold (for example, 0.3 times upward, leftward, and rightward, and 0.5 times downward) to obtain the fourth face frame.
  • the specific values of the first preset ratio threshold and the second preset ratio threshold may be determined according to actual conditions, which are not specifically limited in the present disclosure.
• the expanded third face frame and fourth face frame contain more information around the face, making it easier to classify the first face image within the third face frame and the second face image within the fourth face frame.
• for example, when the second sample image is an image corresponding to a photo, performing face detection on the second sample image yields a second face frame corresponding to the face part in the photo. The fourth face frame is obtained by expanding the second face frame, so that it includes not only the face part in the photo but also the border part of the photo. Therefore, when classifying the second face image cropped from the second sample image according to the fourth face frame, since the second face image includes the border part of the photo, it is easy to determine that the first classification result corresponding to the second face image is the 2D non-living body category.
  • the method for adjusting the size of the first face frame and the second face frame may include, in addition to the above-mentioned outward expansion method, a shrinking method, which is not specifically limited in the present disclosure.
• after the first face image is obtained by cropping the first sample image according to the third face frame, and the second face image is obtained by cropping the second sample image according to the fourth face frame, the first face image and the second face image may also be adjusted to images of a first target size (for example, 224 pixels in both length and width), so that the first living body recognition network classifies first and second face images of the same size, improving classification accuracy.
  • the specific value of the first target size may be determined according to actual conditions, which is not specifically limited in the present disclosure.
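The face-frame expansion, cropping, and resizing steps above can be sketched as follows. This is a minimal NumPy illustration: `expand_box` and `crop_and_resize` are hypothetical helper names, the per-side ratios and the 224-pixel target size are the example values given in the text, and nearest-neighbor sampling stands in for whatever resizing method an implementation would actually use.

```python
import numpy as np

def expand_box(box, img_w, img_h, up=0.2, left=0.2, right=0.2, down=0.4):
    # Expand a face frame (x1, y1, x2, y2) outward by per-side ratios of its
    # width/height, clamped to the image bounds.
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    return (max(0, int(x1 - left * w)), max(0, int(y1 - up * h)),
            min(img_w, int(x2 + right * w)), min(img_h, int(y2 + down * h)))

def crop_and_resize(image, box, target=224):
    # Crop the expanded frame and resize to the target size
    # (nearest-neighbor index sampling).
    x1, y1, x2, y2 = box
    patch = image[y1:y2, x1:x2]
    ys = (np.arange(target) * patch.shape[0] / target).astype(int)
    xs = (np.arange(target) * patch.shape[1] / target).astype(int)
    return patch[ys][:, xs]
```

The same two helpers would apply unchanged to the fifth/sixth face frames of the second network's training data, only with different ratio thresholds.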
  • performing a second living body recognition on the target image to obtain a second recognition result includes: performing a second living body recognition on the target image through a second living body recognition network to obtain a second recognition result, the second living body The recognition network is trained based on the third sample image corresponding to the living body and the fourth sample image corresponding to the 3D non-living body.
• the second living body recognition network, trained based on the third sample image corresponding to the living body and the fourth sample image corresponding to the 3D non-living body, can improve the recognition accuracy of 3D non-living bodies, so that after the second living body recognition is performed on the target image through the second living body recognition network, a second recognition result with higher recognition accuracy can be obtained.
• in order for the second living body recognition network to recognize living bodies and 3D non-living bodies (for example, masks, head models, etc.), before the second living body recognition is performed on the target image corresponding to the object to be recognized, the second living body recognition network needs to be obtained by pre-training based on the third sample image corresponding to the living body and the fourth sample image corresponding to the 3D non-living body.
  • the function of the second living body recognition network is to identify the image input to the network, and determine whether it is an image corresponding to a living body or an image corresponding to a 3D non-living body.
  • the following describes the process of obtaining the second living body recognition network by training based on the third sample image corresponding to the living body and the fourth sample image corresponding to the 3D non-living body.
• in a possible implementation, the third sample image includes a second label, where the second label is used to indicate that the third sample image is an image corresponding to a living body; before the second living body recognition is performed on the target image through the second living body recognition network, the method further includes: classifying the third sample image and the fourth sample image through the second initial network to obtain a second classification result; determining, according to the second label included in the third sample image and the second classification result, a second classification loss corresponding to the second initial network; and training the second initial network according to the second classification loss to obtain the trained second living body recognition network.
• the second initial network is trained using the third sample image corresponding to the living body and the fourth sample image corresponding to the 3D non-living body. Since the third sample image includes the second label, the classification accuracy of the second initial network, that is, the second classification loss corresponding to the second initial network, can be determined according to the second label and the second classification result, so that the second initial network can be effectively trained according to the second classification loss to obtain a trained second living body recognition network capable of recognizing 3D non-living bodies.
• in a possible implementation, before classifying the third sample image and the fourth sample image through the second initial network, the method further includes: training a second original network using a public image data set to obtain the second initial network.
  • the public image data set can be ImageNet.
• ImageNet is an image classification data set containing about 15 million images in more than 1,000 categories. By training the second original network with ImageNet, a second initial network with a classification function is obtained; the second initial network can then be trained using the third sample image corresponding to the living body and the fourth sample image corresponding to the 3D non-living body to obtain the trained second living body recognition network.
  • the public image dataset may also be other public image datasets used for classification training, which is not specifically limited in the present disclosure.
• the second initial network identifies whether the third sample image is an image corresponding to a living body or an image corresponding to a 3D non-living body, and classifies the third sample image according to the recognition result to obtain a second classification result corresponding to the third sample image. For example, when the recognition result is that the third sample image is an image corresponding to a living body, the second classification result corresponding to the third sample image is the living body category; when the recognition result is that the third sample image is an image corresponding to a 3D non-living body, the second classification result corresponding to the third sample image is the 3D non-living body category.
• the second initial network likewise identifies whether the fourth sample image is an image corresponding to a living body or an image corresponding to a 3D non-living body, and classifies the fourth sample image according to the recognition result to obtain a second classification result corresponding to the fourth sample image. For example, when the recognition result is that the fourth sample image is an image corresponding to a living body, the second classification result corresponding to the fourth sample image is the living body category; when the recognition result is that the fourth sample image is an image corresponding to a 3D non-living body, the second classification result corresponding to the fourth sample image is the 3D non-living body category.
• according to the second label included in the third sample image and the second classification result, the classification success rate of the second initial network on the third sample image and/or the fourth sample image, that is, the recognition accuracy of the second initial network, can be evaluated. For example, when the second classification result corresponding to the third sample image including the second label is the living body category, the second initial network successfully classifies the third sample image; when that second classification result is the 3D non-living body category, the second initial network fails to classify the third sample image. Similarly, when the second classification result corresponding to the fourth sample image that does not include the second label is the 3D non-living body category, the second initial network successfully classifies the fourth sample image; when that second classification result is the living body category, the second initial network fails to classify the fourth sample image.
• based on this, the second classification loss corresponding to the second initial network can be determined, and the second initial network can then be trained according to the second classification loss to obtain the trained second living body recognition network.
• in a possible implementation, training the second initial network according to the second classification loss to obtain the trained second living body recognition network includes: constructing a second loss function according to the second classification loss; and training the second initial network according to the second loss function and a second recognition threshold to obtain the trained second living body recognition network.
• the network parameters corresponding to the second initial network are adjusted according to the second loss function to obtain an intermediate network, and the intermediate network is iteratively trained in the same way as the second initial network until the recognition accuracy of the network is greater than the second recognition threshold, at which point it is determined that a trained second living body recognition network meeting the condition is obtained.
  • the second loss function may be a cross-entropy loss function or other loss functions, and the specific value of the second identification threshold may be determined according to the actual situation, which is not specifically limited in the present disclosure.
• in a possible implementation, the first recognition threshold and the second recognition threshold are different. Compared with using the same recognition threshold to train both the first initial network and the second initial network, using different recognition thresholds ensures that both the trained first living body recognition network and the trained second living body recognition network have high recognition accuracy.
• in a possible implementation, training the second initial network according to the second classification loss to obtain the trained second living body recognition network includes: constructing a second loss function according to the second classification loss; and training the second initial network according to the second loss function and a second iteration count to obtain the trained second living body recognition network.
• the network parameters corresponding to the second initial network are adjusted according to the second loss function to obtain an intermediate network, and the intermediate network is iteratively trained in the same way as the second initial network until the number of training iterations reaches the second iteration count, at which point it is determined that a trained second living body recognition network meeting the condition is obtained.
• in a possible implementation, the first iteration count and the second iteration count are different. Compared with using the same iteration count to train both the first initial network and the second initial network, using different iteration counts ensures that both the trained first living body recognition network and the trained second living body recognition network have high recognition accuracy.
• in a possible implementation, classifying the third sample image and the fourth sample image through the second initial network to obtain the second classification result includes: performing face detection on the third sample image to obtain a fifth face frame, and performing face detection on the fourth sample image to obtain a sixth face frame; cropping the third sample image according to the fifth face frame to obtain a third face image, and cropping the fourth sample image according to the sixth face frame to obtain a fourth face image; and classifying the third face image and the fourth face image through the second initial network to obtain the second classification result.
• in a possible implementation, cropping the third sample image according to the fifth face frame to obtain the third face image, and cropping the fourth sample image according to the sixth face frame to obtain the fourth face image, includes: adjusting the size of the fifth face frame to obtain a seventh face frame, and adjusting the size of the sixth face frame to obtain an eighth face frame; and cropping the third sample image according to the seventh face frame to obtain the third face image, and cropping the fourth sample image according to the eighth face frame to obtain the fourth face image.
• by cropping the third sample image and the fourth sample image according to the adjusted seventh face frame and eighth face frame, a third face image and a fourth face image containing more effective information can be obtained, which can effectively improve classification efficiency when the cropped third face image and fourth face image are subsequently classified.
• for example, the fifth face frame is expanded outward by a third preset ratio threshold (for example, 0.2 times upward, leftward, and rightward, and 0.3 times downward) to obtain the seventh face frame, and the sixth face frame is expanded outward by a fourth preset ratio threshold (for example, 0.3 times upward, leftward, and rightward, and 0.3 times downward) to obtain the eighth face frame.
  • the specific values of the third preset ratio threshold and the fourth preset ratio threshold may be determined according to actual conditions, which are not specifically limited in the present disclosure.
• the expanded seventh face frame and eighth face frame contain more information around the face, making it easier to classify the third face image within the seventh face frame and the fourth face image within the eighth face frame.
• for example, when the fourth sample image is an image corresponding to a mask, performing face detection on the fourth sample image yields a sixth face frame corresponding to the face part in the mask. The eighth face frame is obtained by expanding the sixth face frame, so that it includes not only the face part in the mask but also the boundary part of the mask. Therefore, when classifying the fourth face image cropped from the fourth sample image according to the eighth face frame, since the fourth face image includes the boundary part of the mask, it is easy to determine that the second classification result corresponding to the fourth face image is the 3D non-living body category.
  • the method for adjusting the size of the fifth face frame and the sixth face frame may include, in addition to the above-mentioned outward expansion, methods such as shrinkage, which are not specifically limited in this disclosure.
• after the third face image is obtained by cropping the third sample image according to the seventh face frame, and the fourth face image is obtained by cropping the fourth sample image according to the eighth face frame, the third face image and the fourth face image may also be adjusted to images of a second target size (for example, 224 pixels in both length and width), so that the second living body recognition network classifies third and fourth face images of the same size, improving classification accuracy.
  • the specific value of the second target size may be determined according to the actual situation, which is not specifically limited in the present disclosure.
  • two-stage living body recognition can be performed on the object to be recognized.
  • FIG. 3 shows a schematic diagram of a living body recognition network according to an embodiment of the present disclosure. As shown in Figure 3:
• in the first step, the target image corresponding to the object to be recognized is input into the first living body recognition network, and the first living body recognition network identifies whether the object to be recognized is a living body or a 2D non-living body.
• when the first recognition result output by the first living body recognition network indicates that the object to be recognized is a 2D non-living body, the living body recognition process is ended and the first recognition result is output; when the first recognition result output by the first living body recognition network indicates that the object to be recognized is a living body, the second step is performed.
• in the second step, the target image corresponding to the object to be recognized is input into the second living body recognition network, and the second living body recognition network identifies whether the object to be recognized is a living body or a 3D non-living body.
• in the third step, the second recognition result of the second living body recognition network is output.
• in this way, a first-stage first living body recognition is performed on the target image corresponding to the object to be recognized, to identify whether the object to be recognized is a living body or a 2D non-living body; when the first recognition result obtained from the first living body recognition indicates that the object to be recognized is a living body, a second-stage second living body recognition is performed on the target image, to identify whether the object to be recognized is a living body or a 3D non-living body, so that an accurate second recognition result can be obtained.
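The cascade of FIG. 3 can be sketched as follows. This is a minimal control-flow illustration: `first_net` and `second_net` are hypothetical callables standing in for the two trained recognition networks, and the string labels are placeholder category names, not the disclosure's actual outputs.

```python
def two_stage_liveness(target_image, first_net, second_net):
    # Stage 1: living body vs. 2D non-living body (photos, images).
    first_result = first_net(target_image)
    if first_result != "living":
        # End the recognition process and output the first recognition result.
        return first_result
    # Stage 2 (only reached when stage 1 reports a living body):
    # living body vs. 3D non-living body (masks, head models).
    return second_net(target_image)
```

Note that a 3D non-living body such as a mask is deliberately let through by stage 1 so that stage 2, which is trained specifically on 3D non-living samples, can reject it.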
• FIG. 4 shows a flowchart of a method for controlling an access control device according to an embodiment of the present disclosure.
  • the access control device in the method may include door locks, gates, and other terminal devices that need to control access, which is not specifically limited in the present disclosure.
  • the method includes:
• in step S41, a target image corresponding to an object to be recognized that needs to pass through the access control device is collected.
• in step S42, a first living body recognition is performed on the target image to obtain a first recognition result, and the first living body recognition is used to identify whether the object to be recognized is a living body or a 2D non-living body.
• in step S43, when the first recognition result indicates that the object to be recognized is a living body, a second living body recognition is performed on the target image to obtain a second recognition result, and the second living body recognition is used to identify whether the object to be recognized is a living body or a 3D non-living body.
• in step S44, when the second recognition result indicates that the object to be recognized is a living body, the access control device is controlled to open.
• the above two-stage living body recognition method can effectively identify both 2D non-living bodies (for example, photos, images) and 3D non-living bodies (for example, masks, head models). By performing the above two-stage living body recognition on the object to be recognized that needs to pass through the access control device, and controlling the access control device to open only when both the first and second recognition results indicate that the object to be recognized is a living body, the security of the access control device can be effectively improved.
  • collecting the target image corresponding to the object to be identified that needs to pass through the access control device includes: using a dual infrared camera module to collect the target image corresponding to the object to be identified.
  • the dual-infrared camera module can be used to collect the image of the object to be identified that needs to pass through the access control device, and the target image of the object to be identified can be obtained.
  • the dual-infrared camera module is integrated with the access control device, or the dual-infrared camera module is separately set near the access control device, so that the dual-infrared camera module can be used to capture images of objects to be identified that need to pass through the access control device.
• a clear target image corresponding to the object to be recognized that needs to pass through the access control device is collected using the dual infrared camera module, and the target image is then recognized by the above two-stage living body recognition method, which improves the accuracy of living body recognition of the access control device in dark-light scenes and thus effectively improves the security of the access control device.
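Steps S41 to S44 above can be sketched as a single gate function. This is a minimal illustration, not the disclosed implementation: `capture_image`, `first_liveness`, `second_liveness`, and `open_door` are hypothetical placeholders for the device's camera module, the two recognition networks, and the lock/gate actuator.

```python
def control_access(capture_image, first_liveness, second_liveness, open_door):
    # Open the access control device only when BOTH liveness checks pass.
    image = capture_image()            # step S41: collect the target image
    if not first_liveness(image):      # step S42: living body vs. 2D non-living body
        return False
    if not second_liveness(image):     # step S43: living body vs. 3D non-living body
        return False
    open_door()                        # step S44: both checks report a living body
    return True
```

Keeping the door-opening call behind both checks reflects the text's requirement that a 2D spoof rejected in step S42 never reaches step S43, and that the actuator fires only on a double pass.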
• the present disclosure also provides living body recognition/access control device control apparatuses, electronic devices, computer-readable storage media, and programs, all of which can be used to implement any of the living body recognition/access control device control methods provided by the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which will not be repeated here.
  • FIG. 5 shows a block diagram of a living body recognition apparatus according to an embodiment of the present disclosure.
  • the living body identification device 50 includes:
  • the first identification module 51 is used to perform first living body identification on the target image corresponding to the object to be identified, to obtain a first identification result, and the first living body identification is used to identify whether the object to be identified is a living body or a 2D non-living body;
• the second recognition module 52 is configured to perform the second living body recognition on the target image when the first recognition result indicates that the object to be recognized is a living body, to obtain a second recognition result, and the second living body recognition is used to identify whether the object to be recognized is a living body or a 3D non-living body.
  • the first identification module 51 is specifically used for:
  • a first living body recognition is performed on the target image through a first living body recognition network to obtain a first recognition result.
  • the first living body recognition network is obtained by training based on the first sample image corresponding to the living body and the second sample image corresponding to the 2D non-living body.
  • the first sample image includes a first label, where the first label is used to indicate that the first sample image is an image corresponding to a living body;
  • the living body identification device 50 further includes:
  • a first classification module configured to classify the first sample image and the second sample image through the first initial network before performing the first living body recognition on the target image through the first living body recognition network to obtain a first classification result
  • a first determination module configured to determine the first classification loss corresponding to the first initial network according to the first label included in the first sample image and the first classification result
  • the first training module is used for training the first initial network according to the first classification loss, so as to obtain the trained first living body recognition network.
  • the first classification module includes:
  • a first detection submodule configured to perform face detection on the first sample image to obtain a first face frame, and perform face detection on the second sample image to obtain a second face frame;
  • the first cropping submodule is used for cropping the first sample image according to the first face frame to obtain the first face image, and for cropping the second sample image according to the second face frame to obtain the second face image;
  • the first classification submodule is used for classifying the first face image and the second face image through the first initial network to obtain a first classification result.
  • the first cutting sub-module includes:
  • a first size adjustment unit for adjusting the size of the first face frame to obtain a third face frame, and for adjusting the size of the second face frame to obtain a fourth face frame;
  • the first cropping unit is used for cropping the first sample image according to the third face frame to obtain the first face image, and for cropping the second sample image according to the fourth face frame to obtain the second face image.
  • the second identification module 52 is specifically used for:
  • the second living body recognition network is used to perform second living body recognition on the target image to obtain a second recognition result.
  • the second living body recognition network is trained based on the third sample image corresponding to the living body and the fourth sample image corresponding to the 3D non-living body.
  • the third sample image includes a second label, and the second label is used to indicate that the third sample image is an image corresponding to a living body;
  • the living body identification device 50 further includes:
  • the second classification module is configured to classify the third sample image and the fourth sample image through the second initial network before performing the second living body recognition on the target image through the second living body recognition network to obtain a second classification result;
  • a second determination module configured to determine the second classification loss corresponding to the second initial network according to the second label included in the third sample image and the second classification result
  • the second training module is used for training the second initial network according to the second classification loss, so as to obtain the trained second living body recognition network.
  • the second classification module includes:
  • the second detection submodule is used for performing face detection on the third sample image to obtain a fifth face frame, and performing face detection on the fourth sample image to obtain a sixth face frame;
• the second cropping submodule is used for cropping the third sample image according to the fifth face frame to obtain the third face image, and for cropping the fourth sample image according to the sixth face frame to obtain the fourth face image;
  • the second classification submodule is configured to classify the third face image and the fourth face image through the second initial network to obtain a second classification result.
  • the second cutting sub-module includes:
  • the second size adjustment unit is used to adjust the size of the fifth face frame to obtain the seventh face frame, and adjust the size of the sixth face frame to obtain the eighth face frame;
  • the second cropping unit is used for cropping the third sample image according to the seventh face frame to obtain the third face image, and cropping the fourth sample image according to the eighth face frame to obtain the fourth face image.
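The adjust-then-crop flow of these units (enlarge the detected face frame, then crop the sample image with the adjusted frame) can be sketched as below. The expansion ratio, the clamping behavior, and the helper names are assumptions for illustration, not values specified in the patent.

```python
def adjust_face_frame(frame, scale, img_w, img_h):
    """Enlarge a face frame (x1, y1, x2, y2) about its center by `scale`,
    clamped to the image bounds; this yields the 'adjusted' frame."""
    x1, y1, x2, y2 = frame
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    half_w = (x2 - x1) * scale / 2.0
    half_h = (y2 - y1) * scale / 2.0
    return (max(0, int(cx - half_w)), max(0, int(cy - half_h)),
            min(img_w, int(cx + half_w)), min(img_h, int(cy + half_h)))

def crop(image, frame):
    """Crop a row-major image (list of rows) with an (x1, y1, x2, y2) frame."""
    x1, y1, x2, y2 = frame
    return [row[x1:x2] for row in image[y1:y2]]

# 8x8 dummy image; a detected 4x4 face frame is enlarged 1.5x before cropping,
# so the resulting face image keeps some context around the detected face
image = [[(x, y) for x in range(8)] for y in range(8)]
adjusted = adjust_face_frame((2, 2, 6, 6), scale=1.5, img_w=8, img_h=8)
face_image = crop(image, adjusted)
```

Enlarging the frame before cropping is a common way to retain background context (edges of a photo, mask boundaries) that helps a liveness classifier.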
  • FIG. 6 shows a block diagram of an access control device control apparatus according to an embodiment of the present disclosure.
  • the access control device control device 60 includes:
  • the image acquisition module 61 is used to collect the target image corresponding to the object to be identified that needs to pass through the access control device;
  • the living body recognition module 62 is configured to use the above-mentioned living body recognition method to perform living body recognition on the target image to obtain a living body recognition result;
  • the control module 63 is configured to control the access control device to open when the living body identification result indicates that the object to be identified is a living body.
  • the image acquisition module 61 is specifically configured to collect the target image corresponding to the object to be recognized.
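Putting the three modules together, the control flow (capture the target image, run two-stage liveness recognition, open the access control device only on a living-body result) might look like the following sketch; the recognizer callables are hypothetical stand-ins for the trained first and second living body recognition networks.

```python
def control_access(capture_image, first_liveness, second_liveness, open_gate):
    """Two-stage gate control: the first stage screens out 2D non-living
    bodies (photos, screens); only on a pass does the second stage screen
    out 3D non-living bodies (masks, head models)."""
    target_image = capture_image()
    if not first_liveness(target_image):   # living body vs. 2D non-living body
        return False
    if not second_liveness(target_image):  # living body vs. 3D non-living body
        return False
    open_gate()
    return True

# usage with stubbed components
opened = []
result = control_access(
    capture_image=lambda: "frame",
    first_liveness=lambda img: True,
    second_liveness=lambda img: True,
    open_gate=lambda: opened.append(True),
)
blocked = control_access(
    capture_image=lambda: "frame",
    first_liveness=lambda img: True,
    second_liveness=lambda img: False,   # 3D spoof detected, gate stays shut
    open_gate=lambda: opened.append(True),
)
```

Running the cheaper 2D check first means most spoof attempts are rejected before the second network is ever invoked.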
  • the functions or modules included in the apparatuses provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium.
  • An embodiment of the present disclosure further provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above living body recognition method or the above access control device control method.
  • Embodiments of the present disclosure also provide a computer program product including computer-readable codes; when the computer-readable codes run on a device, a processor in the device executes instructions for implementing the living body recognition method or the access control device control method provided by any of the above embodiments.
  • Embodiments of the present disclosure further provide another computer program product for storing computer-readable instructions, which, when executed, cause the computer to perform the operations of the living body recognition/access control device control method provided by any of the foregoing embodiments.
  • the electronic device may be provided as a terminal, server or other form of device.
  • FIG. 7 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
  • an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
  • the processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
  • the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above.
  • processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
  • processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
  • Memory 804 is configured to store various types of data to support operation at electronic device 800. Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. Memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
  • Power supply assembly 806 provides power to various components of electronic device 800 .
  • Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
  • Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
  • the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
  • the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
  • the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
  • Audio component 810 is configured to output and/or input audio signals.
  • audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in operating modes, such as calling mode, recording mode, and voice recognition mode.
  • the received audio signal may be further stored in memory 804 or transmitted via communication component 816 .
  • audio component 810 also includes a speaker for outputting audio signals.
  • the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
  • Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of electronic device 800 .
  • the sensor assembly 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 can also detect a change in position of the electronic device 800 or a component of the electronic device 800, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in temperature of the electronic device 800.
  • Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
  • Sensor assembly 814 may also include a light sensor, such as a complementary metal oxide semiconductor (CMOS) or charge coupled device (CCD) image sensor, for use in imaging applications.
  • the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
  • Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
  • the electronic device 800 may access a wireless network based on a communication standard, such as wireless network (WiFi), second generation mobile communication technology (2G) or third generation mobile communication technology (3G), or a combination thereof.
  • the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
  • the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
  • the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
  • electronic device 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
  • a non-volatile computer-readable storage medium, such as the memory 804 including computer program instructions executable by the processor 820 of the electronic device 800 to perform the above method, is also provided.
  • FIG. 8 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
  • the electronic device 1900 may be provided as a server.
  • electronic device 1900 includes processing component 1922, which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922, such as applications.
  • An application program stored in memory 1932 may include one or more modules, each corresponding to a set of instructions.
  • the processing component 1922 is configured to execute instructions to perform the above-described methods.
  • the electronic device 1900 may also include a power supply assembly 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input output (I/O) interface 1958 .
  • the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as a Microsoft server operating system (Windows Server TM ), a graphical user interface based operating system (Mac OS X TM ) introduced by Apple, a multi-user multi-process computer operating system (Unix TM ), Free and Open Source Unix-like Operating System (Linux TM ), Open Source Unix-like Operating System (FreeBSD TM ) or the like.
  • a non-volatile computer-readable storage medium such as memory 1932 comprising computer program instructions executable by processing component 1922 of electronic device 1900 to perform the above-described method.
  • the present disclosure may be a system, method and/or computer program product.
  • the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be a volatile storage medium or a non-volatile storage medium.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disc read-only memory (CD-ROM), digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove with instructions stored thereon, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber optic cables), or electrical signals transmitted through electrical wires.
  • the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
  • the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device .
  • Computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
  • custom electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing state information of the computer-readable program instructions, and these circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
  • These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, a programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium on which the instructions are stored includes an article of manufacture comprising instructions for implementing various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process, so that the instructions executing on the computer, other programmable data processing apparatus, or other device implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • the computer program product can be specifically implemented by hardware, software or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

Living body recognition and access control device control methods, apparatus, electronic device, storage medium, and computer program. A method according to the invention comprises the steps of: performing first living body recognition on a target image corresponding to an object to be recognized to obtain a first recognition result, the first living body recognition being used to identify whether the object to be recognized is a living body or a 2D non-living body (S21); and, in the event that the first recognition result indicates that the object to be recognized is a living body, performing second living body recognition on the target image to obtain a second recognition result, the second living body recognition being used to identify whether the object to be recognized is a living body or a 3D non-living body (S22).
PCT/CN2021/086219 2020-11-10 2021-04-09 Procédés de commande de dispositif d'identification de vitalité et de contrôle d'accès, appareil, dispositif électronique, support de stockage, et programme informatique WO2022099989A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011248985.1A CN112270288A (zh) 2020-11-10 2020-11-10 活体识别、门禁设备控制方法和装置、电子设备
CN202011248985.1 2020-11-10

Publications (1)

Publication Number Publication Date
WO2022099989A1 true WO2022099989A1 (fr) 2022-05-19

Family

ID=74339723

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/086219 WO2022099989A1 (fr) 2020-11-10 2021-04-09 Procédés de commande de dispositif d'identification de vitalité et de contrôle d'accès, appareil, dispositif électronique, support de stockage, et programme informatique

Country Status (2)

Country Link
CN (1) CN112270288A (fr)
WO (1) WO2022099989A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112270288A (zh) * 2020-11-10 2021-01-26 深圳市商汤科技有限公司 活体识别、门禁设备控制方法和装置、电子设备
CN113657154A (zh) * 2021-07-08 2021-11-16 浙江大华技术股份有限公司 活体检测方法、装置、电子装置和存储介质
CN115690918A (zh) * 2021-07-22 2023-02-03 京东科技控股股份有限公司 构建活体识别模型和活体识别的方法、装置、设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034102A (zh) * 2018-08-14 2018-12-18 腾讯科技(深圳)有限公司 人脸活体检测方法、装置、设备及存储介质
CN110909693A (zh) * 2019-11-27 2020-03-24 深圳市华付信息技术有限公司 3d人脸活体检测方法、装置、计算机设备及存储介质
US20200257914A1 (en) * 2017-11-20 2020-08-13 Tencent Technology (Shenzhen) Company Limited Living body recognition method, storage medium, and computer device
CN111582045A (zh) * 2020-04-15 2020-08-25 深圳市爱深盈通信息技术有限公司 一种活体的检测方法、装置以及电子设备
CN112270288A (zh) * 2020-11-10 2021-01-26 深圳市商汤科技有限公司 活体识别、门禁设备控制方法和装置、电子设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203235B (zh) * 2015-04-30 2020-06-30 腾讯科技(深圳)有限公司 活体鉴别方法和装置
CN109784148A (zh) * 2018-12-06 2019-05-21 北京飞搜科技有限公司 活体检测方法及装置
CN111860055B (zh) * 2019-04-29 2023-10-24 北京眼神智能科技有限公司 人脸静默活体检测方法、装置、可读存储介质及设备


Also Published As

Publication number Publication date
CN112270288A (zh) 2021-01-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21890524

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 23/08/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21890524

Country of ref document: EP

Kind code of ref document: A1