CN110008813B - Face recognition method and system based on living body detection technology - Google Patents

Info

Publication number: CN110008813B
Application number: CN201910066681.4A
Authority: CN (China)
Prior art keywords: facial, features, thermal imaging, feature, visible light
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110008813A
Inventors: 丁菁汀, 傅秉涛, 李亮
Current Assignee: Advanced New Technologies Co Ltd
Original Assignee: Advanced New Technologies Co Ltd
Application CN201910066681.4A filed by Advanced New Technologies Co Ltd
Publication of application CN110008813A; application granted; publication of grant CN110008813B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/172 Classification, e.g. identification
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The disclosure provides a face recognition method and a face recognition system based on living body detection technology. The method comprises the following steps: analyzing an application scenario; selecting, based on the application scenario, at least one multi-modal living body detection technique that includes at least a facial thermal imaging technique; receiving a facial thermal imaging image and a facial photograph; extracting and identifying facial thermal imaging features based on the facial thermal imaging image; extracting and identifying facial visible light imaging features based on the facial photograph; determining whether the facial thermal imaging features and the facial visible light imaging features match; and if the facial thermal imaging features and the facial visible light imaging features match, face recognition succeeds. The face recognition method based on living body detection technology of the present disclosure can be implemented using multi-modal living body detection techniques.

Description

Face recognition method and system based on living body detection technology
Technical Field
The present disclosure relates generally to identity recognition, and more particularly to protection against prosthetic attacks.
Background
In the technical field of identity recognition, a face recognition system uses the unique biological features of a face as the basis for identifying identity. In practical applications, the face, as an openly exposed biological feature, can easily be captured by a third party in the form of a prosthesis such as a photo or a video, which is then used to mount an impersonation attack. The prosthesis may be a picture, a video, or the like.
A living body is the genuinely existing, biologically active modality exhibited by a natural person. A prosthesis is a fake sample that is made by imitating the biological characteristics of a living body, bears a certain similarity to it, has no biological activity, and is used to simulate the corresponding living body.
At present, face recognition systems are gradually being commercialized and are trending toward automatic, unsupervised operation. How to automatically and efficiently distinguish genuine images from forgeries and resist fraud attacks so as to ensure system security has therefore become an urgent problem for face recognition technology.
In general, living body detection determines whether biometric information is being captured from a legitimate user who is physically present as a living body. Living body detection mainly works by identifying physiological information of the living body and using it as a vital sign to distinguish genuine biometric features from those forged with non-living material such as photos, silica gel, or plastic.
In addition to "recognizing the person", a face recognition system also needs to "recognize that the person is real": the system must verify not only that the presented face belongs to the claimed person, but also that it is the face of a living body rather than a picture or a video. In application scenarios such as financial payment (especially unmanned supermarkets, automatic tellers, and the like) and access control, it is critical to judge whether the captured face is a real face or a fake-face attack.
Thus, in the art, living body detection is used to verify that the user is a real, present person. Accordingly, there is a need in the art for a method and system that can effectively incorporate living body detection techniques into identity recognition.
Disclosure of Invention
To solve the above technical problems, the present disclosure provides a face recognition scheme based on a living body detection technology, which applies the living body detection technology to effectively prevent a prosthesis attack.
According to an embodiment of the present disclosure, there is provided a face recognition method based on thermal imaging technology, including: acquiring a real-time facial thermal imaging image and a facial photograph; extracting and identifying facial thermal imaging features based on the facial real-time thermal imaging image; extracting and identifying facial visible light imaging features based on the facial photograph; determining whether the facial thermal imaging features and the facial visible light imaging features match; if the facial thermal imaging features and the facial visible light imaging features match, face recognition succeeds; and if the facial thermal imaging features and the facial visible light imaging features do not match, face recognition fails.
In an embodiment of the present disclosure, the real-time thermal imaging image of the face is acquired in real-time by far-infrared face recognition technology based on a temperature sensing device.
In another embodiment of the present disclosure, the facial photograph is a static single or multi-frame RGB image obtained from a database.
In yet another embodiment of the present disclosure, the facial photograph is a single or multiple frame RGB image that is dynamically acquired in real-time.
In an embodiment of the present disclosure, the facial thermal imaging features and facial visible light imaging features include global features and local features of the face, respectively.
In another embodiment of the present disclosure, the facial thermal imaging features include vascularity features.
According to an embodiment of the present disclosure, there is provided a face recognition method based on living body detection technology, including: analyzing an application scenario; selecting at least one multi-modal living body detection technique based on the application scenario, the multi-modal living body detection technique comprising at least a facial thermal imaging technique; acquiring a real-time facial thermal imaging image and a facial photograph; extracting and identifying facial thermal imaging features based on the facial real-time thermal imaging image; extracting and identifying facial visible light imaging features based on the facial photograph; determining whether the facial thermal imaging features and the facial visible light imaging features match; if the facial thermal imaging features and the facial visible light imaging features match, face recognition succeeds; and if the facial thermal imaging features and the facial visible light imaging features do not match, face recognition fails.
In an embodiment of the present disclosure, the multi-modal living body detection techniques further include an interactive motion living body detection technique, a three-dimensional image acquisition technique, and a near infrared living body detection technique.
In one embodiment of the present disclosure, selecting at least one multi-modal living body detection technique based on the application scenario includes: in addition to selecting the facial thermal imaging technique, further selecting one or more of an interactive motion living body detection technique, a three-dimensional image acquisition technique, and a near infrared living body detection technique.
In another embodiment of the present disclosure, the multi-modal living body detection technique is selected based on the lighting conditions of the application scenario.
In yet another embodiment of the present disclosure, the multi-modal living body detection technique is selected based on the security requirements of the application scenario.
In one embodiment of the present disclosure, facial thermal imaging images are acquired in real time through far infrared face recognition based on temperature sensing devices.
In another embodiment of the present disclosure, the facial photograph is a static single or multi-frame RGB image obtained from a database.
In yet another embodiment of the present disclosure, the facial photograph is a single or multiple frame RGB image that is dynamically acquired in real-time.
According to an embodiment of the present disclosure, there is provided a face recognition system based on thermal imaging technology, including:
The receiving module is used for receiving the real-time facial thermal imaging image and the facial photo;
an extraction module for: extracting and identifying facial thermal imaging features based on the facial real-time thermal imaging image, and extracting and identifying facial visible light imaging features based on the facial photograph; and
an analysis module for: determining whether the facial thermal imaging features and the facial visible light imaging features match; if the facial thermal imaging features and the facial visible light imaging features are matched, the face recognition is successful; and if the facial thermal imaging features and the facial visible light imaging features do not match, face recognition fails.
According to an embodiment of the present disclosure, there is provided a face recognition system based on a living body detection technique, including:
a selection module for analyzing the application scene and selecting at least one multi-modal living body detection technique based on the application scene, the multi-modal living body detection technique including at least a facial thermal imaging technique;
the receiving module is used for receiving the real-time facial thermal imaging image and the facial photo;
an extraction module for extracting and identifying facial thermal imaging features based on the facial real-time thermal imaging image, and extracting and identifying facial visible light imaging features based on the facial photograph;
An analysis module for determining whether the facial thermal imaging features and the facial visible light imaging features match, and if the facial thermal imaging features and the facial visible light imaging features match, face recognition is successful; if the facial thermal imaging features and the facial visible light imaging features do not match, face recognition fails.
The face recognition method and system based on living body detection technology of the present disclosure can effectively prevent prosthesis attacks, especially in the field of financial payment. The method can be implemented using multi-modal living body detection techniques. Notably, the infrared living body detection technology incorporated in the present disclosure frees the technical solution from the limitations of illumination conditions, so that the application scenarios can be extended to completely dark scenes or harsh natural environments. The visible light imaging technology incorporated in the present disclosure is a visible-light-based living body detection technology for resisting prosthesis attacks. As will be appreciated by those skilled in the art, as face recognition technologies develop and diversify, the applications of the living body detection based face recognition method of the present disclosure will also diversify.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Drawings
The foregoing summary of the disclosure and the following detailed description will be better understood when read in conjunction with the accompanying drawings. It is to be noted that the drawings are merely examples of the claimed invention. In the drawings, like reference numbers indicate identical or similar elements.
Fig. 1 shows an example of a cash register incorporating a thermal imaging camera and its acquired grey-scale map.
Fig. 2 shows a flowchart of a face recognition method based on thermal imaging technology according to an embodiment of the present disclosure.
Fig. 3 shows a flowchart of a multi-modal face recognition method based on a liveness detection technique according to another embodiment of the present disclosure.
Fig. 4 illustrates a block diagram of a thermal imaging technology based face recognition system according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of a face recognition system based on a living body detection technique according to another embodiment of the present disclosure.
Detailed Description
In order to make the above objects, features and advantages of the present disclosure more comprehensible, embodiments accompanied with figures are described in detail below.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein, and thus the present disclosure is not limited to the specific embodiments disclosed below.
The present disclosure provides a face recognition method and system based on a living body detection technology.
Biometric recognition technology authenticates identity using human biological characteristics. Compared with traditional identity authentication methods, which rely on identification objects (such as keys, certificates, and ATM cards) and identification knowledge (such as user names and passwords), biometric recognition is safer, more confidential, and more convenient. Biometric characteristics cannot be forgotten, are difficult to counterfeit or steal, and are carried with the person and therefore available anywhere and at any time.
Many kinds of biometric technologies have appeared today, such as fingerprint recognition, palm print (palm geometry) recognition, iris recognition, face recognition, voice recognition, signature recognition, and gene recognition.
The face recognition method and system based on living body detection technology of the present disclosure use living body detection to guard against prostheses, so that illegal individuals or institutions cannot use fake faces to make financial payments. The living body detection technique may be incorporated into different hardware devices, in particular payment devices such as ATM machines, POS machines, personal computers, and handheld devices. Those skilled in the art will appreciate that the face recognition method and system based on living body detection technology of the present disclosure may be incorporated into other hardware devices, as long as the hardware device is capable of incorporating face recognition techniques. The face recognition method and system based on living body detection technology of the present disclosure may be applied by a service institution (such as a payee or other third-party institution). In various embodiments of the present disclosure, a payee is described as a specific example, but it will be understood by those skilled in the art that the face recognition method and system based on living body detection technology of the present disclosure may be applied by different institutions or individuals and in different scenarios.
Face recognition method based on living body detection technology
In current face recognition technology, the prosthesis attacks that living body detection needs to guard against include, for example, photo attacks and video attacks. Photo attacks are usually countered with interactive motion living body detection, in which several motion instructions are issued and the user is required to perform the corresponding motions. Video attacks are usually countered with color-texture analysis, exploiting the fact that replayed video frames have lower picture quality and more distortion than a real person.
The above living body detection techniques work with visible light, and their performance is affected by a series of factors such as illumination (e.g., day and night, indoor and outdoor), occlusion, and makeup, and may also vary with changes in expression, posture, hairstyle, and the like; these effects and changes are relatively difficult to model, describe, and analyze.
By incorporating face recognition technology based on infrared images, the technical solution of the present disclosure performs face recognition with multi-modal living body detection techniques adapted to different scenarios. Face recognition based on infrared images does not depend on a visible light source, avoids the influence of illumination, and is suitable for preventing various prosthesis attacks.
Face recognition technology based on infrared images is divided into near infrared (wavelength 0.7-1.0 μm) face recognition and far infrared (wavelength 8-1000 μm) face recognition. In near infrared face recognition, a near infrared light-emitting diode whose intensity is higher than the ambient light is mounted on the camera to guarantee illumination, and the camera uses a long-pass filter that lets near infrared light through while filtering out visible light, thereby obtaining a near infrared face image that is independent of the environment. The near infrared face image changes only monotonically with the distance between the person and the camera. Near infrared face recognition can therefore greatly reduce the influence of ambient illumination on the image.
Unlike near infrared face recognition, far infrared face recognition is imaged by acquiring thermal radiation emitted from the face. Far infrared images are imaged based on the temperature of the target, also known as thermograms. The facial thermogram is determined by infrared thermal radiation of facial tissues and structures (such as blood vessel size and distribution), and is unique because the blood vessel distribution (venous and arterial distribution of the face) of each person is unique, non-reproducible, and does not change with age.
Acquisition of thermograms may be accomplished by various temperature sensing devices, including thermal imagers (such as far infrared cameras), thermopiles, thermometers, and the like. The signals output by these temperature sensing devices come in a variety of forms, including dense thermal imaging images, point signals, and so on, which will be collectively referred to hereinafter as facial thermal imaging images.
Fig. 1 shows an example of a cash register incorporating a thermal imaging camera and its acquired grey-scale map. A cash register (as shown on the left side of fig. 1) implanted with a thermal imaging camera is one example of a temperature sensing device that can acquire a thermogram.
In the application scenario of an unmanned supermarket, where payment is completed by the user in a self-service manner, a cash register implanted with a thermal imaging camera can be used to perform face recognition that resists prosthesis attacks under weak light at night or under weak light caused by severe daytime weather.
Specifically, when a user checks out in front of a cash register in which a thermal imaging camera is implanted, the cash register may capture a thermal imaging image of the user's face, as shown on the right side of fig. 1. As the gray-scale map shows, the distribution of gray (heat) levels associated with the face differs significantly from the distribution of heat levels in the background, which facilitates background removal. This is because the thermal emissivity of human facial skin is clearly distinguishable from that of the surrounding scene.
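To make the background-removal step concrete, a minimal sketch follows. It assumes the thermal image arrives as a 2-D NumPy gray-scale array in which warmer regions appear brighter, and it separates the face candidate from the cooler background with a simple Otsu-style global threshold; the function name, array format, and threshold choice are illustrative assumptions rather than part of the patented method.

```python
import numpy as np

def segment_face_region(thermal_gray: np.ndarray) -> np.ndarray:
    """Separate the warm face region from the cooler background.

    thermal_gray: 2-D uint8 array where brighter pixels correspond to
    higher radiated heat. Returns a boolean mask of the candidate face
    region. Illustrative sketch only: it applies an Otsu-style threshold
    to the gray-level histogram and keeps the warmer side.
    """
    hist, _ = np.histogram(thermal_gray, bins=256, range=(0, 256))
    total = thermal_gray.size
    sum_all = float(np.dot(np.arange(256), hist))
    best_t, best_var = 0, 0.0
    w_bg, sum_bg = 0, 0.0
    for t in range(256):
        w_bg += int(hist[t])
        if w_bg == 0 or w_bg == total:
            continue
        sum_bg += t * int(hist[t])
        w_fg = total - w_bg
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        between_var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return thermal_gray > best_t  # warmer (brighter) pixels form the face candidate
```

In practice an off-the-shelf routine (for example, OpenCV's Otsu thresholding) could replace the hand-written loop; the point is only that the heat contrast makes the separation straightforward.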
Similarly, in access control scenarios with higher security requirements, a thermal imaging camera can also be installed at the access control point so that face recognition capable of resisting prosthesis attacks can be performed under weak light at night or under weak light caused by severe daytime weather.
In ATM scenarios with higher security requirements, other living body detection technologies, such as interactive living body detection and three-dimensional image acquisition, can be incorporated alongside facial thermal imaging to perform fused face recognition. That is, by exploiting the different characteristics and the complementarity of thermal imaging face recognition and visible light face recognition, the classification and recognition results of the different face recognition methods are fused, thereby improving the performance and recognition rate of face recognition.
Fig. 2 illustrates a flow chart 200 of a thermal imaging technology based face recognition method according to an embodiment of the present disclosure.
At 202, a real-time thermal imaging image of a face and a facial photograph are acquired.
As described above, the facial thermal imaging image can be obtained in real time through far infrared face recognition based on the temperature sensing device.
The facial photograph may be stored in a database, such as an identification card photograph or passport photograph associated with an electronic account or a bank account. The facial photograph may also be taken in real time, such as a facial photograph captured on the spot, a multi-frame facial photograph accompanying an interactive action, a depth image obtained with a three-dimensional camera, and so forth.
The facial photos may be static or dynamic. The facial photograph may also be a plurality of continuous or discrete video frames.
The facial photograph may be obtained by a conventional camera. It may also be obtained through visible-light-based living body detection techniques, for example as multi-frame RGB images in an interactive motion living body detection technique, or as three-dimensional images obtained through a 3D image acquisition technique (e.g., a multi-view stereoscopic vision system), and so forth.
It can be appreciated that when the application scenario involves a relatively complex background, the acquired facial thermal imaging image and facial photograph generally require image preprocessing: a face model is first built (for example, with statistical-feature-based or knowledge-modeling-based methods), and the degree of match between candidate regions and the face model is compared in order to locate possible face regions. Face detection and positioning are not described in detail in the present disclosure; instead, image features are extracted and identified directly from the acquired facial thermal imaging image and facial photograph.
At 204, facial thermographic features are extracted and identified based on the facial thermographic image.
Extraction and identification of facial thermographic features can be accomplished in a variety of ways, such as isotherm matching methods, blood flow graph based methods, physiological structure based methods, traditional statistical identification based methods (principal component analysis PCA, linear discriminant analysis LDA, independent component analysis ICA, etc.), and nonlinear feature subspace based methods.
Taking the isotherm matching method as an example, facial isotherm features are extracted. Facial isotherms essentially reflect the vascular distribution under the skin of a human face. Isotherm regions can be extracted with a standard template, the shape of each isotherm analyzed with geometric methods, the analysis results together with the centroid of the face image taken as features, and the isotherms represented with a fractal method.
From the facial isotherm features, global and local features of the face can be extracted. For facial thermal imaging images, global features describe the main feature information, including overall information such as the contour and the distribution of facial organs. Local features describe the detailed characteristics of the face, such as organ characteristics and facial singular features like scars, moles, and dimples. In facial thermal imaging images, facial singular features such as scars and dimples can be extracted together with vascularity (e.g., vessel intersection) information. Global features are used for coarse matching and local features for fine matching.
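As a toy illustration of isotherm-style global features, the sketch below thresholds the thermal image at several gray levels, treats each hotter-than-level region as an isotherm region, and records its relative area and centroid; the chosen levels and descriptor layout are illustrative assumptions, not the patent's prescribed feature set.

```python
import numpy as np

def isotherm_descriptor(thermal_gray: np.ndarray,
                        levels=(96, 128, 160, 192)) -> np.ndarray:
    """Crude isotherm-based global descriptor (illustrative only).

    For each gray level, the region hotter than that level is taken as an
    isotherm region; its relative area and normalized centroid are
    concatenated into a small feature vector usable for coarse matching.
    """
    h, w = thermal_gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = []
    for level in levels:
        region = thermal_gray >= level
        area = int(region.sum())
        if area == 0:
            feats.extend([0.0, 0.0, 0.0])
            continue
        feats.extend([area / (h * w),
                      float(ys[region].mean()) / h,   # centroid row (normalized)
                      float(xs[region].mean()) / w])  # centroid column (normalized)
    return np.asarray(feats, dtype=np.float32)
```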
Those skilled in the art will appreciate that different methods may be employed to extract and identify facial thermal imaging features for different application scenarios.
At 206, facial visible light imaging features are extracted and identified based on the facial photographs.
Common visible light image features are color features, texture features, shape features, and spatial relationship features.
The color feature is a global, pixel-based feature to which every pixel of the image or image region contributes. Texture features are also global features, computed statistically over regions containing multiple pixels. Shape features fall into two classes: contour features and region features; contour features mainly describe the outer boundary of an object, whereas region features relate to the entire shape region. Spatial relationship features describe the mutual spatial positions or relative directions of the objects segmented from an image, and these relationships can be classified into connection/adjacency, overlap, inclusion/containment, and so on. Spatial relationship features can strengthen the ability to describe and discriminate image content, but they are often sensitive to rotation, inversion, scale change, and the like.
Recognition of a person's face, however, differs from general image recognition in that it is based specifically on facial features. Global features of the face describe the main feature information, including overall information such as skin color, contour, and the distribution of facial organs; local features describe the detailed characteristics of the face, such as organ characteristics and facial singular features like scars, moles, and dimples. The former are used for coarse matching and the latter for fine matching.
Extraction and recognition of facial visible light imaging features can be achieved in a variety of ways, such as fixed template matching based on geometric features, recognition methods based on algebraic features (e.g., pattern recognition based on the K-L transform, or the Fisher linear discriminant algorithm), and neural network learning methods based on connectionist mechanisms (e.g., a PCA+NN algorithm), among others.
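For the algebraic-feature route mentioned above (a K-L transform, i.e. eigenface-style projection), a minimal sketch is given below. It assumes aligned, equally sized gray-scale face crops and uses scikit-learn's PCA as the projector; the library choice, component count, and function names are implementation assumptions, not something the patent prescribes.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_eigenface_projector(face_crops: np.ndarray, n_components: int = 64) -> PCA:
    """Fit a K-L transform (eigenface) projector on aligned face crops.

    face_crops: array of shape (n_samples, height, width), gray-scale.
    n_components must not exceed min(n_samples, height * width).
    Returns a fitted PCA whose transform() yields the algebraic
    visible-light feature vector used for matching.
    """
    flat = face_crops.reshape(face_crops.shape[0], -1).astype(np.float64)
    return PCA(n_components=n_components, whiten=True).fit(flat)

def visible_light_features(projector: PCA, face_crop: np.ndarray) -> np.ndarray:
    """Project a single aligned face crop into the learned feature space."""
    return projector.transform(face_crop.reshape(1, -1).astype(np.float64))[0]
```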
For extraction and recognition of different facial image features, different methods may be employed, and combinations of different features may be extracted in different scenes. Those skilled in the art will appreciate that different features may be extracted and combined appropriately for different application scenarios.
At 208, a determination is made as to whether the facial thermal imaging features and facial visible light imaging features match.
In one embodiment of the present disclosure, when determining whether the facial thermal imaging features and facial visible light imaging features match, the global features and local features of the face may be integrated and reduced in dimensionality (i.e., image elements are projected into a low-dimensional space using linear or non-linear processing methods), and separate global and local classifiers are built.
The facial thermal imaging features obtained from the facial thermal imaging image and the facial visible light imaging features obtained from the facial photograph are sorted into global features and local features, fed into the corresponding global and local classifiers, and the similarities output by the classifiers are weighted and summed to obtain a final similarity.
If the final similarity is high, the facial thermal imaging features and facial visible light imaging features are judged to match; if it is low, they are judged not to match. It will be appreciated that a similarity threshold may be set: above the threshold the features are determined to match, and below it they are determined not to match.
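A minimal sketch of the weighted-similarity fusion just described follows. It assumes each classifier already returns a similarity in [0, 1] between the thermal-image features and the visible-light features at its level; the weight values and the 0.8 threshold are illustrative assumptions.

```python
from typing import Dict

def fuse_similarities(similarities: Dict[str, float],
                      weights: Dict[str, float],
                      threshold: float = 0.8) -> bool:
    """Weighted sum of per-classifier similarities, then a threshold test.

    similarities: e.g. {"global": 0.91, "local": 0.74}
    weights:      e.g. {"global": 0.6,  "local": 0.4}, assumed to sum to 1.
    Returns True (features match, recognition succeeds) when the fused
    similarity reaches the threshold, False otherwise.
    """
    fused = sum(weights[name] * sim for name, sim in similarities.items())
    return fused >= threshold

# Example: strong coarse (global) match, moderate fine (local) match:
# fuse_similarities({"global": 0.91, "local": 0.74},
#                   {"global": 0.6, "local": 0.4})
# fused = 0.6 * 0.91 + 0.4 * 0.74 = 0.842 >= 0.8, so the features match.
```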
In another embodiment of the present disclosure, a non-linear feature subspace-based approach may be employed to determine whether facial thermal imaging features and facial visible light imaging features match.
First, samples are mapped into a feature space using a kernel function, PCA is performed in that feature space, and a kernel feature subspace is solved for each face class. The projection length of the face sample to be identified is then computed in the kernel feature subspace of each class; the larger the projection length, the smaller the distance between the sample and that feature subspace. The face sample to be identified is finally classified and identified using the nearest-neighbor criterion.
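The kernel feature subspace variant can be sketched as below. One kernel subspace is fitted per enrolled face class with scikit-learn's KernelPCA, and a probe is assigned to the class giving the largest projection length (equivalently, the smallest subspace distance). KernelPCA, the RBF kernel, and the parameter values are implementation assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def fit_class_subspaces(samples_by_class, n_components=8, gamma=1e-4):
    """Fit one kernel feature subspace per face class (illustrative sketch).

    samples_by_class: dict mapping class label -> array (n_i, n_features);
    n_components must not exceed the smallest per-class sample count n_i.
    """
    return {label: KernelPCA(n_components=n_components,
                             kernel="rbf", gamma=gamma).fit(samples)
            for label, samples in samples_by_class.items()}

def classify_by_projection_length(subspaces, probe: np.ndarray):
    """Assign the probe to the class whose kernel subspace yields the
    longest projection, i.e. the smallest distance to that subspace."""
    probe = probe.reshape(1, -1)
    lengths = {label: float(np.linalg.norm(kpca.transform(probe)))
               for label, kpca in subspaces.items()}
    best = max(lengths, key=lengths.get)  # nearest-neighbor criterion on projection length
    return best, lengths
```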
Those skilled in the art will appreciate that the extracted features may be compared in different ways for different application scenarios.
At 210, face recognition is successful if the facial thermal imaging features and facial visible light imaging features match. If the facial thermal imaging features and facial visible light imaging features do not match, face recognition fails.
If the facial thermal imaging features and facial visible light imaging features match, then the person identified is the person who holds the legal document, is entitled to enter, or owns the electronic or bank account, and is a "real person".
If the facial thermal imaging features and facial visible light imaging features do not match, then what is identified is either someone who is not the person holding the legal document, entitled to enter, or owning the electronic or bank account, or a "prosthesis".
The thermal-imaging-based face recognition method of the present disclosure is in fact a multi-modal face recognition method based on living body detection technology: it combines far infrared thermal imaging with visible light imaging. In one embodiment of the present disclosure, the visible light imaging technique may itself be a visible-light-based living body detection technique used to resist prosthesis attacks, for example multi-frame RGB images from interactive motion living body detection, or three-dimensional images obtained through a 3D image acquisition technique (e.g., a multi-view stereoscopic vision system). Combining the two greatly improves the success rate of resisting prosthesis attacks.
Fig. 3 illustrates a flowchart 300 of a multi-modal face recognition method based on a liveness detection technique in accordance with another embodiment of the present disclosure.
At 302, an application scenario is analyzed.
Different application scenarios may have different lighting conditions, different security requirements, and different device configurations, etc. The lighting conditions may vary from scene to scene, and for a 24 hour business scene it may be desirable to employ detection techniques that are independent or less dependent on lighting. Applications that may be in completely dark scenes or in harsh natural environments require detection techniques that do not rely on illumination.
Security requirements may also vary from scene to scene. In the application scenario of an unmanned supermarket, for example, since no sales staff or cashiers are present, many kinds of prosthesis attacks may need to be dealt with, so the security requirements are naturally higher than in a staffed place of business. The device configuration may depend on budget, the intended users, the intended hours of use, and so on.
At 304, at least one multi-modal living body detection technique is selected based on the application scenario, the multi-modal living body detection technique including at least a facial thermal imaging technique.
In an embodiment of the present disclosure, in the application scenario of an unmanned supermarket, given that no sales staff or cashiers are present, there may be more kinds of prosthesis attacks to deal with. Therefore, at least the facial thermal imaging technique may be selected, and at least one of the interactive motion living body detection technique, the three-dimensional image acquisition technique, the near infrared living body detection technique, and the like may be further selected and combined. One such combination is to select one of the interactive motion living body detection technique and the three-dimensional image acquisition technique, together with the near infrared living body detection technique, to acquire images.
Those skilled in the art will appreciate that a decision maker (e.g., an investor or operator) may make other selections and combinations, such as selecting a combination of three or even four techniques for multi-modal living body detection.
In another embodiment of the present disclosure, in an application scenario with higher security requirements (such as identity authentication for a high-end conference), at least the facial thermal imaging technique may likewise be selected, and at least one of the interactive motion living body detection technique, the three-dimensional image acquisition technique, the near infrared living body detection technique, and the like may be further selected and combined.
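One way to make the scenario-driven selection concrete is a small rule table, as in the sketch below. The scenario attributes (lighting, security level) and the particular technique combinations are illustrative assumptions about how such a selection step might be configured, not rules fixed by the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Scenario:
    lighting: str   # "normal", "low", or "dark"
    security: str   # "standard" or "high"

def select_liveness_techniques(scene: Scenario) -> List[str]:
    """Illustrative selection rules; facial thermal imaging is always kept."""
    techniques = ["facial_thermal_imaging"]
    if scene.lighting in ("low", "dark"):
        techniques.append("near_infrared_liveness_detection")        # illumination independent
    else:
        techniques.append("interactive_motion_liveness_detection")   # visible-light based
    if scene.security == "high":
        techniques.append("three_dimensional_image_acquisition")
    return techniques

# e.g. an unmanned supermarket at night:
# select_liveness_techniques(Scenario(lighting="dark", security="high"))
# -> ["facial_thermal_imaging", "near_infrared_liveness_detection",
#     "three_dimensional_image_acquisition"]
```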
It will be appreciated that the above multi-modal living body detection relies on image fusion techniques. Image fusion is an effective value-adding technology: redundant information is used to improve the reliability of interpretation and the robustness of the system, while complementary information is used to enhance the useful information in the image and improve system performance in terms of resolution, coverage, response time, and confidence.
Image fusion may be performed at multiple levels, namely pixel level fusion, feature level fusion and decision level fusion.
Pixel-level fusion is the lowest-level fusion process: the raw image data from each sensor are fused, and feature extraction and attribute decisions are then performed on the fused image data. However, pixel-level fusion usually requires a certain similarity between the image data to be fused, demands strict registration between the images (the fusion result is sensitive to misregistration), and is the level most severely affected by noise (or interference) in the data.
Feature-level fusion belongs to the middle level. After pre-detection, segmentation, and feature extraction, and on the premise that the detections of the individual sensors are mutually independent, the extracted features are combined in a common decision space, and the selected targets are then optimally classified based on the combined feature vectors. Feature-level fusion is mainly used for fusion between images from heterogeneous sensors. Because extracting feature vectors from the original images introduces information loss, the accuracy of the fusion result is reduced to some extent.
Decision-level fusion is high-level information fusion and represents a data-increment approach: each sensor first makes an independent decision based on its own image data, and the decisions are then combined into a final decision. This enhances the interpretability of the images and allows the observed target to be better understood. The accuracy of the fusion result is the lowest at this level, but the method is better suited to fusing sensor image data with large differences in characteristics, such as the fusion of visible light images with infrared images, or the fusion of image data with non-image data.
Those skilled in the art will appreciate that the multi-modal living body detection of the present disclosure may employ feature-level fusion or decision-level fusion. Those skilled in the art will also appreciate that, in addition to selecting and combining the number of techniques employed, the outputs may be weighted differently.
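A decision-level fusion of several modalities, as discussed above, could look like the minimal sketch below: each modality contributes an independent live/spoof decision, and the decisions are combined by a weighted vote. The modality names, weights, and acceptance ratio are illustrative assumptions.

```python
from typing import Dict

def decision_level_fusion(decisions: Dict[str, bool],
                          weights: Dict[str, float],
                          accept_ratio: float = 0.5) -> bool:
    """Combine independent live/spoof decisions from several modalities.

    decisions: modality -> True if that modality judged the probe to be live.
    weights:   modality -> relative reliability weight.
    The probe is accepted as live when the weighted vote for "live" exceeds
    accept_ratio of the total weight. Illustrative sketch only.
    """
    total = sum(weights.values())
    live_weight = sum(w for modality, w in weights.items() if decisions[modality])
    return live_weight / total > accept_ratio

# Example: thermal imaging and 3D acquisition say "live", the interactive
# action check failed; with weights 0.5 / 0.3 / 0.2 the weighted vote is
# (0.5 + 0.3) / 1.0 = 0.8 > 0.5, so the probe is accepted as live.
```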
At 306, a real-time thermal imaging image of the face and a photograph of the face are received.
As described above, the facial thermal imaging image can be obtained in real time through far infrared face recognition based on the temperature sensing device.
The facial photograph may be stored in a database, such as an identification card photograph or passport photograph associated with an electronic account or a bank account. The facial photograph may also be taken in real time, such as a facial photograph captured on the spot, a multi-frame facial photograph accompanying an interactive action, a depth image obtained with a three-dimensional camera, and so forth.
The facial photos may be static or dynamic. The facial photograph may also be a plurality of continuous or discrete video frames.
The facial photograph may be obtained by a conventional camera. It may also be obtained through visible-light-based living body detection techniques, for example as multi-frame RGB images in an interactive motion living body detection technique, or as three-dimensional images obtained through a 3D image acquisition technique (e.g., a multi-view stereoscopic vision system), and so forth.
It can be appreciated that when the application scenario involves a relatively complex background, the acquired facial thermal imaging image and facial photograph generally require image preprocessing: a face model is first built (for example, with statistical-feature-based or knowledge-modeling-based methods), and the degree of match between candidate regions and the face model is compared in order to locate possible face regions. Face detection and positioning are not described in detail in the present disclosure; instead, image features are extracted and identified directly from the acquired facial thermal imaging image and facial photograph.
At 308, facial thermographic features are extracted and identified based on the facial thermographic image.
Global features and local features of a face may be extracted from the facial thermal imaging image. For facial thermal imaging images, global features describe the main feature information, including overall information such as the contour and the distribution of facial organs. Local features describe the detailed characteristics of the face, such as organ characteristics and facial singular features like scars, moles, and dimples. In facial thermal imaging images, facial singular features such as scars and dimples can be extracted together with vascularity (e.g., vessel intersection) information. Global features are used for coarse matching and local features for fine matching.
Those skilled in the art will appreciate that different methods may be employed to extract and identify facial thermal imaging features for different application scenarios.
At 310, facial visible light imaging features are extracted and identified based on the facial photographs.
At 312, a determination is made as to whether the facial thermal imaging features and facial visible light imaging features match.
At 314, if the facial thermal imaging features and facial visible light imaging features match, face recognition succeeds. If the facial thermal imaging features and facial visible light imaging features do not match, face recognition fails.
If the facial thermal imaging features and facial visible light imaging features match, then the person identified is the person who holds the legal document, is entitled to enter, or owns the electronic or bank account, and is a "real person".
If the facial thermal imaging features and facial visible light imaging features do not match, then what is identified is either someone who is not the person holding the legal document, entitled to enter, or owning the electronic or bank account, or a "prosthesis".
The face recognition method based on living body detection technology of the present disclosure can be implemented using multi-modal living body detection techniques. Notably, the thermal imaging living body detection technology incorporated in the present disclosure frees the technical solution from the limitations of illumination conditions, so that the application scenarios can be extended to completely dark scenes or harsh natural environments. The visible light imaging techniques incorporated in the present disclosure may be visible-light-based living body detection techniques for resisting prosthesis attacks. As will be appreciated by those skilled in the art, as face recognition technologies develop and diversify, the applications of the living body detection based face recognition method of the present disclosure will also diversify.
Face recognition system based on living body detection technology
Fig. 4 illustrates a block diagram 400 of a thermal imaging technology based face recognition system in accordance with an embodiment of the present disclosure.
The receiving module 402 receives a real-time thermal imaging image of a face and a photograph of the face.
The extraction module 404 extracts and identifies facial thermal imaging features based on the facial real-time thermal imaging images and facial visible light imaging features based on the facial photographs.
The analysis module 406 determines whether the facial thermal imaging features and facial visible light imaging features match.
In one embodiment of the present disclosure, when the analysis module 406 determines whether the facial thermal imaging features and facial visible light imaging features match, the analysis module 406 may integrate the global features and local features of the face and reduce the dimensions, building different global and local classifiers.
The analysis module 406 sorts the facial thermal imaging features obtained from the facial real-time thermal imaging image and the facial visible light imaging features obtained from the facial photograph into global features and local features, feeds them into the corresponding global and local classifiers, and weights and sums the similarities output by the classifiers to obtain a final similarity.
If the final similarity is high, the analysis module 406 may determine that the facial thermal imaging features and the facial visible light imaging features match; if it is low, the analysis module 406 may determine that they do not match. It will be appreciated that a similarity threshold may be set: above the threshold the features are determined to match, and below it they are determined not to match.
In another embodiment of the present disclosure, the analysis module 406 may employ a non-linear feature subspace-based approach to determine whether facial thermal imaging features and facial visible light imaging features match.
The analysis module 406 first maps samples into a feature space using a kernel function, performs PCA in that feature space, and solves a kernel feature subspace for each face class. It then computes the projection length of the face sample to be identified in the kernel feature subspace of each class; the larger the projection length, the smaller the distance between the sample and that feature subspace. The face sample to be identified is classified and identified using the nearest-neighbor criterion.
Those skilled in the art will appreciate that the extracted features may be compared in different ways for different application scenarios.
Further, if the facial thermal imaging features and facial visible light imaging features match, the analysis module 406 determines that face recognition was successful; if the facial thermal imaging features and facial visible light imaging features do not match, the analysis module 406 determines that face recognition failed.
If the facial thermal imaging features and facial visible light imaging features match, then the person identified is the person who holds the legal document, is entitled to enter, or owns the electronic or bank account, and is a "real person".
If the facial thermal imaging features and facial visible light imaging features do not match, then what is identified is either someone who is not the person holding the legal document, entitled to enter, or owning the electronic or bank account, or a "prosthesis".
Fig. 5 illustrates a block diagram 500 of a face recognition system based on living body detection technology according to another embodiment of the present disclosure.
The selection module 502 analyzes the application scenario. Different application scenarios may have different lighting conditions, different security requirements, and different device configurations.
Further, the selection module 502 selects at least one multi-modal living body detection technique based on the application scenario, the multi-modal living body detection technique including at least a facial thermal imaging technique.
The receiving module 504 receives the real-time thermal imaging image of the face and the facial photograph.
The extraction module 506 extracts and identifies facial thermal imaging features based on the facial thermal imaging images and facial visible light imaging features based on the facial photographs.
The analysis module 508 determines whether the facial thermal imaging features and facial visible light imaging features match.
Further, if the facial thermal imaging features and the facial visible light imaging features match, the analysis module 508 determines that face recognition was successful; if the facial thermal imaging features and facial visible light imaging features do not match, the analysis module 508 determines that face recognition failed.
If the facial thermal imaging features and facial visible light imaging features match, then the person identified is the person who holds the legal document, is entitled to enter, or owns the electronic or bank account, and is a "real person".
If the facial thermal imaging features and facial visible light imaging features do not match, then what is identified is either someone who is not the person holding the legal document, entitled to enter, or owning the electronic or bank account, or a "prosthesis".
Likewise, the face recognition system based on living body detection technology of the present disclosure may be implemented using multi-modal living body detection techniques. Notably, the thermal imaging living body detection technology incorporated in the present disclosure frees the technical solution from the limitations of illumination conditions, so that the application scenarios can be extended to completely dark scenes or harsh natural environments. The visible light imaging technology incorporated in the present disclosure is a visible-light-based living body detection technology for resisting prosthesis attacks. As will be appreciated by those skilled in the art, as face recognition technologies develop and diversify, the applications of the living body detection based face recognition method of the present disclosure will also diversify.
The steps and modules of the above-described living body detection technology-based face recognition method and system may be implemented in hardware, software, or a combination thereof. If implemented in hardware, the various illustrative steps, modules, and circuits described in connection with this disclosure may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or other programmable logic component, a hardware component, or any combination thereof. A general purpose processor may be a processor, microprocessor, controller, microcontroller, state machine, or the like. If implemented in software, the various illustrative steps, modules, described in connection with this disclosure may be stored on a computer readable medium or transmitted as one or more instructions or code. Software modules implementing various operations of the present disclosure may reside in storage media such as RAM, flash memory, ROM, EPROM, EEPROM, registers, hard disk, removable disk, CD-ROM, cloud storage, etc. A storage medium may be coupled to the processor such that the processor can read information from, and write information to, the storage medium, as well as execute corresponding program modules to implement the various steps of the present disclosure. Moreover, software-based embodiments may be uploaded, downloaded, or accessed remotely via suitable communication means. Such suitable communication means include, for example, the internet, world wide web, intranet, software applications, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, and thermal imaging communications), electronic communications, or other such communication means.
It is also noted that the embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. Additionally, the order of the operations may be rearranged.
The disclosed methods, apparatus, and systems should not be limited in any way. Rather, the present disclosure encompasses all novel and non-obvious features and aspects of the various disclosed embodiments (both alone and in various combinations and subcombinations with one another). The disclosed methods, apparatus and systems are not limited to any specific aspect or feature or combination thereof, nor do any of the disclosed embodiments require that any one or more specific advantages be present or that certain or all technical problems be solved.
While the embodiments of the present disclosure have been described above with reference to the accompanying drawings, the present disclosure is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many modifications may be made by those of ordinary skill in the art without departing from the spirit of the disclosure and the scope of the claims, which fall within the scope of the present disclosure.

Claims (18)

1. A face recognition method based on a thermal imaging technology comprises the following steps:
acquiring a real-time thermal imaging image and a face photo of the face;
extracting and identifying facial thermal imaging features based on the facial real-time thermal imaging images and classifying the facial thermal imaging features into global features and local features;
extracting and identifying facial visible light imaging features based on the facial photographs, and classifying the facial visible light imaging features into global features and local features;
determining whether the facial thermal imaging feature and the facial visible light imaging feature match based on a similarity between a global feature of the facial thermal imaging feature and a global feature of the facial visible light imaging feature and a similarity between a local feature of the facial thermal imaging feature and a local feature of the facial visible light imaging feature;
if the facial thermal imaging features and the facial visible light imaging features match, face recognition is successful; and
if the facial thermal imaging features and the facial visible light imaging features do not match, face recognition fails.
2. The method of claim 1, wherein the facial real-time thermal imaging image is acquired in real-time by far-infrared face recognition technology based on a temperature sensing device.
3. The method of claim 1, wherein the facial photograph is a static single or multi-frame RGB image obtained from a database.
4. The method of claim 1, wherein the facial photograph is a single or multiple frame RGB image acquired dynamically in real time.
5. The method of claim 1, wherein the facial thermal imaging features comprise vascularity features.
6. A face recognition method based on living body detection technology, comprising:
analyzing an application scenario;
selecting at least one multi-modal living body detection technology based on the application scenario, wherein the multi-modal living body detection technology comprises at least facial thermal imaging technology;
acquiring a real-time facial thermal imaging image and a facial photograph;
extracting and identifying facial thermal imaging features based on the real-time facial thermal imaging image, and classifying the facial thermal imaging features into global features and local features;
extracting and identifying facial visible light imaging features based on the facial photograph, and classifying the facial visible light imaging features into global features and local features;
determining whether the facial thermal imaging features and the facial visible light imaging features match based on a similarity between the global features of the facial thermal imaging features and the global features of the facial visible light imaging features and a similarity between the local features of the facial thermal imaging features and the local features of the facial visible light imaging features;
if the facial thermal imaging features and the facial visible light imaging features match, face recognition succeeds; and
if the facial thermal imaging features and the facial visible light imaging features do not match, face recognition fails.
7. The method of claim 6, wherein the multi-modal living body detection technology further comprises an interactive motion living body detection technology, a three-dimensional image acquisition technology, and a near-infrared living body detection technology.
8. The method of claim 6, wherein the selecting at least one multi-modal living body detection technology based on the application scenario comprises: in addition to selecting facial thermal imaging technology, selecting one or more of interactive motion living body detection technology, three-dimensional image acquisition technology, and near-infrared living body detection technology.
9. The method of claim 6, wherein the multi-modal living body detection technology is selected based on lighting conditions of the application scenario.
10. The method of claim 6, wherein the multi-modal living body detection technology is selected based on security requirements of the application scenario.
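For illustration only, the scenario-driven selection recited in claims 6 and 8 to 10 could be sketched as follows. The Scene fields, the 50-lux low-light rule, the high-security rule, and the modality names are assumptions made for this example; the claims only require that facial thermal imaging is always selected and that the selection may depend on lighting conditions or security requirements.

from dataclasses import dataclass
from typing import List

@dataclass
class Scene:
    illuminance_lux: float    # ambient light level of the application scenario
    security_level: str       # e.g. "low", "medium" or "high"

def select_modalities(scene: Scene) -> List[str]:
    # Facial thermal imaging is always selected, per claim 6.
    modalities = ["facial_thermal_imaging"]
    if scene.illuminance_lux < 50:                  # assumed low-light rule (claim 9)
        modalities.append("near_infrared_liveness_detection")
    if scene.security_level == "high":              # assumed high-security rule (claim 10)
        modalities.append("interactive_motion_liveness_detection")
        modalities.append("three_dimensional_image_acquisition")
    return modalities

print(select_modalities(Scene(illuminance_lux=20.0, security_level="high")))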
11. The method of claim 6, wherein the real-time facial thermal imaging image is acquired in real time by a far-infrared face recognition technology based on a temperature sensing device.
12. The method of claim 6, wherein the facial photograph is a static single-frame or multi-frame RGB image obtained from a database.
13. The method of claim 6, wherein the facial photograph is a single-frame or multi-frame RGB image acquired dynamically in real time.
14. A face recognition system based on far-infrared thermal imaging technology, comprising:
a receiving module for receiving a real-time facial thermal imaging image and a facial photograph;
an extraction module for:
extracting and identifying facial thermal imaging features based on the real-time facial thermal imaging image and classifying the facial thermal imaging features into global features and local features, and
extracting and identifying facial visible light imaging features based on the facial photograph and classifying the facial visible light imaging features into global features and local features; and
an analysis module for:
determining whether the facial thermal imaging features and the facial visible light imaging features match based on a similarity between the global features of the facial thermal imaging features and the global features of the facial visible light imaging features and a similarity between the local features of the facial thermal imaging features and the local features of the facial visible light imaging features;
if the facial thermal imaging features and the facial visible light imaging features match, face recognition succeeds; and
if the facial thermal imaging features and the facial visible light imaging features do not match, face recognition fails.
15. The system of claim 14, wherein the real-time facial thermal imaging image is acquired in real time by a far-infrared face recognition technology based on a temperature sensing device.
16. The system of claim 14, wherein the facial photograph is a static single-frame or multi-frame RGB image obtained from a database.
17. The system of claim 14, wherein the facial photograph is a single-frame or multi-frame RGB image acquired dynamically in real time.
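For illustration only, one way the receiving module, extraction module, and analysis module of claim 14 might be composed is sketched below. The class interfaces, the simple image statistics standing in for global and local features, and the synthetic input images are assumptions for this example and do not describe the actual feature extraction.

import numpy as np

class ReceivingModule:
    # Receives the real-time facial thermal imaging image and the facial photograph.
    def receive(self, thermal_image, face_photo):
        return thermal_image, face_photo

class ExtractionModule:
    # Placeholder extraction: global feature = per-row means of the whole image,
    # local feature = per-row means of a centre crop.
    def extract(self, image):
        h, w = image.shape
        global_feat = image.mean(axis=1)
        local_feat = image[h // 4:3 * h // 4, w // 4:3 * w // 4].mean(axis=1)
        return global_feat, local_feat

class AnalysisModule:
    def __init__(self, threshold=0.7):
        self.threshold = threshold

    @staticmethod
    def _cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def match(self, thermal_feats, visible_feats):
        # Fuse global- and local-feature similarities, as in claim 14.
        score = (0.5 * self._cos(thermal_feats[0], visible_feats[0])
                 + 0.5 * self._cos(thermal_feats[1], visible_feats[1]))
        return score >= self.threshold

# Wire the modules together on synthetic 64x64 images (data-flow demonstration only).
rng = np.random.default_rng(1)
thermal_img, photo = ReceivingModule().receive(rng.random((64, 64)), rng.random((64, 64)))
extractor = ExtractionModule()
matched = AnalysisModule().match(extractor.extract(thermal_img), extractor.extract(photo))
print("face recognition succeeds" if matched else "face recognition fails")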
18. A face recognition system based on living body detection technology, comprising:
a selection module for:
analyzing an application scenario, and
selecting at least one multi-modal living body detection technology based on the application scenario, wherein the multi-modal living body detection technology comprises at least facial thermal imaging technology;
a receiving module for receiving a real-time facial thermal imaging image and a facial photograph;
an extraction module for:
extracting and identifying facial thermal imaging features based on the real-time facial thermal imaging image and classifying the facial thermal imaging features into global features and local features, and
extracting and identifying facial visible light imaging features based on the facial photograph and classifying the facial visible light imaging features into global features and local features; and
an analysis module for:
determining whether the facial thermal imaging features and the facial visible light imaging features match based on a similarity between the global features of the facial thermal imaging features and the global features of the facial visible light imaging features and a similarity between the local features of the facial thermal imaging features and the local features of the facial visible light imaging features;
if the facial thermal imaging features and the facial visible light imaging features match, face recognition succeeds; and
if the facial thermal imaging features and the facial visible light imaging features do not match, face recognition fails.
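For illustration only, the selection module of claim 18 could drive the rest of the system as sketched below. The scene dictionary keys, the rule thresholds, the modality names, and the run_pipeline callback are assumptions for this example; the claim only requires that a selection module analyses the application scenario and chooses technologies that always include facial thermal imaging before the receiving, extraction, and analysis modules operate.

from typing import Callable, List

class SelectionModule:
    # Analyses the application scenario and selects living body detection technologies.
    def analyze_and_select(self, scene: dict) -> List[str]:
        selected = ["facial_thermal_imaging"]              # mandatory per claim 18
        if scene.get("illuminance_lux", 1000.0) < 50:      # assumed low-light rule
            selected.append("near_infrared_liveness_detection")
        if scene.get("security_level") == "high":          # assumed high-security rule
            selected.append("interactive_motion_liveness_detection")
        return selected

def recognize(scene: dict, run_pipeline: Callable[[List[str]], bool]) -> bool:
    # Select technologies for the scenario, then hand off to the receiving,
    # extraction and analysis stages, represented here by a callback.
    return run_pipeline(SelectionModule().analyze_and_select(scene))

# Demonstration with a stand-in pipeline that always reports a successful match.
print(recognize({"illuminance_lux": 10.0, "security_level": "high"},
                lambda technologies: True))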
CN201910066681.4A 2019-01-24 2019-01-24 Face recognition method and system based on living body detection technology Active CN110008813B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910066681.4A CN110008813B (en) 2019-01-24 2019-01-24 Face recognition method and system based on living body detection technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910066681.4A CN110008813B (en) 2019-01-24 2019-01-24 Face recognition method and system based on living body detection technology

Publications (2)

Publication Number Publication Date
CN110008813A CN110008813A (en) 2019-07-12
CN110008813B true CN110008813B (en) 2023-06-30

Family

ID=67165543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910066681.4A Active CN110008813B (en) 2019-01-24 2019-01-24 Face recognition method and system based on living body detection technology

Country Status (1)

Country Link
CN (1) CN110008813B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110503007B (en) * 2019-07-31 2023-04-07 成都甄识科技有限公司 Living animal monitoring method, device and system based on thermal imager
CN110717428A (en) * 2019-09-27 2020-01-21 上海依图网络科技有限公司 Identity recognition method, device, system, medium and equipment fusing multiple features
CN111653012B (en) * 2020-05-29 2022-06-07 浙江大华技术股份有限公司 Gate control method, gate and device with storage function
CN111967296B (en) * 2020-06-28 2023-12-05 北京中科虹霸科技有限公司 Iris living body detection method, access control method and device
CN111811663A (en) * 2020-07-21 2020-10-23 太仓光电技术研究所 Temperature detection method and device based on video stream
CN112016482B (en) * 2020-08-31 2022-10-25 成都新潮传媒集团有限公司 Method and device for distinguishing false face and computer equipment
CN113627263B (en) * 2021-07-13 2023-11-17 支付宝(杭州)信息技术有限公司 Exposure method, device and equipment based on face detection
CN113642404B (en) * 2021-07-13 2024-06-25 季华实验室 Target recognition detection association method, device, medium and computer program product
CN115761827A (en) * 2021-08-31 2023-03-07 北京新氧科技有限公司 Cosmetic progress detection method, device, equipment and storage medium
CN114882551A (en) * 2022-04-14 2022-08-09 支付宝(杭州)信息技术有限公司 Face recognition processing method, device and equipment based on machine and tool dimensions
CN114863517B (en) * 2022-04-22 2024-06-07 支付宝(杭州)信息技术有限公司 Risk control method, device and equipment in face recognition
CN117994865B (en) * 2024-04-01 2024-07-02 杭州海康威视数字技术股份有限公司 Binocular face matching method and device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834901B * 2015-04-17 2018-11-06 北京海鑫科金高科技股份有限公司 Face detection method, apparatus and system based on binocular stereo vision
CN105513221B * 2015-12-30 2018-08-14 四川川大智胜软件股份有限公司 ATM anti-fraud apparatus and system based on three-dimensional face recognition
CN108764058B (en) * 2018-05-04 2021-05-25 吉林大学 Double-camera face in-vivo detection method based on thermal imaging effect
CN109192302A * 2018-08-24 2019-01-11 杭州体光医学科技有限公司 Facial multi-modality image acquisition and processing device and method

Also Published As

Publication number Publication date
CN110008813A (en) 2019-07-12

Similar Documents

Publication Publication Date Title
CN110008813B (en) Face recognition method and system based on living body detection technology
US12014571B2 (en) Method and apparatus with liveness verification
CN108985134B (en) Face living body detection and face brushing transaction method and system based on binocular camera
CN110326001B (en) System and method for performing fingerprint-based user authentication using images captured with a mobile device
US9922238B2 (en) Apparatuses, systems, and methods for confirming identity
US9076048B2 (en) Biometric identification, authentication and verification using near-infrared structured illumination combined with 3D imaging of the human ear
Deb et al. Look locally infer globally: A generalizable face anti-spoofing approach
Akhtar et al. Face spoof attack recognition using discriminative image patches
Pravallika et al. SVM classification for fake biometric detection using image quality assessment: Application to iris, face and palm print
CN113574537A (en) Biometric identification using composite hand images
US20220277311A1 (en) A transaction processing system and a transaction method based on facial recognition
Pinto et al. Counteracting presentation attacks in face, fingerprint, and iris recognition
Gomez-Barrero et al. Towards multi-modal finger presentation attack detection
Galdi et al. PROTECT: Pervasive and useR fOcused biomeTrics bordEr projeCT–a case study
Kumar et al. Rank level integration of face based biometrics
Benlamoudi Multi-modal and anti-spoofing person identification
Solomon Face anti-spoofing and deep learning based unsupervised image recognition systems
Ramalingam et al. Fundamentals and advances in 3D face recognition
SulaimanAlshebli et al. The cyber security biometric authentication based on liveness face-iris images and deep learning classifier
JP2010009377A (en) Verification system, verification method, program and storage medium
Chugh An accurate, efficient, and robust fingerprint presentation attack detector
CN113128269A (en) Living body detection method based on image style migration information fusion
Chiesa Revisiting face processing with light field images
Al-Rashid Biometrics Authentication: Issues and Solutions
Bhat et al. Prevention of spoofing attacks in FR based attendance system using liveness detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40010669

Country of ref document: HK

TA01 Transfer of patent application right

Effective date of registration: 20200924

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant after: Advanced New Technologies Co., Ltd.

Address before: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant before: Advantageous New Technologies Co., Ltd.

Effective date of registration: 20200924

Address after: Cayman Enterprise Centre, 27 Hospital Road, George Town, Grand Cayman, Cayman Islands

Applicant after: Advantageous New Technologies Co., Ltd.

Address before: P.O. Box 847, Fourth Floor, Capital Building, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant