CN114842543B - Three-dimensional face recognition method and device, electronic equipment and storage medium - Google Patents
Three-dimensional face recognition method and device, electronic equipment and storage medium
- Publication number
- CN114842543B (application number CN202210615124.5A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- cloud data
- dimensional face
- layer
- face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims abstract description 51
- 238000000605 extraction Methods 0.000 claims abstract description 27
- 238000005259 measurement Methods 0.000 claims abstract description 18
- 230000009466 transformation Effects 0.000 claims description 34
- 238000001514 detection method Methods 0.000 claims description 27
- 239000011159 matrix material Substances 0.000 claims description 18
- 238000005070 sampling Methods 0.000 claims description 18
- 238000011176 pooling Methods 0.000 claims description 15
- 239000013598 vector Substances 0.000 claims description 14
- 238000004590 computer program Methods 0.000 claims description 11
- 238000002372 labelling Methods 0.000 claims description 4
- 238000011524 similarity measure Methods 0.000 claims description 2
- 238000005516 engineering process Methods 0.000 description 6
- 238000009499 grossing Methods 0.000 description 6
- 238000010586 diagram Methods 0.000 description 5
- 238000007781 pre-processing Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 3
- 230000002708 enhancing effect Effects 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 230000006872 improvement Effects 0.000 description 3
- 230000007547 defect Effects 0.000 description 2
- 210000000887 face Anatomy 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 238000002474 experimental method Methods 0.000 description 1
- 210000000720 eyelash Anatomy 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 238000010606 normalization Methods 0.000 description 1
- 210000001747 pupil Anatomy 0.000 description 1
- 210000003786 sclera Anatomy 0.000 description 1
- 238000012216 screening Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/20—Finite element generation, e.g. wire-frame surface description, tesselation
- G06T17/205—Re-meshing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/169—Holistic features and representations, i.e. based on the facial image taken as a whole
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20132—Image cropping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Human Computer Interaction (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Geometry (AREA)
- Computer Graphics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Image Processing (AREA)
Abstract
The invention relates to a three-dimensional face recognition method comprising the following steps: acquiring three-dimensional face point cloud data to be identified; inputting the three-dimensional face point cloud data to be identified into an RP-net network model to obtain local-global face features; and performing a similarity measurement between the local-global face features and the face features of a candidate face to obtain a similarity measurement value, and judging whether the three-dimensional face point cloud data to be identified belong to the candidate face according to the similarity measurement value. Compared with the prior art, the three-dimensional face recognition method provided by the invention extracts features from the three-dimensional face point cloud data through the RP-net network model, which captures the local and global features of the face simultaneously during feature extraction; the local features describe the face in finer detail, so the recognition accuracy for three-dimensional faces is improved.
Description
Technical Field
The present invention relates to the field of three-dimensional face recognition technologies, and in particular, to a three-dimensional face recognition method, apparatus, electronic device, and storage medium.
Background
With the continuously rising degree of informatization in society, information security has received growing attention, and all information security ultimately rests on personal identity authentication. Whether personal privacy, property security, or government confidential documents and administrative rights are concerned, the identities of the relevant personnel must be authenticated to ensure security. Traditional identity authentication means such as certificates, passwords, seals and cards all have drawbacks and hidden dangers: certificates and cards are easily damaged or lost, passwords are easily confused and forgotten, and so on. Emerging biometric recognition technology, by virtue of its reliability and convenience, offers advantages that traditional identity recognition and authentication techniques cannot match, and has received wide attention and adoption in society.
Among these, three-dimensional face recognition technology has unique advantages, mainly in the following respects:
(1) Three-dimensional face recognition is invariant to illumination and pose: the three-dimensional shape data of a face can be regarded as unchanged under variations of lighting and viewpoint, and while accessories such as makeup strongly affect two-dimensional image data, they have no obvious influence on three-dimensional data.
(2) Three-dimensional data has a well-defined spatial shape representation, so its information is more abundant than two-dimensional images.
In existing three-dimensional face recognition methods, three-dimensional face point cloud data represent the position and depth information of a three-dimensional image and are classified and recognized by a deep learning model. However, existing deep learning models extract insufficiently detailed features from three-dimensional face point cloud data, so the recognition rate for three-dimensional faces is low.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art and provides a three-dimensional face recognition method that can extract the local and global features of three-dimensional face point cloud data simultaneously and improve the recognition rate of three-dimensional face recognition.
The invention is realized by the following technical scheme: a three-dimensional face recognition method comprises the following steps:
acquiring three-dimensional face point cloud data to be identified;
Inputting the three-dimensional face point cloud data to be identified into an RP-net network model to obtain local-global face features, wherein the RP-net network model comprises a sampling module, a grouping module and a feature extraction module, and the sampling module is used for sampling the three-dimensional face point cloud data to be identified to obtain a plurality of key points;
The grouping module is used for obtaining a plurality of circular areas by taking each key point as a circle center and the feature extraction radius value r as the radius, and for dividing the three-dimensional face point cloud data to be identified into groups according to these circular areas to obtain a plurality of key point cloud data sets;
The feature extraction module comprises a first transformation layer, a local feature marking layer, a first MLP layer, a second transformation layer, a second MLP layer and a pooling layer. The first transformation layer is used for performing an alignment operation on each key point cloud data set; the local feature marking layer is used for rotating the key point cloud data set output by the first transformation layer around the three coordinate axes by T rotation angles {θ_k}, k = 1, 2, ..., T, to obtain rotated point cloud data sets Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k) corresponding to the three coordinate axes; projecting the rotated point cloud data sets Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k) onto the three planes spanned by the coordinate axes to obtain corresponding projected point cloud data sets; evenly dividing each projected point cloud data set into N_b × N_b grid cells and counting the amount of point cloud data in each cell to obtain corresponding distribution matrices D_x, D_y, D_z; obtaining the corresponding central moments μ_mn and Shannon entropy e from the distribution matrices D_x, D_y, D_z; obtaining sub-feature descriptors f_x(θ_k), f_y(θ_k), f_z(θ_k) corresponding to the three coordinate axes from the central moments μ_mn and the Shannon entropy e; and aggregating the sub-feature descriptors f_x(θ_k), f_y(θ_k), f_z(θ_k) into the local feature descriptor f. The first MLP layer is used for performing a dimension-raising operation on the key point cloud data set output by the first transformation layer according to the local feature descriptor f; the second transformation layer is used for performing an alignment operation on the key point cloud data set output by the first MLP layer; the second MLP layer is used for performing a dimension-raising operation on the key point cloud data set output by the second transformation layer; and the pooling layer is used for performing a max-pooling operation on the key point cloud data set output by the second MLP layer to obtain the local-global face features;
And performing a similarity measurement between the local-global face features and the face features of a candidate face to obtain a similarity measurement value, and judging whether the three-dimensional face point cloud data to be identified belong to the candidate face according to the similarity measurement value.
Compared with the prior art, the three-dimensional face recognition method provided by the invention extracts features from the three-dimensional face point cloud data through the RP-net network model, which captures the local and global features of the face simultaneously during feature extraction; the local features describe the face in finer detail, so the recognition accuracy for three-dimensional faces is improved.
Further, after the three-dimensional face point cloud data to be identified is obtained, the method further comprises the steps of:
performing horizontal slicing on the three-dimensional face point cloud data to be identified to obtain a plurality of horizontal contour maps;
For each horizontal contour map, placing a plurality of detection points on a horizontal contour line, setting a detection circle by taking each detection point as a circle center, acquiring detection distances from two intersection points of the detection circle and the horizontal contour line to the corresponding detection points, and determining the detection point corresponding to the largest detection distance as a nose tip candidate point;
Determining the nose tip candidate point with the largest detection distance as a nose tip point;
and calculating the distance between each data point in the three-dimensional face point cloud data to be identified and the nose tip point, and cropping away the data points whose distance is larger than a preset distance.
Further, after the three-dimensional face point cloud data to be identified is obtained, the method further comprises the step of: for each data point in the three-dimensional face point cloud data to be identified, acquiring the median coordinate of the data points in its neighborhood, and replacing the coordinate of the data point with the median coordinate.
Further, for each data point in the three-dimensional face point cloud data to be identified, after acquiring the median coordinate of the data points in its neighborhood and replacing the coordinate of the data point with the median coordinate, the method further comprises the step of: filling the holes of the three-dimensional face point cloud data to be identified through cubic interpolation.
Further, after the three-dimensional face point cloud data to be identified is obtained, the method further comprises the steps of: and acquiring a normal vector of each data point in the three-dimensional face point cloud data to be identified, and adding the normal vector into the three-dimensional face point cloud data to be identified.
Further, the expression of the central moment μ_mn is

μ_mn = Σ_{i=1}^{N_b} Σ_{j=1}^{N_b} (i − ī)^m (j − j̄)^n · D(i, j), with ī = Σ_{i=1}^{N_b} Σ_{j=1}^{N_b} i · D(i, j) and j̄ = Σ_{i=1}^{N_b} Σ_{j=1}^{N_b} j · D(i, j)

wherein D(i, j) represents the element in the i-th row and j-th column of the distribution matrix D, D = [D_x, D_y, D_z], and μ_mn = [μ_11, μ_12, μ_21, μ_22].
Further, the similarity measure is a nearest neighbor distance ratio.
Based on the same inventive concept, the invention also provides a three-dimensional face recognition device, comprising:
The data acquisition module is used for acquiring three-dimensional face point cloud data to be identified;
The feature acquisition module is used for inputting the three-dimensional face point cloud data to be identified into an RP-net network model to obtain local-global face features, wherein the RP-net network model comprises a sampling module, a grouping module and a feature extraction module, and the sampling module is used for sampling the three-dimensional face point cloud data to be identified to obtain a plurality of key points;
The grouping module is used for obtaining a plurality of circular areas by taking each key point as a circle center and the feature extraction radius value r as the radius, and for dividing the three-dimensional face point cloud data to be identified into groups according to these circular areas to obtain a plurality of key point cloud data sets;
The feature extraction module comprises a first transformation layer, a local feature marking layer, a first MLP layer, a second transformation layer, a second MLP layer and a pooling layer. The first transformation layer is used for performing an alignment operation on each key point cloud data set; the local feature marking layer is used for rotating the key point cloud data set output by the first transformation layer around the three coordinate axes by T rotation angles {θ_k}, k = 1, 2, ..., T, to obtain rotated point cloud data sets Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k) corresponding to the three coordinate axes; projecting the rotated point cloud data sets Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k) onto the three planes spanned by the coordinate axes to obtain corresponding projected point cloud data sets; evenly dividing each projected point cloud data set into N_b × N_b grid cells and counting the amount of point cloud data in each cell to obtain corresponding distribution matrices D_x, D_y, D_z; obtaining the corresponding central moments μ_mn and Shannon entropy e from the distribution matrices D_x, D_y, D_z; obtaining sub-feature descriptors f_x(θ_k), f_y(θ_k), f_z(θ_k) corresponding to the three coordinate axes from the central moments μ_mn and the Shannon entropy e; and aggregating the sub-feature descriptors f_x(θ_k), f_y(θ_k), f_z(θ_k) into the local feature descriptor f. The first MLP layer is used for performing a dimension-raising operation on the key point cloud data set output by the first transformation layer according to the local feature descriptor f; the second transformation layer is used for performing an alignment operation on the key point cloud data set output by the first MLP layer; the second MLP layer is used for performing a dimension-raising operation on the key point cloud data set output by the second transformation layer; and the pooling layer is used for performing a max-pooling operation on the key point cloud data set output by the second MLP layer to obtain the local-global face features;
and the matching module is used for performing a similarity measurement between the local-global face features and the face features of a candidate face to obtain a similarity measurement value, and for judging whether the three-dimensional face point cloud data to be identified belong to the candidate face according to the similarity measurement value.
Based on the same inventive concept, the present invention also provides an electronic device, including:
A processor;
a memory for storing a computer program for execution by the processor;
Wherein the processor, when executing the computer program, implements the steps of the above method.
Based on the same inventive concept, the present invention also provides a computer-readable storage medium, characterized in that a computer program is stored thereon, which computer program, when executed, implements the steps of the above-mentioned method.
For a better understanding and implementation, the present invention is described in detail below with reference to the drawings.
Drawings
Fig. 1 is a schematic flow chart of a three-dimensional face recognition method according to an embodiment;
FIG. 2 is a schematic diagram of a network architecture of an RP-net network model according to an embodiment;
Fig. 3 is a schematic structural diagram of a three-dimensional face recognition device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the following detailed description of the embodiments of the present application will be given with reference to the accompanying drawings.
It should be understood that the described embodiments are merely some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the application as detailed in the accompanying claims.
In the description of the present application, it should be understood that the terms "first," "second," "third," and the like are used merely to distinguish between similar objects; they do not necessarily describe a particular order or sequence, nor should they be construed to indicate or imply relative importance. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances. Furthermore, in the description of the present application, unless otherwise indicated, "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
Please refer to fig. 1, which is a flow chart illustrating a three-dimensional face recognition method according to the present embodiment, the method includes steps S1 to S3:
step S1, three-dimensional face point cloud data to be identified are obtained, and preprocessing is carried out on the three-dimensional face point cloud data.
The three-dimensional face point cloud data are a representation of three-dimensional face information; they comprise a plurality of data points, each data point corresponding to a three-dimensional coordinate in a three-dimensional coordinate system, and the data points with their three-dimensional coordinates jointly form the three-dimensional face. The three axes of the three-dimensional coordinate system are the x-axis, the y-axis and the z-axis; the plane containing the x-axis and the y-axis is the xy plane, the plane containing the x-axis and the z-axis is the xz plane, and the plane containing the y-axis and the z-axis is the yz plane.
The three-dimensional face point cloud data can be acquired by scanning the face through special scanning equipment, and can be stored in any computer storage medium such as a memory.
The preprocessing of the three-dimensional face point cloud data may specifically include a face cropping operation, a face smoothing operation and a data enhancement operation. The face cropping operation is used to remove redundant data points other than the face (such as the body below the shoulders) when the three-dimensional face point cloud data to be identified contain them, yielding three-dimensional face point cloud data dominated by the face and improving subsequent recognition efficiency. It specifically comprises steps S111 to S114:
S111: the three-dimensional face point cloud data to be identified are sliced horizontally to obtain a plurality of horizontal contour maps of the three-dimensional face;
In a preferred embodiment, step S111 may also uniformly interpolate the resulting horizontal contour maps to fill the holes in them.
S112: in each horizontal contour map, a plurality of detection points are placed on the horizontal contour line at a certain density, a detection circle with a fixed radius is arranged by taking each detection point as a circle center, the detection distance h from two intersection points of the detection circle and the horizontal contour line to the corresponding detection point is obtained, and the detection point corresponding to the maximum detection distance h is determined to be a nose tip candidate point;
S113: and carrying out refinement screening on the nose tip candidate points to determine nose tip points.
In an alternative implementation, the nose tip candidate points may be regarded as a set of data points lying on the bridge of the nose, and the nose tip candidate point corresponding to the maximum detection distance h is determined to be the nose tip point.
In an alternative implementation, the tip candidate points may be refined by a random sample consensus (RANSAC) algorithm.
S114: the distance between each point in the three-dimensional face point cloud data to be identified and the nose tip point is calculated, the points whose distance is larger than a preset distance are cropped away, and three-dimensional face point cloud data dominated by the face are obtained.
In one implementation, the preset distance in step S114 may be set to 90 mm.
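The contour-scan and cropping steps can be sketched in code. Below is a minimal numpy sketch of steps S112 to S114 for a single horizontal contour; reading the detection distance h as the distance from a detection point to the midpoint of the chord joining the two circle/contour intersections is our interpretation of the text, and the detection radius, the ordered (N, 2) polyline representation and all function names are assumptions.

```python
import numpy as np

def nose_tip_candidate(contour, radius=20.0):
    """Scan one horizontal contour (ordered (N, 2) polyline) and return the
    index and detection distance h of the best nose-tip candidate (S112-S113).
    h is measured from the detection point to the chord midpoint of the two
    circle/contour intersections -- an interpretation, not the patent's text."""
    best_h, best_idx = -np.inf, -1
    n = len(contour)
    for i, p in enumerate(contour):
        d = np.linalg.norm(contour - p, axis=1)
        # first vertices on either side of i that leave the detection circle
        left = next((j for j in range(i - 1, -1, -1) if d[j] >= radius), None)
        right = next((j for j in range(i + 1, n) if d[j] >= radius), None)
        if left is None or right is None:
            continue  # the circle runs off an end of the contour
        h = np.linalg.norm(p - 0.5 * (contour[left] + contour[right]))
        if h > best_h:
            best_h, best_idx = h, i
    return best_idx, best_h

def crop_to_face(points, nose_tip, max_dist=90.0):
    """S114: keep only the points within max_dist (here 90 mm) of the nose tip."""
    return points[np.linalg.norm(points - nose_tip, axis=1) <= max_dist]
```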
The face smoothing operation is used for eliminating noise spikes in the three-dimensional face point cloud data to be identified; noise spikes occur easily in regions such as the eyes, the nose tip and the teeth, and eliminating them improves subsequent recognition accuracy. Specifically, the operation comprises step S121: for each data point in the three-dimensional face point cloud data to be identified, the median coordinate of the data points in its neighborhood is acquired, and the coordinate of the data point is replaced with the median coordinate.
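A minimal sketch of step S121, assuming numpy/scipy and reading the neighborhood as the k nearest neighbors of each point; k = 9 is a hypothetical choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def median_smooth(points, k=9):
    """Replace each point's coordinates by the per-axis median of its k
    nearest neighbours (self included), suppressing spike noise."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)          # (N, k) neighbour indices
    return np.median(points[idx], axis=1)     # (N, 3) smoothed coordinates
```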
Noise spikes may also leave holes on the three-dimensional surface of the point cloud, as may specular reflection from dark facial regions, an open mouth, a mask, the sclera, the pupils, the eyelashes and the like during acquisition. In a preferred embodiment, the face smoothing operation may therefore further include step S122: filling the missing holes by cubic interpolation. Eliminating the holes further improves recognition accuracy.
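A sketch of the hole filling in step S122, under the assumption that the point cloud is organized as a depth map z(x, y) with holes marked as NaN, so that scipy's cubic griddata interpolation applies directly.

```python
import numpy as np
from scipy.interpolate import griddata

def fill_holes(depth):
    """Fill NaN cells of an (H, W) depth map by cubic interpolation (S122)."""
    depth = depth.copy()
    h, w = depth.shape
    yy, xx = np.mgrid[0:h, 0:w]
    known = ~np.isnan(depth)
    depth[~known] = griddata((yy[known], xx[known]), depth[known],
                             (yy[~known], xx[~known]), method="cubic")
    return depth
```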
The data enhancement operation is used for acquiring the normal vector of each data point in the three-dimensional face point cloud data to be identified and adding these normal vectors to the three-dimensional face point cloud data. The normal vectors of the data points expose more local features, which benefits the extraction and recognition of face features. Specifically, the normal vector of a data point is the normal of the local plane described by that point, and it can be obtained by fitting the local neighboring points of the data point through minimization of a cost function. A three-dimensional face point cloud containing s data points is expressed as P = [p_1, p_2, ..., p_s]^T, where the three-dimensional coordinates of the i-th data point are p_i = [p_ix, p_iy, p_iz]^T and its normal vector is n_i = [n_ix, n_iy, n_iz]^T, with n_ix, n_iy, n_iz the normal components of data point p_i along the x, y and z channels of the three-dimensional coordinate system. The set of l neighboring points around data point p_i is expressed as Q_i = [p_i1, p_i2, ..., p_il]^T. Minimizing the cost function A drives the dot products between the normal vector of data point p_i and the in-plane vectors of the neighboring point set Q_i to zero, i.e., it makes the normal vector of data point p_i perpendicular to the plane formed by the neighboring point set Q_i. The minimized cost function min A is expressed as follows:

min A = Σ_{j=1}^{l} ((p_ij − p_i)^T n_i)^2, subject to ||n_i|| = 1
In a specific implementation, a 5 × 5 neighborhood is selected for the data enhancement operation, yielding three-dimensional face point cloud data of size m × n × 6, where m and n give the size of the data range of the three-dimensional face point cloud data.
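The minimization above has a standard closed-form solution: under ||n_i|| = 1, the minimizing normal is the eigenvector of the neighborhood covariance with the smallest eigenvalue. The sketch below relies on that result and on numpy/scipy; the neighborhood size l is a hypothetical choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, l=25):
    """Return unit normals n_i for an (N, 3) cloud via local plane fitting."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=l)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        q = points[nbrs] - points[nbrs].mean(axis=0)   # centred neighbourhood
        # eigenvector with the smallest eigenvalue of the 3x3 covariance
        _, vecs = np.linalg.eigh(q.T @ q)
        normals[i] = vecs[:, 0]
    return normals

# Enhanced six-channel representation: coordinates plus normal components.
# enhanced = np.concatenate([points, estimate_normals(points)], axis=1)
```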
Step S2, the three-dimensional face point cloud data preprocessed in step S1 are input into an RP-net network model to obtain local-global face features.
Please refer to fig. 2, which is a schematic diagram of the network structure of the RP-net network model of this embodiment. The RP-net network model comprises a sampling module, a grouping module and a feature extraction module. The sampling module is used for down-sampling the three-dimensional face point cloud data, specifically by farthest point sampling (FPS), to obtain a plurality of key points; the initial key point can be a random data point or the nose tip point obtained in step S113.
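A minimal farthest point sampling sketch for the sampling module, assuming numpy; as the text allows, the seed index may be a random point or the nose tip from step S113.

```python
import numpy as np

def farthest_point_sampling(points, n_keypoints, seed_idx=0):
    """Iteratively pick the point farthest from all previously chosen points."""
    chosen = [seed_idx]
    dist = np.linalg.norm(points - points[seed_idx], axis=1)
    for _ in range(n_keypoints - 1):
        nxt = int(np.argmax(dist))               # farthest from current set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.asarray(chosen)
```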
The grouping module is used for grouping the three-dimensional face point cloud data according to the key points acquired by the sampling module. Specifically, a plurality of circular areas are obtained by taking each key point as a circle center and the feature extraction radius value r as the radius; the three-dimensional face point cloud data are divided into a plurality of key point cloud data sets according to these circular areas, and each key point cloud data set is sequentially input into the feature extraction module. When the RP-net network model is trained, the feature extraction radius value r in the grouping module can be set to several different stepped values, and the final feature extraction radius value r is determined according to the recognition rate of the model.
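The grouping can be sketched as a ball query: around each key point, the "circular area" of radius r gathers the points of its key point cloud data set. scipy's cKDTree is an assumed implementation choice, and the radius value is hypothetical.

```python
from scipy.spatial import cKDTree

def group_points(points, keypoint_idx, r=20.0):
    """Return, for each key point, the indices of the points within radius r."""
    tree = cKDTree(points)
    return tree.query_ball_point(points[keypoint_idx], r=r)
```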
The feature extraction module is used for extracting features from each key point cloud data set output by the grouping module. Specifically, the feature extraction module comprises a first transformation layer, a local feature marking layer, a first MLP layer, a second transformation layer, a second MLP layer and a pooling layer, wherein the first transformation layer is used for performing an alignment operation on each key point cloud data set through T-net and matrix multiplication (matrix multiply).
The local feature marking layer is used for obtaining the local feature descriptor of each key point cloud data set output by the first transformation layer and marking the local features of the key point cloud data set through this descriptor. The method specifically comprises the following steps: the key point cloud data set is rotated around the x-axis, the y-axis and the z-axis of the three-dimensional coordinate system by T rotation angles {θ_k}, k = 1, 2, ..., T, to obtain the three corresponding sets of rotated point cloud data sets Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k); each rotated point cloud data set Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k) is projected onto the xy, xz and yz planes to obtain the projected point cloud data sets corresponding to the three planes; each projected point cloud data set is evenly divided into N_b × N_b grid cells and the point cloud data in each cell are counted, giving distribution matrices D_x, D_y, D_z of size N_b × N_b corresponding to the three sets of rotated point cloud data sets Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k); the distribution matrices D_x, D_y, D_z are normalized so as to be invariant to changes in the grid resolution; and the distribution matrices D_x, D_y, D_z are compressed to obtain the central moments μ_mn and the Shannon entropy e, where the expression of the central moment μ_mn is:

μ_mn = Σ_{i=1}^{N_b} Σ_{j=1}^{N_b} (i − ī)^m (j − j̄)^n · D(i, j), with ī = Σ_{i=1}^{N_b} Σ_{j=1}^{N_b} i · D(i, j) and j̄ = Σ_{i=1}^{N_b} Σ_{j=1}^{N_b} j · D(i, j)

wherein D(i, j) represents the element in the i-th row and j-th column of the distribution matrix D, and D = [D_x, D_y, D_z]. In this embodiment, μ_mn = [μ_11, μ_12, μ_21, μ_22].
The expression of the Shannon entropy e is

e = − Σ_{i=1}^{N_b} Σ_{j=1}^{N_b} D(i, j) · log D(i, j)
The sub-feature descriptor f_x(θ_k) for rotation about the x-axis, the sub-feature descriptor f_y(θ_k) for rotation about the y-axis and the sub-feature descriptor f_z(θ_k) for rotation about the z-axis are derived from the central moments μ_mn and the Shannon entropy e;
The sub-feature descriptors f_x(θ_k), f_y(θ_k) and f_z(θ_k) are aggregated to obtain the local feature descriptor f, whose expression is

f = {f_x(θ_k), f_y(θ_k), f_z(θ_k)}
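A condensed numpy sketch of this computation for one key point set Q and one angle θ_k about the x-axis; rotations about y and z and the remaining angles are handled identically and their sub-descriptors concatenated into f. N_b = 5 and the ordering of the compressed statistics [μ_11, μ_12, μ_21, μ_22, e] per projection plane are assumptions.

```python
import numpy as np

def rot_x(theta):
    """Rotation matrix about the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def distribution_matrix(pts2d, nb=5):
    """Count projected points in an nb x nb grid, normalised to sum to 1."""
    d, _, _ = np.histogram2d(pts2d[:, 0], pts2d[:, 1], bins=nb)
    return d / max(d.sum(), 1.0)

def moments_and_entropy(d):
    """Compress a distribution matrix into [mu_11, mu_12, mu_21, mu_22, e]."""
    nb = d.shape[0]
    i, j = np.mgrid[1:nb + 1, 1:nb + 1]
    i_bar, j_bar = (i * d).sum(), (j * d).sum()
    mu = [((i - i_bar) ** m * (j - j_bar) ** n * d).sum()
          for m, n in ((1, 1), (1, 2), (2, 1), (2, 2))]
    e = -(d[d > 0] * np.log(d[d > 0])).sum()       # Shannon entropy
    return np.array(mu + [e])

def sub_descriptor_x(q, theta_k, nb=5):
    """f_x(theta_k): rotate Q about x, project on xy/xz/yz, compress each plane."""
    qr = q @ rot_x(theta_k).T
    planes = (qr[:, [0, 1]], qr[:, [0, 2]], qr[:, [1, 2]])
    return np.concatenate([moments_and_entropy(distribution_matrix(p, nb))
                           for p in planes])
```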
The first MLP layer is a multi-layer perceptron used for performing a dimension-raising operation on each key point cloud data set according to the local feature descriptor f.
The second transformation layer is used for performing an alignment operation, through T-net and matrix multiplication, on the key point cloud data set output by the first MLP layer.
The second MLP layer is a multi-layer perceptron used for performing a second dimension-raising operation on the key point cloud data set output by the second transformation layer; in a specific implementation, the feature dimension after the second raising is 1024.
The pooling layer is used for performing a max-pooling operation on the key point cloud data set output by the second MLP layer to obtain the local-global face features.
Step S3, a similarity measurement is performed between the local-global face features obtained in step S2 and the face features of a candidate face, and whether the three-dimensional face point cloud data to be identified belong to the candidate face is judged according to the similarity measurement result.
In one implementation, the nearest neighbor distance ratio (NNDR) may be used to measure the similarity between the local-global face features and the face features of the candidate face; the obtained nearest neighbor distance ratio is compared with a preset comparison threshold, and when the ratio is greater than the preset threshold, the face corresponding to the three-dimensional face point cloud data to be identified is determined to be the corresponding candidate face. In a specific implementation, the preset comparison threshold may be set to several different stepped values, and the final preset comparison threshold is determined according to the recognition accuracy of the results.
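A sketch of the NNDR decision, assuming numpy and a gallery stored as one feature row per candidate. The ratio is taken as second-nearest over nearest distance so that, as in the text, a larger value indicates a more distinctive match; the threshold is a hypothetical value to be tuned as described.

```python
import numpy as np

def nndr_match(probe, gallery, threshold=1.25):
    """Return (matched_index_or_None, ratio) for a probe feature vector."""
    d = np.linalg.norm(gallery - probe, axis=1)
    order = np.argsort(d)
    ratio = d[order[1]] / max(d[order[0]], 1e-12)  # second-nearest / nearest
    return (int(order[0]) if ratio > threshold else None, ratio)
```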
The following experiments were performed on the public Bosphorus three-dimensional face dataset using the three-dimensional face recognition method of this embodiment of the invention, a three-dimensional face recognition method based on a voxel representation of depth images, and a large-scale 3D face recognition method; the experimental environments include a normal environment and a low-light environment, and the recognition rate in each environment was obtained. The method based on a voxel representation of depth images (3D face recognition based on volumetric representation of range image) is the method disclosed by Koushik Dutta, Debotosh Bhattacharjee, Mita Nasipuri and Anik Poddar in Advanced Computing and Systems for Security, 2019; the large-scale 3D face recognition method (Towards large-scale 3D face recognition) is the method disclosed by Syed Zulqarnain Gilani and Ajmal Mian at the 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA). As shown in Table 1, compared with these two prior-art three-dimensional face recognition methods, the three-dimensional face recognition method of this embodiment achieves an improved recognition rate in both the normal environment and the low-light environment.
TABLE 1
Compared with the prior art, the three-dimensional face recognition method and device of this embodiment extract features from the three-dimensional face point cloud data through the RP-net network model, which captures the local and global features of the face simultaneously during feature extraction; the local features describe the face in finer detail, so the recognition accuracy for three-dimensional faces is improved. In addition, surface normal vectors are added to the three-dimensional face point cloud data input to the RP-net network model, which enhances the three-dimensional face features under dim or dark conditions, so the recognition accuracy is higher in dim and dark environments.
Based on the same inventive concept, the invention also provides a three-dimensional face recognition device. Referring to fig. 3, which is a schematic structural diagram of the three-dimensional face recognition device of this embodiment, the device comprises a data acquisition module 10, a feature acquisition module 20 and a matching module 30. The data acquisition module 10 is configured to acquire the three-dimensional face point cloud data to be identified and to preprocess them; the preprocessing includes the face cropping operation, the face smoothing operation and the data enhancement operation, whose specific steps are the same as those described in the above method embodiment and are not repeated here;
The feature acquisition module 20 is configured to input the three-dimensional face point cloud data preprocessed by the data acquisition module 10 into the RP-net network model to obtain the local-global face features; the network structure and processing of the RP-net network model are identical to those described in the above method embodiment and are not repeated here.
The matching module 30 is configured to perform a similarity measurement between the local-global face features obtained by the feature acquisition module 20 and the face features of a candidate face, and to judge whether the three-dimensional face point cloud data to be identified belong to the candidate face according to the similarity measurement result.
For device embodiments, reference is made to the description of method embodiments for relevant details, since they substantially correspond to the method embodiments.
Based on the same inventive concept, the present invention also provides an electronic device, which may be a terminal device such as a server, a desktop computing device, or a mobile computing device (e.g., a laptop computing device, a handheld computing device, a tablet computer, a netbook, etc.). The electronic device comprises one or more processors and a memory; the processors are used for executing a program implementing the three-dimensional face recognition method of the method embodiment, and the memory is used for storing a computer program executable by the processors. The electronic device may further comprise a display screen for displaying the result obtained by the processor.
Based on the same inventive concept, the present invention further provides a computer readable storage medium, corresponding to the foregoing three-dimensional face recognition method embodiment, having stored thereon a computer program, which when executed by a processor, implements the steps of the three-dimensional face recognition method described in any of the foregoing embodiments.
The present application may take the form of a computer program product embodied on one or more storage media (including, but not limited to, magnetic disk storage, CD-ROM, optical storage, etc.) having program code embodied therein. Computer-usable storage media include both permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to: phase change memory (PRAM), static Random Access Memory (SRAM), dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), read Only Memory (ROM), electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital Versatile Disks (DVD) or other optical storage, magnetic cassettes, magnetic tape magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, may be used to store information that may be accessed by the computing device.
The above examples illustrate only a few embodiments of the invention, which are described in detail and are not to be construed as limiting the scope of the invention. It should be noted that modifications and improvements can be made by those skilled in the art without departing from the spirit of the invention, and the invention is intended to encompass such modifications and improvements.
Claims (10)
1. The three-dimensional face recognition method is characterized by comprising the following steps of:
acquiring three-dimensional face point cloud data to be identified;
Inputting the three-dimensional face point cloud data to be identified into an RP-net network model to obtain local-global face features, wherein the RP-net network model comprises a sampling module, a grouping module and a feature extraction module, and the sampling module is used for sampling the three-dimensional face point cloud data to be identified to obtain a plurality of key points;
The grouping module is used for obtaining a plurality of circular areas by taking each key point as a circle center and the feature extraction radius value r as the radius, and for dividing the three-dimensional face point cloud data to be identified into groups according to these circular areas to obtain a plurality of key point cloud data sets;
The feature extraction module comprises a first transformation layer, a local feature marking layer, a first MLP layer, a second transformation layer, a second MLP layer and a pooling layer. The first transformation layer is used for performing an alignment operation on each key point cloud data set; the local feature marking layer is used for rotating the key point cloud data set output by the first transformation layer around the three coordinate axes by T rotation angles {θ_k}, k = 1, 2, ..., T, to obtain rotated point cloud data sets Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k) corresponding to the three coordinate axes; projecting the rotated point cloud data sets Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k) onto the three planes spanned by the coordinate axes to obtain corresponding projected point cloud data sets; evenly dividing each projected point cloud data set into N_b × N_b grid cells and counting the amount of point cloud data in each cell to obtain corresponding distribution matrices D_x, D_y, D_z; obtaining the corresponding central moments μ_mn and Shannon entropy e from the distribution matrices D_x, D_y, D_z; obtaining sub-feature descriptors f_x(θ_k), f_y(θ_k), f_z(θ_k) corresponding to the three coordinate axes from the central moments μ_mn and the Shannon entropy e; and aggregating the sub-feature descriptors f_x(θ_k), f_y(θ_k), f_z(θ_k) into the local feature descriptor f. The first MLP layer is used for performing a dimension-raising operation on the key point cloud data set output by the first transformation layer according to the local feature descriptor f; the second transformation layer is used for performing an alignment operation on the key point cloud data set output by the first MLP layer; the second MLP layer is used for performing a dimension-raising operation on the key point cloud data set output by the second transformation layer; and the pooling layer is used for performing a max-pooling operation on the key point cloud data set output by the second MLP layer to obtain the local-global face features;
And performing a similarity measurement between the local-global face features and the face features of a candidate face to obtain a similarity measurement value, and judging whether the three-dimensional face point cloud data to be identified belong to the candidate face according to the similarity measurement value.
2. The method according to claim 1, further comprising the step of, after obtaining three-dimensional face point cloud data to be identified:
performing horizontal slicing on the three-dimensional face point cloud data to be identified to obtain a plurality of horizontal contour maps;
For each horizontal contour map, placing a plurality of detection points on a horizontal contour line, setting a detection circle by taking each detection point as a circle center, acquiring detection distances from two intersection points of the detection circle and the horizontal contour line to the corresponding detection points, and determining the detection point corresponding to the largest detection distance as a nose tip candidate point;
Determining the nose tip candidate point with the largest detection distance as a nose tip point;
and calculating the distance between each data point in the three-dimensional face point cloud data to be identified and the nose tip point, and cropping away the data points whose distance is larger than a preset distance.
3. The method according to claim 1, further comprising the step of, after obtaining the three-dimensional face point cloud data to be identified: for each data point in the three-dimensional face point cloud data to be identified, acquiring the median coordinate of the data points in its neighborhood, and replacing the coordinate of the data point with the median coordinate.
4. A method according to claim 3, characterized in that, for each data point in the three-dimensional face point cloud data to be identified, after acquiring the median coordinate of the data points in its neighborhood and replacing the coordinate of the data point with the median coordinate, the method further comprises the step of: filling the holes of the three-dimensional face point cloud data to be identified through cubic interpolation.
5. The method according to claim 1, further comprising the step of, after obtaining three-dimensional face point cloud data to be identified: and acquiring a normal vector of each data point in the three-dimensional face point cloud data to be identified, and adding the normal vector into the three-dimensional face point cloud data to be identified.
6. The method of claim 1, wherein the expression of the central moment μ_mn is

μ_mn = Σ_{i=1}^{N_b} Σ_{j=1}^{N_b} (i − ī)^m (j − j̄)^n · D(i, j), with ī = Σ_{i=1}^{N_b} Σ_{j=1}^{N_b} i · D(i, j) and j̄ = Σ_{i=1}^{N_b} Σ_{j=1}^{N_b} j · D(i, j)

wherein D(i, j) represents the element in the i-th row and j-th column of the distribution matrix D, D = [D_x, D_y, D_z], and μ_mn = [μ_11, μ_12, μ_21, μ_22].
7. The method of claim 1, wherein the similarity measure is a nearest neighbor distance ratio.
8. A three-dimensional face recognition device, comprising:
The data acquisition module is used for acquiring three-dimensional face point cloud data to be identified;
The feature acquisition module is used for inputting the three-dimensional face point cloud data to be identified into an RP-net network model to obtain local-global face features, wherein the RP-net network model comprises a sampling module, a grouping module and a feature extraction module, and the sampling module is used for sampling the three-dimensional face point cloud data to be identified to obtain a plurality of key points;
The grouping module is used for obtaining a plurality of circular areas by taking each key point as a circle center and the feature extraction radius value r as the radius, and for dividing the three-dimensional face point cloud data to be identified into groups according to these circular areas to obtain a plurality of key point cloud data sets;
The feature extraction module comprises a first transformation layer, a local feature marking layer, a first MLP layer, a second transformation layer, a second MLP layer and a pooling layer. The first transformation layer is used for performing an alignment operation on each key point cloud data set; the local feature marking layer is used for rotating the key point cloud data set output by the first transformation layer around the three coordinate axes by T rotation angles {θ_k}, k = 1, 2, ..., T, to obtain rotated point cloud data sets Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k) corresponding to the three coordinate axes; projecting the rotated point cloud data sets Q′_x(θ_k), Q′_y(θ_k), Q′_z(θ_k) onto the three planes spanned by the coordinate axes to obtain corresponding projected point cloud data sets; evenly dividing each projected point cloud data set into N_b × N_b grid cells and counting the amount of point cloud data in each cell to obtain corresponding distribution matrices D_x, D_y, D_z; obtaining the corresponding central moments μ_mn and Shannon entropy e from the distribution matrices D_x, D_y, D_z; obtaining sub-feature descriptors f_x(θ_k), f_y(θ_k), f_z(θ_k) corresponding to the three coordinate axes from the central moments μ_mn and the Shannon entropy e; and aggregating the sub-feature descriptors f_x(θ_k), f_y(θ_k), f_z(θ_k) into the local feature descriptor f. The first MLP layer is used for performing a dimension-raising operation on the key point cloud data set output by the first transformation layer according to the local feature descriptor f; the second transformation layer is used for performing an alignment operation on the key point cloud data set output by the first MLP layer; the second MLP layer is used for performing a dimension-raising operation on the key point cloud data set output by the second transformation layer; and the pooling layer is used for performing a max-pooling operation on the key point cloud data set output by the second MLP layer to obtain the local-global face features;
and the matching module is used for performing a similarity measurement between the local-global face features and the face features of a candidate face to obtain a similarity measurement value, and for judging whether the three-dimensional face point cloud data to be identified belong to the candidate face according to the similarity measurement value.
9. An electronic device, comprising:
A processor;
a memory for storing a computer program for execution by the processor;
Wherein the processor, when executing the computer program, implements the steps of the method of any of claims 1-7.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed, implements the steps of the method of any of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210615124.5A CN114842543B (en) | 2022-06-01 | 2022-06-01 | Three-dimensional face recognition method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210615124.5A CN114842543B (en) | 2022-06-01 | 2022-06-01 | Three-dimensional face recognition method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114842543A CN114842543A (en) | 2022-08-02 |
CN114842543B true CN114842543B (en) | 2024-05-28 |
Family
ID=82572657
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210615124.5A Active CN114842543B (en) | 2022-06-01 | 2022-06-01 | Three-dimensional face recognition method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114842543B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105243374A (en) * | 2015-11-02 | 2016-01-13 | 湖南拓视觉信息技术有限公司 | Three-dimensional human face recognition method and system, and data processing device applying same |
CN110147721A (en) * | 2019-04-11 | 2019-08-20 | 阿里巴巴集团控股有限公司 | A kind of three-dimensional face identification method, model training method and device |
WO2020199693A1 (en) * | 2019-03-29 | 2020-10-08 | 中国科学院深圳先进技术研究院 | Large-pose face recognition method and apparatus, and device |
WO2021051539A1 (en) * | 2019-09-18 | 2021-03-25 | 平安科技(深圳)有限公司 | Face recognition method and apparatus, and terminal device |
- 2022-06-01 CN CN202210615124.5A patent/CN114842543B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN114842543A (en) | 2022-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Feixas et al. | A unified information-theoretic framework for viewpoint selection and mesh saliency | |
JP6681729B2 (en) | Method for determining 3D pose of object and 3D location of landmark point of object, and system for determining 3D pose of object and 3D location of landmark of object | |
Lin et al. | Line segment extraction for large scale unorganized point clouds | |
Gao et al. | 3D model comparison using spatial structure circular descriptor | |
Thompson et al. | Three-dimensional model matching from an unconstrained viewpoint | |
Huang et al. | A coarse-to-fine algorithm for matching and registration in 3D cross-source point clouds | |
Lian et al. | Rectilinearity of 3D meshes | |
KR101798041B1 (en) | Device for 3 dimensional object recognition and pose estimation and method for the same | |
CN112889091A (en) | Camera pose estimation using fuzzy features | |
KR101548928B1 (en) | Invariant visual scene and object recognition | |
US8711210B2 (en) | Facial recognition using a sphericity metric | |
AU2018202767B2 (en) | Data structure and algorithm for tag less search and svg retrieval | |
Kiforenko et al. | A performance evaluation of point pair features | |
JP2014081347A (en) | Method for recognition and pose determination of 3d object in 3d scene | |
JP2013012190A (en) | Method of approximating gabor filter as block-gabor filter, and memory to store data structure for access by application program running on processor | |
Zhang et al. | KDD: A kernel density based descriptor for 3D point clouds | |
US20100223299A1 (en) | 3d object descriptors | |
Logoglu et al. | Cospair: colored histograms of spatial concentric surflet-pairs for 3d object recognition | |
JP2014032623A (en) | Image processor | |
Yin et al. | [Retracted] Virtual Reconstruction Method of Regional 3D Image Based on Visual Transmission Effect | |
CN114842543B (en) | Three-dimensional face recognition method and device, electronic equipment and storage medium | |
Rani et al. | Digital image forgery detection under complex lighting using Phong reflection model | |
WO2022034678A1 (en) | Image augmentation apparatus, control method, and non-transitory computer-readable storage medium | |
Kordelas et al. | Viewpoint independent object recognition in cluttered scenes exploiting ray-triangle intersection and SIFT algorithms | |
Sintunata et al. | Skewness map: estimating object orientation for high speed 3D object retrieval system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |