CN116883472A - Face care system based on three-dimensional face image registration - Google Patents

Face care system based on three-dimensional face image registration

Info

Publication number
CN116883472A
CN116883472A (application CN202311153799.3A)
Authority
CN
China
Prior art keywords
point, face, dimensional, key, point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311153799.3A
Other languages
Chinese (zh)
Other versions
CN116883472B (en)
Inventor
郭勇
常阳
李圣玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Daizhuang Hospital Shandong Ankang Hospital Jining Mental Health Center
Original Assignee
Shandong Deyixin Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Deyixin Information Technology Co ltd
Priority to CN202311153799.3A
Publication of CN116883472A
Application granted
Publication of CN116883472B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77: Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/80: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V 10/806: Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161: Detection; Localisation; Normalisation
    • G06V 40/165: Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20212: Image combination
    • G06T 2207/20221: Image fusion; Image merging
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30201: Face
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The application relates to the field of three-dimensional image registration, and in particular to a face care system based on three-dimensional face image registration. The system comprises a three-dimensional acquisition module, a three-dimensional key point fusion feature vector detection module and a three-dimensional registration module. Left and right eye corner key points are obtained from the coordinates and curvature information of each three-dimensional contour point cloud in the eye regions. A nose tip positioning probability factor is computed for each point cloud in the nose region from its corner value and its distance to the eye corner key points, and the nose tip key point is selected accordingly. The left and right mouth corner key points are then located relative to the position of the nose tip key point. A feature vector is extracted for each key point from the gray-level differences between the key point and the point clouds in its local spatial neighborhood, and the feature vectors of the key points are fused across views. Finally, a face registration error rate is obtained from the fusion feature vectors of the key points, completing the analysis of the face care result. The system thereby achieves accurate registration of three-dimensional face images and provides reliable information for face care.

Description

Face care system based on three-dimensional face image registration
Technical Field
The application relates to the field of three-dimensional image registration, and in particular to a face care system based on three-dimensional face image registration.
Background
In today's era of rapid informatization, the human face plays a key role in conveying identity, information, emotion and intention. Traditional two-dimensional images are limited by their pixel dimensions and cannot represent the full amount of information conveyed by a real face; at the same time, improvements in hardware performance have left many two-dimensional face algorithms with surplus computing power. As a result, more and more researchers have devoted themselves to three-dimensional face technology, which is now widely applied in fields such as three-dimensional game development, face recognition and face care.
Traditional three-dimensional image registration methods generally need to search for matching image regions over the whole image, which consumes large amounts of computing resources and time and suffers from low overall registration accuracy.
In view of this, the application provides a face care system based on three-dimensional face image registration, which completes three-dimensional face image registration by extracting feature points at important positions of the face and obtaining feature vectors that characterize those feature points, thereby improving the accuracy and efficiency of three-dimensional face registration.
Disclosure of Invention
To solve the above technical problems, the application aims to provide a face care system based on three-dimensional face image registration, which adopts the following technical scheme:
the application provides a face care system based on face three-dimensional image registration, which comprises:
the three-dimensional acquisition module is used for acquiring three-dimensional face images with different visual angles;
the three-dimensional key point fusion feature vector detection module is used for acquiring the left and right eye areas, the nose area and the mouth area of the three-dimensional face image; extracting the three-dimensional contour point clouds of the left and right eye areas respectively, and obtaining the left and right eye corner key points of each eye according to the coordinates and curvature information of each three-dimensional contour point cloud in the eye area; acquiring the corner value of each point cloud in the nose area, and obtaining the nose tip positioning probability factor of each point cloud according to its corner value, coordinate information and distance to each eye corner key point; taking the point cloud with the largest nose tip positioning probability factor in the nose area as the nose tip key point; obtaining the upper lip peak point and the lower boundary point of the mouth in combination with the position information of the nose tip key point, and obtaining the left and right mouth corner key points from the upper lip peak point and the lower boundary point;
obtaining feature vectors of all the key points according to gray level differences of all the key points and all the point clouds in the local space neighborhood; obtaining fusion feature vectors of the key points according to the feature vectors of the key points;
and the three-dimensional registration module obtains a face registration error rate according to the fusion feature vectors of the key points, and combines the face registration error rate to complete the analysis of the face care result.
Further, obtaining the left eye corner key point includes:
for each eye:
obtaining the sum of the three-dimensional coordinates of each three-dimensional contour point cloud in the eye area and its corresponding inner angle, recording this sum as the second sum value, and taking the three-dimensional contour point cloud with the smallest second sum value in the eye area as the left eye corner key point.
Further, obtaining the right eye corner key point includes:
for each eye:
obtaining the sum of the three-dimensional coordinates of each three-dimensional contour point cloud in the eye area and its curvature, recording this sum as the first sum value, and taking the three-dimensional contour point cloud with the largest first sum value in the eye area as the right eye corner key point.
Further, the nose tip positioning probability factor of each point cloud is obtained according to the corner value and coordinate information of each point cloud in the nose area and its distance to each eye corner key point; specifically:
for each point cloud in the nose area:
calculating the product of the point cloud's corner value and its Z coordinate value; obtaining the minimum Euclidean distance between the point cloud and the three-dimensional coordinates of the eye corner key points; calculating the square of the difference between this minimum and the distance threshold; and evaluating an exponential function with the natural constant as base and the negative of this square as exponent;
taking the product of the above product and the exponential result as the nose tip positioning probability factor of the point cloud.
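The computation in this claim can be sketched with NumPy; the array names, and the corner responses themselves (e.g. from a Harris detector), are assumptions for illustration:

```python
import numpy as np

def nose_tip_probability(points, corner_vals, eye_corners, dist_thresh):
    """Nose-tip positioning probability factor for each nose-region point.

    points: (N, 3) point-cloud coordinates; corner_vals: (N,) corner
    responses; eye_corners: (M, 3) eye-corner key points; dist_thresh:
    the distance threshold T (value left to the practitioner in the text).
    Returns (N,) factors; the argmax gives the nose-tip key point.
    """
    # minimum Euclidean distance from each point to any eye-corner key point
    d = np.linalg.norm(points[:, None, :] - eye_corners[None, :, :], axis=2)
    d_min = d.min(axis=1)
    # factor = corner value * depth (Z coordinate) * exp(-(d_min - T)^2)
    return corner_vals * points[:, 2] * np.exp(-(d_min - dist_thresh) ** 2)
```

The point cloud maximizing the returned factor would then be taken as the nose tip key point.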
Further, the step of combining the position information of the nose tip key point to obtain the upper lip peak point and the lower boundary point of the mouth includes:
detecting a three-dimensional contour of a mouth area, and recording a three-dimensional contour point cloud of the outermost layer of the mouth as a mouth outer contour point cloud;
starting from the vertical line through the horizontal coordinate of the nose tip key point, traversing the point clouds downward and taking the first mouth outer-contour point cloud in the mouth area on this vertical line as the upper lip peak point of the mouth; continuing to traverse the point clouds downward along the same vertical line from the upper lip peak point, and taking the second mouth outer-contour point cloud as the lower boundary point of the mouth.
Further, the obtaining the key points of the left and right mouth angles according to the upper lip peak point and the lower boundary point of the mouth includes:
taking the midpoint of the line connecting the upper lip peak point and the lower boundary point of the mouth as the lip center point; starting from the lip center point, obtaining the mouth outer-contour point clouds on the left and right sides in the horizontal direction as the left and right mouth corner key points respectively.
Further, the obtaining the feature vector of each key point according to the gray level difference between each key point and each point cloud in the local spatial neighborhood includes:
for each key point:
acquiring the n × n local spatial neighborhood centered on the key point, where n is the preset local space size;
marking a point cloud in the local spatial neighborhood as 0 when the difference between its gray value and the key point's gray value is below the gray threshold, and as 1 when the difference is above the gray threshold;
taking the vector formed by the marked values of the point clouds in the local spatial neighborhood as the feature vector of the key point.
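A minimal NumPy sketch of this binary descriptor (an LBP-style thresholding; the function name and the use of the absolute difference are assumptions, since the text does not state whether the difference is signed):

```python
import numpy as np

def keypoint_feature_vector(gray, kp, n=3, gray_thresh=10):
    """Binary feature vector from the n x n local spatial neighborhood.

    gray: 2-D grayscale image; kp: (row, col) of the key point; n: preset
    odd neighborhood size; gray_thresh: the gray threshold (10 in the
    embodiment). The absolute difference is assumed here.
    """
    r, c = kp
    half = n // 2
    patch = gray[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    bits = (np.abs(patch - float(gray[r, c])) > gray_thresh).astype(np.uint8)
    return bits.ravel()  # vector of 0/1 marks, one per neighborhood point
```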
Further, the obtaining the fusion feature vector of each key point according to the feature vector of each key point includes:
adding the feature vectors obtained for each key point in the three-dimensional face images of different view angles to obtain the fusion feature vector of that key point.
Further, the obtaining the face registration error rate according to the fusion feature vector of each key point includes:
calculating the cosine similarity between the fusion feature vector of each key point and that of the corresponding key point in the standard three-dimensional face model; evaluating an exponential function with the natural constant as base and the negative of the cosine similarity as exponent; and taking the normalized sum of these results over all key points as the face registration error rate. The face registration error rate is negatively correlated with the cosine similarity.
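The error-rate computation can be sketched as follows; the exact normalization of the summed terms is not spelled out in the text, so dividing by the number of key points is one reasonable reading:

```python
import numpy as np

def face_registration_error_rate(fused_vecs, standard_vecs):
    """Error rate from cosine similarity between fused key-point vectors
    and the corresponding vectors of a standard 3-D face model.

    Both arguments: (K, D) arrays, one row per key point. Normalization
    here divides the summed exp(-cos) terms by K (an assumption).
    """
    a = fused_vecs / np.linalg.norm(fused_vecs, axis=1, keepdims=True)
    b = standard_vecs / np.linalg.norm(standard_vecs, axis=1, keepdims=True)
    cos_sim = np.sum(a * b, axis=1)          # per-key-point cosine similarity
    return float(np.mean(np.exp(-cos_sim)))  # lower similarity -> higher error
```

Identical vectors give the minimum rate exp(-1); dissimilar vectors push the rate toward and beyond 1, consistent with the stated negative correlation.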
Further, completing the analysis of the face care result in combination with the face registration error rate includes:
when the face registration error rate is higher than the error rate threshold, the face care effect is judged to be insignificant;
when the face registration error rate is lower than the error rate threshold, the face care effect is judged to be significant.
The application has the following beneficial effects:
the application extracts facial features based on analysis of key feature points of the face, extracts feature points of key positions of the face from different angles and acquires feature vectors comprehensively representing the feature points, improves the registration accuracy of three-dimensional facial images, and solves the problems of high feature matching error rate, low efficiency and the like caused by incomplete facial feature extraction. The application has higher face image registration accuracy, high registration efficiency and high speed.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions and advantages of the prior art, the following description will briefly explain the drawings used in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the application, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a block diagram of a face care system based on face three-dimensional image registration according to one embodiment of the present application;
FIG. 2 is a schematic diagram of the key points in three-dimensional face images at different view angles.
Detailed Description
In order to further describe the technical means and effects adopted by the application to achieve the preset aim, the following detailed description refers to the specific implementation, structure, characteristics and effects of a face care system based on face three-dimensional image registration according to the application with reference to the accompanying drawings and the preferred embodiment. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs.
The following specifically describes a specific scheme of a face care system based on face three-dimensional image registration provided by the application with reference to the accompanying drawings.
Referring to fig. 1, a block diagram of a face care system based on face three-dimensional image registration according to an embodiment of the present application is shown, the system includes: the system comprises a three-dimensional acquisition module 101, a three-dimensional key point fusion feature vector detection module 102 and a three-dimensional registration module 103.
The three-dimensional acquisition module 101 uses a 3D camera to capture three-dimensional face images of the same face from different angles for face registration, and preprocesses them.
First, three-dimensional face images of the same face are captured with a 3D camera from three angles: directly in front, the left side and the right side. Each image is converted to a grayscale image and denoised with Gaussian filtering. Gaussian denoising and three-dimensional image acquisition are existing, well-known techniques; the specific camera model, shooting angles and view-angle control are chosen by the practitioner, and the light sources used during acquisition can likewise be deployed according to the actual situation.
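The grayscale conversion and Gaussian denoising step can be sketched in plain NumPy; the luminance weights, sigma and function name are illustrative, since the text does not fix the filter parameters:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    # normalized 1-D Gaussian kernel over [-radius, radius]
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def preprocess_view(img_rgb, sigma=1.0):
    """Grayscale conversion + separable Gaussian denoising for one view.

    img_rgb: (H, W, 3) array. A hypothetical helper: the patent only
    states that Gaussian filtering is applied, not its parameters.
    """
    gray = img_rgb[..., :3].astype(np.float64) @ np.array([0.299, 0.587, 0.114])
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    pad = len(k) // 2
    # reflect-pad, then convolve rows and columns with the same kernel
    g = np.pad(gray, pad, mode="reflect")
    g = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, g)
    g = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, g)
    return g
```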
In this way, three-dimensional face images for registration are acquired with the 3D camera, yielding images of the same face at three angles as the key data for three-dimensional face image registration.
The three-dimensional key point fusion feature vector detection module 102 extracts the key points of the three-dimensional face images and fuses the features extracted for the key points of each image to obtain the fusion feature vector of each key point.
Three-dimensional image registration is the process of aligning two or more three-dimensional images so that they can be compared or fused in the same coordinate system. In this embodiment, three-dimensional face images taken at different shooting angles are matched using the feature information of their three-dimensional points. Because a face has distinctive feature points such as the nose, eyes and mouth, selecting these feature points both improves matching accuracy and reduces the amount of computation, speeding up matching. The key points for three-dimensional face image matching are extracted as follows:
firstly, modeling a three-dimensional face image, establishing a face coordinate system by taking the transverse direction of the face as the x-axis, the longitudinal direction as the y-axis and the front and rear depth as the z-axis, and then performing rough positioning processing on the face to avoid redundant calculation and possible deviation, and in addition, the approximate position of the face feature can be extracted relatively well, so that the operation amount is simplified. In this embodiment, a semantic segmentation network is used to perform preliminary segmentation extraction on left and right eye regions, a nose region and a mouth region in a three-dimensional face image, and it should be noted that the process is a known technology and is not in the protection scope of this embodiment, and detailed description thereof is omitted here.
Because the structure of the face is constant, information such as the relative positions and spatial relations of the organs can be exploited. The left and right eye corner points are located first. Since the eyes have distinctive color information and rich edge information, this embodiment uses a three-dimensional contour extraction algorithm to obtain rough contours of the left and right eye regions, and then locates the left and right corners of each eye from the information of the three-dimensional contour point clouds on those contours. The three-dimensional contour extraction algorithm and extraction process are not within the protection scope of this embodiment; they can be realized with the prior art and are not described in detail. Note that in the frontal three-dimensional face image both eyes are visible, whereas in a lateral face image only the corners of one eye can be analyzed; this embodiment therefore locates the left and right corners of each visible eye. Taking any one eye as an example, the left and right eye corner key points are extracted by:
in the method, in the process of the application,as a key point of the right eye corner,respectively representing three-dimensional coordinates of the three-dimensional contour point cloud k,is the curvature of the three-dimensional contour point cloud k,as a key point of the left eye corner,for the corresponding inner angle of the three-dimensional outline point cloud k in the three-dimensional outline of the eye, the construction logic of the eye corner key points is as follows: aiming at the three-dimensional outline of the eye, selecting a point cloud with the maximum curvature and the maximum sum of three-dimensional coordinates on the outline as a right eye corner jointA key point; and selecting a point cloud with the minimum sum of three-dimensional coordinates on the outline and the minimum corresponding inner angle in the three-dimensional outline as a left eye corner key point. Wherein, the liquid crystal display device comprises a liquid crystal display device,is the first sum.
Repeating this procedure yields the left and right corner key points of the left eye and of the right eye in each three-dimensional face image. Note that, since this embodiment analyzes three face images (left side, right side and front), the corner key points are located for each eye contour: the frontal image can clearly locate the corner key points of both eyes, while the left and right images can locate only the corner key points of the eye contour that is identified. This completes the localization and extraction of the eye corner key points.
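The two selection rules above reduce to an argmax and an argmin over per-point sums. A sketch, under the assumption that the per-point curvature and inner-angle arrays have already been computed:

```python
import numpy as np

def eye_corner_keypoints(contour_pts, curvature, inner_angle):
    """Left/right eye-corner key points from one eye's 3-D contour.

    contour_pts: (N, 3) contour point cloud; curvature, inner_angle: (N,)
    per-point values. Right corner maximizes x + y + z + curvature (the
    first sum value); left corner minimizes x + y + z + inner angle (the
    second sum value). Returns the two point indices.
    """
    coord_sum = contour_pts.sum(axis=1)
    right_idx = int(np.argmax(coord_sum + curvature))
    left_idx = int(np.argmin(coord_sum + inner_angle))
    return left_idx, right_idx
```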
After the eye corner key points are determined, the position of the nose tip is determined from them and from the relative distances between organs. Corner detection is performed on the nose area of the three-dimensional face image to obtain the corner value of each point cloud; many existing corner detection algorithms are available and the practitioner may choose among them. This embodiment adopts the existing Harris corner detection algorithm; the specific detection process is a known technique, is not within the protection scope of this embodiment, and is not described in detail. Denoting the corner value of point cloud $i$ by $H_i$, the nose tip positioning probability factor of each point cloud in the nose area is calculated as
$$F_i = H_i \cdot Z_i \cdot \exp\!\bigl(-(d_i^{\min} - T)^2\bigr)$$
where $F_i$ is the nose tip positioning probability factor of point cloud $i$, $d_i^{\min}$ is the minimum Euclidean distance from the three-dimensional coordinates of the eye corner key points to point cloud $i$, $T$ is the distance threshold, and $Z_i$ is the Z coordinate value of point cloud $i$. The distance threshold is not limited in this embodiment and may be set by the practitioner. The construction logic of the nose tip positioning probability factor is as follows: the nose tip is the most protruding point of the face, so the larger a point cloud's depth value (i.e., its Z coordinate), the larger its nose tip positioning probability factor. The nose tip is not only protruding but located at the apex of the nose, and compared with the other facial organs it is the area with the most pronounced angle, so the larger a point cloud's corner value, the larger its nose tip positioning probability factor. In addition, because the facial organs keep fixed relative distances (the classical "three courts and five eyes" proportions), this embodiment considers that the closer the Euclidean distance between a point cloud and its nearest eye corner key point is to the distance threshold, the larger the nose tip positioning probability factor of that point cloud.
Repeating the above procedure yields the nose tip positioning probability factor of each point cloud in the nose area; the point cloud with the largest factor is taken as the nose tip key point. The nose tip key point can be located in each of the three three-dimensional face images in this way. This completes the extraction of the nose tip key point.
Further, to improve the accuracy of key point extraction and localization, this embodiment locates the position of the mouth in the three-dimensional face image from the position of the nose tip, making use of the spatial relationship between the five sense organs. For the mouth region, the upper lip peak point is determined from the position of the nose tip key point. Contour detection is performed on the mouth region to obtain the rough three-dimensional contour of the mouth. Since the human mouth is divided into an upper and a lower lip, and the lips themselves carry lip lines, the rough three-dimensional contour contains a number of irrelevant contour lines (such as lip-line contours) that would interfere with the accurate localization of the upper lip peak point and the lower boundary point. Because this embodiment only extracts key points, only the outermost three-dimensional contour point cloud of the rough mouth contour is analyzed; it is recorded as the mouth outer-contour point cloud. The three-dimensional contour detection and extraction process is a known technique, is not within the protection scope of this embodiment, and is not described in detail.
From the mouth outer-contour information the upper lip peak point and the lower boundary point of the mouth are extracted as follows. Taking the horizontal coordinate (abscissa) of the nose tip key point as reference, the point clouds are traversed downward along the vertical line through that abscissa; the first mouth outer-contour point cloud encountered in the mouth area, which shares the nose tip key point's abscissa, is recorded as the upper lip peak point. Continuing downward along the same vertical line from the upper lip peak point, the second mouth outer-contour point cloud is recorded as the lower boundary point. The midpoint of the line connecting the upper lip peak point and the lower boundary point is taken as the lip center point; starting from the lip center point, the mouth outer-contour point clouds on the left and right sides in the horizontal direction are obtained and recorded as the left and right mouth corner key points. The left and right mouth corner key points obtained in this way serve as facial key points for analyzing facial features.
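The traversal just described can be sketched on a discrete point cloud as follows. Two simplifications are assumed and flagged here: an x-tolerance stands in for "same abscissa" on discrete points, and the mouth corners are approximated by the x-extremes of the outer contour rather than an explicit horizontal walk from the lip center:

```python
import numpy as np

def mouth_keypoints(outer_contour, nose_tip_x, x_tol=0.5):
    """Upper-lip peak, lower boundary, lip center and mouth-corner key
    points from the mouth outer-contour point cloud.

    outer_contour: (N, 3) points with y increasing downward;
    nose_tip_x: abscissa of the nose tip key point; x_tol: assumed
    tolerance for matching the nose tip's abscissa.
    """
    # points on the vertical line through the nose tip, ordered top-down
    on_line = outer_contour[np.abs(outer_contour[:, 0] - nose_tip_x) <= x_tol]
    on_line = on_line[np.argsort(on_line[:, 1])]
    upper_lip_peak, lower_boundary = on_line[0], on_line[1]
    # lip center = midpoint of the segment joining the two points
    center = (upper_lip_peak + lower_boundary) / 2.0
    # mouth corners approximated by the left-most / right-most contour points
    left_corner = outer_contour[np.argmin(outer_contour[:, 0])]
    right_corner = outer_contour[np.argmax(outer_contour[:, 0])]
    return upper_lip_peak, lower_boundary, center, left_corner, right_corner
```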
Repeating this procedure yields the mouth corner key points of the mouth area in each three-dimensional face image, as shown in fig. 2. Note that the mouth area differs between the three-dimensional face images; by obtaining the mouth area in each image, the mouth corner key points can be extracted and identified according to the method of this embodiment.
In this way, each key point in each three-dimensional facial image can be extracted and positioned, and the facial information can be represented by these key points, which reduces the computational cost of the system and improves the detection speed. At the same time, accurate registration of the facial information can be achieved from the extracted key points, avoiding the large matching errors caused by involving excessive point cloud data.
Then the facial key points to be positioned are registered. Because this embodiment acquires three-dimensional facial images of the same person at different angles, for each key point extracted from each three-dimensional facial image, an n*n*n local spatial neighborhood centered on the key point is acquired, where n is the preset local spatial neighborhood size; it should be noted that n must be an odd number. The practitioner may set n according to the actual situation, and this embodiment is not limited thereto.
Further, the local feature of each key point is extracted from its local spatial neighborhood. First, a gray threshold is set; a point cloud whose gray-value difference from the key point is lower than the gray threshold is marked as 0, and a point cloud whose gray-value difference from the key point is higher than the gray threshold is marked as 1. The point cloud values of all points in the local spatial neighborhood are then assembled into a binary bit vector, recorded as the feature vector of the key point, in which each element is 0 or 1. In this embodiment the gray threshold is set to 10; the practitioner may set the gray threshold according to the actual situation, and this embodiment is not limited thereto.
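A minimal sketch of this binary descriptor, assuming the facial point cloud has been resampled onto a voxel grid of gray values (the grid representation, the function name, and the inclusion of the always-zero center bit are assumptions, not the patent's exact implementation):

```python
import numpy as np

def binary_feature_vector(gray, key, n=5, gray_threshold=10):
    """Binary descriptor of a key point from its n*n*n spatial neighborhood.

    gray: 3-D array of gray values on a voxel grid (assumed representation).
    key:  (z, y, x) index of the key point; n must be odd.
    A neighbor is coded 1 when its gray value differs from the key point's
    by more than gray_threshold, and 0 otherwise.
    """
    assert n % 2 == 1, "neighborhood size n must be odd"
    r = n // 2
    z, y, x = key
    # Extract the n*n*n block centered on the key point.
    block = gray[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
    diff = np.abs(block.astype(int) - int(gray[z, y, x]))
    # Threshold the differences and flatten into an n**3-bit vector.
    bits = (diff > gray_threshold).astype(np.uint8).ravel()
    return bits  # the center bit is always 0
```

The descriptor is deliberately coarse (one bit per neighbor), which keeps the later vector-addition fusion and similarity comparison cheap.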
In order to improve the feature characterization accuracy of the key points, this embodiment registers the corresponding key points of the three acquired three-dimensional facial images and extracts the fusion feature vector of each key point from its feature vectors in these images. Each facial organ captured in the frontal image is complete and unoccluded, while the images captured from the left and right sides can only clearly acquire the information of some of the key points. Therefore, to ensure the integrity and accuracy of the feature information of each key point, the feature vector of each key point in the left-side and right-side three-dimensional facial images is fused with the feature vector of the corresponding key point in the frontal three-dimensional facial image, the specific fusion operation being the addition of the two vectors. Taking the left corner key point of the left eye of the frontal three-dimensional facial image as an example, its fusion feature vector is:
F = f_front + f_left

where F is the fusion feature vector of the left corner key point of the left eye, f_front is the feature vector of that key point in the frontal three-dimensional facial image, and f_left is its feature vector in the left-view image. The logic of this fusion is to add together the feature vectors of the same key point observed in different views, so that the features of each key point can be acquired more accurately.
Repeat this method to obtain the fusion feature vectors of the other facial key points. It should be noted that the nose tip key point is clearly visible at all three shooting angles and therefore has three feature vectors, so the fusion feature vector of the nose tip is obtained by adding these three feature vectors. The fusion feature vector of every other key point is likewise obtained from the feature vectors of that key point extracted from the three-dimensional facial images in which it is visible.
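The fusion by vector addition can be illustrated as follows — a hypothetical sketch in which each view contributes a dict from key-point names to feature vectors, and a view simply omits the key points it cannot see clearly (the nose tip, visible in all three views, thus sums three vectors):

```python
import numpy as np

def fuse_feature_vectors(front, left, right):
    """Fuse per-view key-point feature vectors by element-wise addition.

    front/left/right: dicts mapping key-point names to feature vectors;
    a side view omits the key points it cannot observe clearly.  Every
    fused vector starts from the frontal view, and each side view that
    also observed the key point is added on top.
    """
    fused = {}
    for name, vec in front.items():
        total = np.asarray(vec, dtype=int).copy()
        for side in (left, right):
            if name in side:
                total += np.asarray(side[name], dtype=int)
        fused[name] = total
    return fused
```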
The three-dimensional registration module 103 performs three-dimensional image registration by combining the fusion feature vectors of the key points of the face of the photographed image, completes three-dimensional face image registration, and provides reliable face information for face care.
Through the above steps, the fusion feature vectors of all facial key points are obtained. The fusion feature vector of each key point of the user's face is compared with the fusion feature vector of the corresponding key point of the standard three-dimensional face model, and the accumulated dissimilarity over all key points is recorded as the face registration error rate, expressed as follows:
E = Norm( Σ_{i=1}^{K} exp(−cos(F_i, F'_i)) )

where E is the face registration error rate, F_i is the fusion feature vector of the i-th key point of the user's face, F'_i is the fusion feature vector of key point i in the standard three-dimensional face model, cos(·) is the cosine similarity function, Norm(·) is the normalization operation, and K is the number of key points. The smaller the value of the face registration error rate E, the better the face care effect, and the more closely the captured three-dimensional facial image fits the standard three-dimensional face model. It should be noted that the practitioner can acquire the standard three-dimensional face model according to the actual situation; it is not within the protection scope of this embodiment, and this embodiment is not limited in this respect. Meanwhile, the extraction of the key points and of the fusion feature vectors of the standard three-dimensional face model is the same as the method in this embodiment and is not repeated here.
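A sketch of this error rate under one plausible reading of the normalization operation Norm(·) — here it maps the accumulated sum onto [0, 1] via its attainable range; the exact normalization is not specified in the text, so this choice is an assumption:

```python
import numpy as np

def face_registration_error(fused, standard):
    """Face registration error rate between user and standard model.

    fused / standard: (K, d) arrays of fused feature vectors for the K
    key points.  Per key point we take exp(-cosine_similarity), sum over
    key points, and normalize into [0, 1]; smaller values mean the
    captured face fits the standard model better.
    """
    a = np.asarray(fused, dtype=float)
    b = np.asarray(standard, dtype=float)
    eps = 1e-12  # guard against zero-norm vectors
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps)
    scores = np.exp(-cos)  # per-key-point dissimilarity
    # Norm(.): rescale the sum by its attainable range [K/e, K*e],
    # one plausible normalization (an assumption, not the patent's).
    k = len(scores)
    lo, hi = k * np.exp(-1.0), k * np.exp(1.0)
    return (scores.sum() - lo) / (hi - lo)
```

With this choice, identical fused vectors give an error rate near 0 and exactly opposite vectors give an error rate near 1, matching the described decision against an error rate threshold of 0.5.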
This embodiment analyzes facial care according to the obtained registration result. An error rate threshold is set; when the face registration error rate is higher than the error rate threshold, the face care effect is not obvious and the face needs continued corresponding care; when the face registration error rate is lower than the error rate threshold, the face care is indicated to be effective. The face registration error rate constructed in this embodiment can provide reference information for the relevant personnel carrying out facial care, so that they can accurately understand the facial condition of a patient and take targeted care measures. The error rate threshold may be set by the practitioner; in this embodiment it is set to 0.5.
In summary, the method of the embodiment of the application can realize accurate registration of three-dimensional facial images, avoiding the problems of incomplete facial information extraction and low registration accuracy in two-dimensional face registration. The embodiment extracts facial features based on an analysis of the key feature points of the face, extracts the feature points at key facial positions from different angles, and acquires feature vectors that characterize them comprehensively, which improves the registration accuracy of three-dimensional facial images and solves problems such as the high feature matching error rate and low efficiency caused by incomplete facial feature extraction. The embodiment of the application achieves high face image registration accuracy, high registration efficiency and high speed.
It should be noted that: the sequence of the embodiments of the present application is only for description, and does not represent the advantages and disadvantages of the embodiments. The processes depicted in the accompanying drawings do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
In this specification, each embodiment is described in a progressive manner, and the same or similar parts of each embodiment are referred to each other, and each embodiment mainly describes differences from other embodiments.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it; where the technical solutions described in the foregoing embodiments are modified, or some of their technical features are replaced equivalently, such that the essence of the corresponding technical solution does not deviate from the scope of the technical solutions of the embodiments of the present application, they are all included in the protection scope of the present application.

Claims (10)

1. A face care system based on face three-dimensional image registration, the system comprising:
the three-dimensional acquisition module is used for acquiring three-dimensional face images with different visual angles;
the three-dimensional key point fusion feature vector detection module is used for acquiring left and right eye areas, a nose area and a mouth area of the three-dimensional face image; three-dimensional contour point clouds of the left eye region and the right eye region are respectively extracted, and left eye corner key points and right eye corner key points of each eye are obtained according to coordinates and curvature information of each three-dimensional contour point cloud of the eye region; acquiring angular point values of each point cloud in the nose area, and acquiring nose tip positioning probability factors of each point cloud according to the angular point values, coordinate information and distance information between each point cloud in the nose area and each eye corner key point; taking the point cloud with the maximum nasal tip positioning probability factor in the nasal area as a nasal tip key point; combining position information of the nose tip key points to obtain an upper lip peak point and a lower boundary point of the mouth, and obtaining left and right mouth corner key points according to the upper lip peak point and the lower boundary point of the mouth;
obtaining feature vectors of all the key points according to gray level differences of all the key points and all the point clouds in the local space neighborhood; obtaining fusion feature vectors of the key points according to the feature vectors of the key points;
and the three-dimensional registration module obtains a face registration error rate according to the fusion feature vectors of the key points, and combines the face registration error rate to complete the analysis of the face care result.
2. A face care system based on facial three-dimensional image registration as recited in claim 1, wherein the obtaining of said left eye corner key point comprises:
for each eye;
and obtaining the sum value and the inner angle of the three-dimensional coordinates of each three-dimensional contour point cloud in the eye area, calculating the sum of the sum value and the inner angle, marking the sum as a second sum value, and taking the three-dimensional contour point cloud with the smallest corresponding second sum value in the eye area as the key point of the left eye angle.
3. A face care system based on facial three-dimensional image registration as recited in claim 1, wherein the obtaining of said right eye corner key point comprises:
for each eye;
and obtaining the sum value and curvature of the three-dimensional coordinates of each three-dimensional contour point cloud in the eye area, calculating the sum of the sum value and the curvature, recording the sum as a first sum value, and taking the three-dimensional contour point cloud corresponding to the maximum first sum value in the eye area as a right eye corner key point.
4. The face care system based on face three-dimensional image registration according to claim 1, wherein the nose tip positioning probability factor of each point cloud is obtained according to the angular point value, the coordinate information and the distance information between each point cloud and each eye corner key point in the nose area, specifically:
for each point cloud in the nose area;
calculating the product result of the point cloud angular point value and the Z coordinate value, obtaining the minimum value of the Euclidean distance between the point cloud and the three-dimensional coordinate of each eye angle key point, calculating the square of the difference value between the minimum value and the distance threshold value, and obtaining the calculation result of an exponential function taking the natural constant as the base number and the opposite number of the square as the index;
and taking the product of the product result and the calculation result as a nose tip positioning probability factor of the point cloud.
5. The face care system based on face three-dimensional image registration according to claim 1, wherein the obtaining the upper lip peak point and the lower boundary point of the mouth by combining the position information of the nose tip key point comprises:
detecting a three-dimensional contour of a mouth area, and recording a three-dimensional contour point cloud of the outermost layer of the mouth as a mouth outer contour point cloud;
traversing each point cloud downwards along the vertical line corresponding to the abscissa of the nose tip key point, and taking the first mouth outer contour point cloud in the mouth area on the vertical line as the upper lip peak point of the mouth; and continuing to traverse the point clouds downwards along the vertical line from the upper lip peak point of the mouth, and taking the second mouth outer contour point cloud as the lower boundary point of the mouth.
6. The face care system based on three-dimensional image registration of a face according to claim 5, wherein the obtaining the left and right mouth corner key points according to the upper lip peak point and the lower boundary point of the mouth comprises:
and taking the midpoint of the connecting line of the upper lip peak point and the lower boundary point of the mouth as a lip center point, taking the lip center point as a starting point, and respectively obtaining mouth outline point clouds on the left side and the right side in the horizontal direction as key points of the left mouth corner and the right mouth corner.
7. The face care system based on face three-dimensional image registration according to claim 1, wherein the obtaining the feature vector of each key point according to the gray level difference between each key point and each point cloud in the local spatial neighborhood comprises:
for each key point;
obtaining an n*n*n local spatial neighborhood centered on the key point, wherein n is a preset local spatial neighborhood size;
marking the point cloud with the difference value between the gray value of the point cloud in the local space neighborhood and the gray value of the key point lower than the gray threshold value as 0; marking the point cloud with the difference value between the gray value of the point cloud in the local space neighborhood and the gray value of the key point higher than the gray threshold value as 1;
and taking a vector formed by the point cloud values of each point in the local spatial neighborhood as the feature vector of the key point.
8. The face care system based on face three-dimensional image registration according to claim 1, wherein the obtaining the fusion feature vector of each key point according to the feature vector of each key point comprises:
and adding the feature vectors obtained by the key points in the three-dimensional facial images with different visual angles to obtain the fusion feature vector of each key point.
9. The face care system based on three-dimensional image registration of face according to claim 1, wherein the obtaining the face registration error rate from the fused feature vector of each key point comprises:
and calculating cosine similarity between the fusion feature vector of each key point and the fusion feature vector of each corresponding key point in the standard three-dimensional face model, obtaining a calculation result of an exponential function taking the opposite number of the cosine similarity as an index based on a natural constant, and taking the normalized sum value of the calculation results of all the key points as a face registration error rate, wherein the face registration error rate and the cosine similarity form a negative correlation.
10. A face care system based on face three-dimensional image registration as defined in claim 1, wherein said completing the analysis of the face care result in combination with the face registration error rate comprises:
when the face registration error rate is higher than the error rate threshold, the face care effect is not obvious;
when the face registration error rate is below the error rate threshold, the face care effect is significant.
CN202311153799.3A 2023-09-08 2023-09-08 Face nursing system based on face three-dimensional image registration Active CN116883472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311153799.3A CN116883472B (en) 2023-09-08 2023-09-08 Face nursing system based on face three-dimensional image registration

Publications (2)

Publication Number Publication Date
CN116883472A true CN116883472A (en) 2023-10-13
CN116883472B CN116883472B (en) 2023-11-14

Family

ID=88257289

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311153799.3A Active CN116883472B (en) 2023-09-08 2023-09-08 Face nursing system based on face three-dimensional image registration

Country Status (1)

Country Link
CN (1) CN116883472B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20080035711A (en) * 2006-10-20 2008-04-24 연세대학교 산학협력단 Global feature extraction method for 3d face recognition
WO2018099276A1 (en) * 2016-11-30 2018-06-07 阿里巴巴集团控股有限公司 Identity authentication method and apparatus, and computing device
CN108615016A (en) * 2018-04-28 2018-10-02 北京华捷艾米科技有限公司 Face critical point detection method and face critical point detection device
CN111523407A (en) * 2020-04-08 2020-08-11 上海涛润医疗科技有限公司 Face recognition system and method and medical care recording system based on face recognition
WO2022111512A1 (en) * 2020-11-26 2022-06-02 杭州海康威视数字技术股份有限公司 Facial liveness detection method and apparatus, and device
CN115830663A (en) * 2022-05-27 2023-03-21 深圳市安华光电技术股份有限公司 Face three-dimensional key point extraction method and device and model creation method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CHEN Lisheng; WANG Binbin: "Three-dimensional face recognition based on geometric features and depth data", Computer Knowledge and Technology, no. 08 *

Also Published As

Publication number Publication date
CN116883472B (en) 2023-11-14

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20240205

Address after: 272000 No.1, JiDai Road, Jining City, Shandong Province

Patentee after: Shandong Daizhuang Hospital (Shandong Ankang Hospital, Jining Mental Health Center)

Country or region after: China

Address before: Room 310, 3rd Floor, Research Building, No. 3, No. 554 Zhengfeng Road, High tech Zone, Jinan City, Shandong Province, 250000

Patentee before: Shandong Deyixin Information Technology Co.,Ltd.

Country or region before: China