CN116705303A - Facial profile recognition method, device, terminal and storage medium - Google Patents

Facial profile recognition method, device, terminal and storage medium

Info

Publication number
CN116705303A
CN116705303A (application CN202310722579.1A)
Authority
CN
China
Prior art keywords
face
target
coordinate information
target object
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310722579.1A
Other languages
Chinese (zh)
Inventor
张旺
周宸
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202310722579.1A
Publication of CN116705303A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Image Analysis (AREA)

Abstract

Embodiments of this application relate to the field of digital medicine and provide a facial dysmorphism recognition method, device, terminal device and storage medium. The method comprises: inputting a face image into a three-dimensional face reconstruction network to obtain facial geometric features of a target object; determining, from the facial geometric features, target-region geometric features corresponding to a region to be analyzed in the face image of the target object; determining target coordinate information corresponding to a target face key point from the coordinate information of the face key points in the target-region geometric features, and determining, according to the target coordinate information, neighborhood coordinate information corresponding to neighborhood face key points from the facial geometric features of the target object, the neighborhood face key points being adjacent to the target face key point; calculating distance information between the normal at the target coordinate information and the normal at each piece of neighborhood coordinate information, and determining the degree of dispersion of the face key points in the target-region geometric features according to the distance information; and determining the facial dysmorphism category of the target object according to the degree of dispersion.

Description

Facial profile recognition method, device, terminal and storage medium
Technical Field
The present application relates to the field of digital medicine, and in particular to a facial dysmorphism recognition method, a facial dysmorphism recognition device, a terminal device, and a storage medium.
Background
In traditional Chinese medicine, facial diagnosis distinguishes four categories of abnormality: facial swelling, cheek swelling, a gaunt face with raised cheekbones, and facial distortion. When these symptoms occur, the patient's facial state is abnormal, and symptoms that affect the patient's normal life may follow. Timely intervention in the early stage of onset can therefore better help patients. In addition, the result of facial dysmorphism detection can serve as auxiliary information for clinical diagnosis, helping doctors diagnose diseases.
Manual identification of a user's facial dysmorphism requires rich clinical experience to be accurate, but the gap between the number of users and the number of doctors is large, and an experienced doctor takes considerable time and cost to train. Accurately and automatically identifying a user's facial dysmorphism is therefore of significant value.
Disclosure of Invention
The main purpose of the embodiments of this application is to provide a facial dysmorphism recognition method, device, terminal device and storage medium, aiming to improve the accuracy of facial dysmorphism recognition and thereby better help doctors diagnose diseases.
In a first aspect, an embodiment of the present application provides a facial dysmorphism recognition method, applied to a terminal device, the method comprising:
acquiring a face image of a target object and inputting the face image into a three-dimensional face reconstruction network to obtain facial geometric features of the target object, the facial geometric features comprising at least coordinate information of a plurality of preset face key points and a plurality of face patches, wherein each face patch is determined by the coordinate information of three adjacent face key points and the face patches form the reconstructed three-dimensional face model of the target object;
determining, from the facial geometric features, target-region geometric features corresponding to a region to be analyzed in the face image of the target object;
determining target coordinate information corresponding to a target face key point from the coordinate information of the face key points in the target-region geometric features, and determining, according to the target coordinate information, neighborhood coordinate information corresponding to neighborhood face key points from the facial geometric features of the target object, wherein the neighborhood face key points are adjacent to the target face key point;
calculating distance information between the normal at the target coordinate information and the normal at each piece of neighborhood coordinate information, and determining the degree of dispersion of the face key points in the target-region geometric features according to the distance information; and
determining the facial dysmorphism category of the target object according to the degree of dispersion.
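The five steps above can be sketched end to end as follows. This is a minimal illustration rather than the patent's implementation: `reconstruct` is a hypothetical stand-in for the three-dimensional face reconstruction network, the dispersion measure is simplified to target-to-neighbor point distances rather than distances between normals, and the threshold-based category decision is an assumed placeholder.

```python
import numpy as np

def recognize_dysmorphism(face_image, reconstruct, region_index,
                          target_idx, neighbor_idx, threshold=0.1):
    """End-to-end sketch of the five steps.

    `reconstruct` stands in for the three-dimensional face reconstruction
    network and must return (keypoints (N, 3), patches (M, 3)).  The
    dispersion measure and the final thresholding are illustrative
    simplifications, not the patent's actual computations.
    """
    keypoints, patches = reconstruct(face_image)        # step S1
    region = keypoints[region_index]                    # step S2: target-region features
    target = keypoints[target_idx]                      # step S3: target coordinates
    neighbors = keypoints[neighbor_idx]                 #          neighborhood coordinates
    # step S4: simplified here to the spread of target-to-neighbor
    # distances instead of distances between normals
    dists = np.linalg.norm(neighbors - target, axis=1)
    dispersion = float(dists.std())
    # step S5: map the degree of dispersion to a category
    return "dysmorphic" if dispersion > threshold else "normal"
```

A symmetric neighborhood (all neighbors equidistant from the target) yields zero dispersion and is classified as normal under this simplification.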
In a second aspect, an embodiment of the present application further provides a facial dysmorphism recognition device, comprising:
a data acquisition module, configured to acquire a face image of a target object and input the face image into a three-dimensional face reconstruction network to obtain facial geometric features of the target object, the facial geometric features comprising at least coordinate information of a plurality of preset face key points and a plurality of face patches, wherein each face patch is determined by the coordinate information of three adjacent face key points and the face patches form the reconstructed three-dimensional face model of the target object;
a data processing module, configured to determine, from the facial geometric features, target-region geometric features corresponding to a region to be analyzed in the face image of the target object;
a data collection module, configured to determine target coordinate information corresponding to a target face key point from the coordinate information of the face key points in the target-region geometric features, and to determine, according to the target coordinate information, neighborhood coordinate information corresponding to neighborhood face key points from the facial geometric features of the target object, the neighborhood face key points being adjacent to the target face key point;
a data calculation module, configured to calculate distance information between the normal at the target coordinate information and the normal at each piece of neighborhood coordinate information, and to determine the degree of dispersion of the face key points in the target-region geometric features according to the distance information; and
a data analysis module, configured to determine the facial dysmorphism category of the target object according to the degree of dispersion.
In a third aspect, an embodiment of the present application further provides a terminal device, comprising a processor, a memory, a computer program stored in the memory and executable by the processor, and a data bus implementing connection and communication between the processor and the memory, wherein the computer program, when executed by the processor, implements the steps of any facial dysmorphism recognition method provided in this specification.
In a fourth aspect, an embodiment of the present application further provides a computer-readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of any facial dysmorphism recognition method provided in this specification.
Embodiments of the present application provide a facial dysmorphism recognition method, device, terminal device and storage medium. The method obtains the facial geometric features of a target object by passing a face image through a three-dimensional reconstruction network, and then obtains, from those features, the target-region geometric features corresponding to the region to be analyzed in the face image of the target object. Target coordinate information corresponding to several target face key points is selected from the target-region geometric features; neighborhood coordinate information of the neighborhood face key points of each target face key point is determined from the target face key points; distance information between the normal at the target coordinate information and the normal at each piece of neighborhood coordinate information is calculated; and the degree of dispersion of the face key points in the target-region geometric features is determined according to the distance information. The facial dysmorphism category of the target object is then determined from the degree of dispersion. The category judgment thus relies on distance information computed from target and neighborhood coordinate information in the user's facial geometric features, which improves the accuracy of facial dysmorphism recognition. The judgment result can further serve as auxiliary information for clinical diagnosis, helping doctors diagnose diseases, and supports timely intervention in the early stage of onset, giving users better treatment.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below. The drawings described below show some embodiments of the present application; a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a facial profile recognition method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a three-dimensional face model reconstructed by the facial dysmorphism recognition method provided by the embodiment of the application;
fig. 3 is a schematic flow chart of performing three-dimensional face reconstruction on a face image with a preset model in the facial dysmorphism recognition method provided by an embodiment of the application;
FIG. 4 is a schematic diagram of face keypoints and neighborhood face keypoints in a three-dimensional face model;
fig. 5 is a schematic block diagram of a facial profile recognition device according to an embodiment of the present application;
fig. 6 is a schematic block diagram of a structure of a terminal device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, but not all, of the embodiments of the application. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the application.
The flow charts shown in the figures are merely illustrative; they need not include every element and operation/step, nor must the steps be executed in the order described. For example, some operations/steps may be divided, combined, or partially merged, so the actual order of execution may change according to the situation.
It is to be understood that the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
The embodiments of the application provide a facial dysmorphism recognition method, device, terminal device and storage medium. The facial dysmorphism recognition method can be applied to a terminal device, which may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, a wearable device, or a server; the server may be an independent server or a server cluster.
Some embodiments of the application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a schematic flow chart of a facial dysmorphism recognition method according to an embodiment of the application.
As shown in fig. 1, the facial dysmorphism recognition method includes steps S1 to S5.
Step S1: the face image of a target object is acquired, the face image is input into a three-dimensional face reconstruction network, the face geometric characteristics of the target object are obtained, the face geometric characteristics at least comprise coordinate information of a plurality of preset face key points and a plurality of face patches, each face patch is determined by the coordinate information of three adjacent face key points, and the face patches form a three-dimensional face model after the target object is reconstructed.
Illustratively, the face image of the target object is acquired, and the face image is input into a three-dimensional face reconstruction network, so that the face geometric characteristics of the three-dimensional target object corresponding to the face image of the two-dimensional target object are obtained. Compared with a two-dimensional plane, more information can be reflected in the three-dimensional space, and the target object can be observed from any view point more intuitively, comprehensively and conveniently, so that the three-dimensional visual effect is achieved. The facial geometric features at least comprise coordinate information of a plurality of preset face key points and a plurality of face patches, each face patch is determined by the coordinate information of three adjacent face key points, and the face patches form a three-dimensional face model after target object reconstruction.
For example, a face photograph may be taken using a cell phone without strict restrictions on pose, expression, illumination, background environment, etc. And inputting the face image into a three-dimensional face reconstruction network to obtain a reconstructed three-dimensional face model which is expressed as face key points and face patches under a plurality of Euclidean coordinate systems of a three-dimensional space, wherein each face patch consists of three face key points. The number of face key points and face patches is determined by a three-dimensional reconstruction algorithm, for example, 1220 vertexes, 2403 triangles are acquired by using an iPhone mobile phone in an ARKitFace dataset, a three-dimensional face model is shown in fig. 2, the number of the face key points and face patches after the specific three-dimensional reconstruction algorithm and three-dimensional reconstruction is not limited, the face key points are shown in fig. 2 at 101, and the face patches are shown in fig. 2 at 102.
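As a concrete illustration of this representation, the key-point coordinates and face patches can be stored as a vertex array and a triangle index array. The toy mesh below is hypothetical (a real reconstruction has on the order of the 1220 vertices and 2403 triangles cited above):

```python
import numpy as np

# Coordinate information of the preset face key points: one (x, y, z)
# row per key point, in the Euclidean coordinate system of the
# reconstructed model (toy values, for illustration only).
keypoints = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [1.0, 1.0, 0.1],
])

# Face patches: each row indexes the three adjacent key points whose
# coordinate information determines one triangular patch; together the
# patches form the reconstructed three-dimensional face model.
patches = np.array([[0, 1, 2], [1, 3, 2]])

# Coordinates of the three key points determining patch 0:
patch0 = keypoints[patches[0]]   # shape (3, 3)
```

Indexing the vertex array by a row of the triangle array recovers exactly the "three adjacent face key points" that determine one patch.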
In some embodiments, acquiring the face image of the target object and inputting it into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object comprises: receiving a face image of the target object sent by an image acquisition device, the face image being obtained by the image acquisition device capturing a picture of the target object and cropping the face region of the target object out of that picture; and inputting the face image into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object.
Illustratively, the image acquisition device captures a picture of the target object, obtains the face position information in the picture through image processing, crops the face region according to that information to obtain the face image of the target object, and sends the face image to the terminal device. After receiving the face image, the terminal device inputs it into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object.
For example, the image acquisition device comprises an image acquisition module and an image processing module. The image acquisition module, which may be a mobile phone or a camera, photographs the target object and sends the captured picture to the image processing module. On receiving the picture, the image processing module performs face detection to obtain the position and size of the face in the picture, crops the face region accordingly to obtain the face image of the target object, and sends the face image to the terminal device.
Optionally, face detection may scale the picture to different sizes and scan each scaled picture with windows of a fixed size: a window-framed region of the picture is selected as the observation object, and the window is slid in sequence to update the framed region; features are then extracted from the framed region to obtain its feature vector; and whether the framed region contains exactly one face is judged from the feature vector. If it does, the information of the framed region is converted into the position and size of the face in the picture; otherwise, the window continues to slide and the framed region is updated.
Optionally, judging from the feature vector whether the framed region contains exactly one face can be regarded as classifying the framed region into one of two classes: face window and non-face window.
The picture is cropped according to the position and size of the face to obtain the face image of the target object, which is transmitted to the terminal device; after receiving the face image, the terminal device inputs it into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object.
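The multi-scale sliding-window detection described above can be sketched as follows. This is a simplified illustration, not a production detector: `classify` is a hypothetical stand-in for the feature-extraction and face/non-face classification step, and the nearest-neighbour rescale stands in for a proper image resize.

```python
import numpy as np

def sliding_window_detect(picture, classify, win=24, stride=8,
                          scales=(1.0, 0.75, 0.5)):
    """Multi-scale sliding-window face detection sketch.

    `classify(window) -> bool` stands in for feature extraction plus
    face-window / non-face-window classification; any real classifier
    could be plugged in.  Returns (x, y, size) boxes mapped back to
    the coordinates of the original picture.
    """
    h, w = picture.shape[:2]
    boxes = []
    for s in scales:
        sh, sw = int(h * s), int(w * s)
        # nearest-neighbour rescale of the picture to this scale
        ys = (np.arange(sh) / s).astype(int).clip(0, h - 1)
        xs = (np.arange(sw) / s).astype(int).clip(0, w - 1)
        scaled = picture[np.ix_(ys, xs)]
        # slide a fixed-size window over the scaled picture
        for y in range(0, sh - win + 1, stride):
            for x in range(0, sw - win + 1, stride):
                if classify(scaled[y:y + win, x:x + win]):
                    # convert the framed region back to the position
                    # and size of the face in the original picture
                    boxes.append((int(x / s), int(y / s), int(win / s)))
    return boxes
```

With a toy brightness-based classifier, windows overlapping a bright "face" region are reported while fully dark windows are rejected.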
In some embodiments, inputting the face image into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object comprises: inputting the face image into an image feature extraction network of the three-dimensional face reconstruction network to obtain two-dimensional feature information of the face image and to initialize a three-dimensional face model for it; and continuously adjusting the three-dimensional face model with the two-dimensional feature information through a graph convolutional neural network of the three-dimensional face reconstruction network to obtain a target three-dimensional face model composed of the facial geometric features of the target object.
Illustratively, three-dimensional information is recovered from the two-dimensional face image; compared with a two-dimensional plane, three-dimensional space can convey more information and is more intuitive and comprehensive. Features are extracted from the face image by the image feature extraction network, and a three-dimensional face model is initialized for the image; the graph convolutional neural network of the three-dimensional face reconstruction network then continuously adjusts the initialized model according to the two-dimensional feature information, adding detail from coarse to fine and thereby deforming the initial three-dimensional face model into the target three-dimensional face model.
For example, as shown in fig. 3, a fixed-size ellipsoid (e.g., with axial radii of 0.2 m and 0.8 m) is initialized as the initial three-dimensional face model for any input face image; features are then extracted from the face image by a multi-layer convolutional neural network; and a cascade of mesh deformation modules is designed, each consisting of a graph convolutional network. A perceptual feature pooling layer aggregates two-dimensional projected image features at each node, the node states of the graph convolutional network on the three-dimensional mesh are adjusted with these two-dimensional image features, and the initial ellipsoid is continuously deformed according to the features until it approaches the real three-dimensional face model.
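The ellipsoid initialization can be sketched as follows; the radii passed in below are illustrative placeholders (the text cites fixed radii on the order of 0.2 m and 0.8 m), and the subsequent graph-convolution deformation is omitted.

```python
import numpy as np

def init_ellipsoid(rx, ry, rz, n_lat=16, n_lon=32):
    """Vertices of an axis-aligned ellipsoid, usable as the fixed-size
    initial mesh that cascaded mesh-deformation (graph convolution)
    modules then deform toward the real face.

    rx, ry, rz: axial radii; n_lat/n_lon: angular sampling resolution.
    """
    theta = np.linspace(0.0, np.pi, n_lat)                 # polar angle
    phi = np.linspace(0.0, 2.0 * np.pi, n_lon, endpoint=False)
    t, p = np.meshgrid(theta, phi, indexing="ij")
    # spherical parameterization scaled by the three radii
    verts = np.stack([rx * np.sin(t) * np.cos(p),
                      ry * np.sin(t) * np.sin(p),
                      rz * np.cos(t)], axis=-1).reshape(-1, 3)
    return verts

verts = init_ellipsoid(0.2, 0.2, 0.8)   # illustrative radii
```

Every generated vertex satisfies the ellipsoid equation exactly, so the mesh starts from a smooth closed surface before any deformation step.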
In some embodiments, acquiring the face image of the target object and inputting it into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object further comprises: obtaining a picture of the target object from an image acquisition device, which sends the picture to the terminal device; after the picture is received, determining the face region of the target object in the picture and cropping that region to obtain the face image of the target object; and inputting the face image into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object.
Illustratively, the image acquisition device captures a picture of the target object and sends it to the terminal device. After receiving the picture, the terminal device obtains the face position information in it, crops the face region according to that information to obtain the face image of the target object, and inputs the face image into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object.
For example, the image acquisition device may be a mobile phone or a camera. It captures a picture of the target object and sends it to the terminal device. After receiving the picture, the terminal device performs face detection on it to obtain the position and size of the face, crops the face region accordingly to obtain the face image of the target object, and inputs the face image into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object.
Optionally, face detection here may proceed in the same sliding-window manner described above: the picture is scaled to different sizes and scanned with fixed-size windows, a feature vector is extracted from each framed region, and the feature vector is used to classify the region as a face window or a non-face window; a region classified as a face window is converted into the position and size of the face in the picture.
Step S2: determine, from the facial geometric features, the target-region geometric features corresponding to the region to be analyzed in the face image of the target object.
For example, facial diagnosis in traditional Chinese medicine distinguishes four categories: facial swelling, cheek swelling, a gaunt face with raised cheekbones, and facial distortion. Different facial dysmorphisms correspond to different facial areas, so when judging whether the target object suffers from a given facial dysmorphism, the corresponding area needs to be analyzed.
For example, when the facial dysmorphism is facial swelling, the target-region geometric features corresponding to the region to be analyzed are the two cheek regions of the face; when the facial dysmorphism is deviation of the mouth and eyes, they comprise the region around the mouth and the region around the eyes.
For example, the face may be partitioned according to facial structure: the vertical midline of the nose divides it into a left face and a right face, and horizontal lines at the eyebrows, eyes, nose and mouth divide these further into different regions. When performing facial dysmorphism recognition, the relevant partitioned region is set as the region to be analyzed, from which the corresponding target-region geometric features can be obtained.
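The partition by nose midline and horizontal lines can be sketched as follows; `nose_x` and `band_edges` are assumed inputs (e.g. landmark-derived coordinates), not values given in the text.

```python
import numpy as np

def partition_regions(keypoints, nose_x, band_edges):
    """Split face key points into left/right of the vertical nose
    midline, then into horizontal bands (e.g. cut at eyebrow, eye,
    nose and mouth heights).  Returns {(side, band): index array}.

    keypoints:  (N, 3) array of key-point coordinates.
    nose_x:     x coordinate of the vertical midline through the nose.
    band_edges: increasing y coordinates of the horizontal cut lines.
    """
    y_bands = np.digitize(keypoints[:, 1], band_edges)   # band per point
    side = np.where(keypoints[:, 0] < nose_x, "left", "right")
    regions = {}
    for i in range(len(keypoints)):
        regions.setdefault((side[i], int(y_bands[i])), []).append(i)
    return {k: np.array(v) for k, v in regions.items()}
```

Selecting one `(side, band)` entry then yields the indices of the key points whose coordinate information forms the target-region geometric features.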
Step S3: and determining target coordinate information corresponding to a target face key point from the coordinate information of the face key point of the geometric feature of the target region, and determining neighborhood coordinate information corresponding to a neighborhood face key point from the geometric feature of the face of the target object according to the target coordinate information, wherein the neighborhood face key point is adjacent to the target face key point.
Taking face swelling around cheekbones as an example, assuming that the visual appearance of face edema is round and spherical, the normal direction trend of key points in the geometric features of the corresponding target area tends to be discrete, if one key point and a plurality of neighborhood key points are selected, the degree of dispersion of the face key points in the geometric features of the target area can be obtained by analyzing the key points and the plurality of neighborhood key points.
For example, the periphery of the cheekbones is selected as the region to be analyzed, one point in the corresponding target region geometric features is selected as the key point, and 26 points around it are selected as the corresponding neighborhood face key points. The number of neighborhood face key points can be chosen according to the specific situation of the target region to be analyzed and is not limited to 26. As shown in fig. 4, a first-color circle marks a selected face key point and a second-color circle marks a neighborhood face key point adjacent to it; the two colors can be set as needed, for example the first-color circle is gray and the second-color circle is white.
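Selecting a key point and its surrounding neighborhood can be sketched as a nearest-neighbour query over the key-point coordinates. This is a minimal sketch under our own assumptions; the application does not specify how the 26 neighbours are chosen, so Euclidean nearest neighbours are used here for illustration.

```python
import numpy as np

def select_neighborhood(points, center_idx, k=26):
    """Return indices of the k key points closest to points[center_idx].

    points: (N, 3) key-point coordinates.
    The center point itself (distance 0) is excluded from the result.
    """
    d = np.linalg.norm(points - points[center_idx], axis=1)
    order = np.argsort(d)
    return order[1:k + 1]  # order[0] is the center itself
```

With the default `k=26` this reproduces the cheekbone example; `k` can be reduced for smaller regions such as the area around the mouth.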
In some embodiments, determining target coordinate information corresponding to a target face key point from coordinate information of the face key point of the target region geometric feature, and determining neighborhood coordinate information corresponding to a neighborhood face key point from the face geometric feature of the target object according to the target coordinate information, includes: determining first target coordinate information corresponding to a first target face key point from the coordinate information of the face key point of the geometric feature of the target area; determining first neighborhood coordinate information corresponding to a first neighborhood face key point from the face geometric features of the target object according to the first target coordinate information; determining second target coordinate information corresponding to a second target face key point from the coordinate information of the face key point of the geometric feature of the target area; determining second neighborhood coordinate information corresponding to a second neighborhood face key point from the facial geometric features of the target object according to the second target coordinate information; and forming the first target coordinate information corresponding to the first target face key point and the second target coordinate information corresponding to the second target face key point into target coordinate information corresponding to the target face key point.
Taking facial swelling around the cheekbones as an example, if one key point and a plurality of neighborhood key points are selected, the degree of dispersion of the face key points in the target region geometric features can be obtained by analyzing them. However, the key point is selected at random, and to reduce the misjudgment rate introduced by random selection, key points and their corresponding neighborhood key points need to be selected multiple times, and whether the target object has a facial abnormality is then judged comprehensively from the analysis results of the multiple selections.
For example, suppose that on the first selection of a key point and its neighborhood coordinate information from the target region geometric features of the target object, the analysis concludes that no facial abnormality is present, while on the second selection the analysis concludes that a facial abnormality is present. If only one key point were selected for analysis, the judgment result would be accidental, which would greatly reduce the accuracy of facial abnormality recognition. Therefore, multiple key points and their corresponding neighborhood key points are selected, and whether the target object has a facial abnormality is judged comprehensively from the analysis results of the multiple selections, improving the accuracy of facial abnormality recognition.
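The multi-selection judgment can be sketched as repeated random sampling with a majority vote. The function names, sample count, and strict-majority rule below are our illustrative assumptions; the application only states that multiple selections are combined.

```python
import random

def is_abnormal(region_indices, analyze_point, n_samples=5, seed=0):
    """Judge abnormality by majority vote over several sampled key points.

    region_indices: candidate key-point indices in the target region.
    analyze_point: hypothetical per-key-point test, idx -> bool
                   (e.g. dispersion of that point's neighborhood normals
                   exceeding a threshold).
    """
    rng = random.Random(seed)  # fixed seed only for reproducibility here
    picks = rng.sample(region_indices, min(n_samples, len(region_indices)))
    votes = sum(1 for idx in picks if analyze_point(idx))
    return votes * 2 > len(picks)  # strict majority of selections agree
```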
Step S4: calculating distance information between the normal line of the target coordinate information and the normal line of each neighborhood coordinate information, and determining the dispersion degree of the face key points in the geometric features of the target area according to the distance information.
Taking facial edema around the cheekbones as an example, assuming that the visual appearance of facial edema is round and approximately spherical, the normal directions of the corresponding region tend to be dispersed. A key point is selected and its target coordinate information obtained, the neighborhood key points are selected and their neighborhood coordinate information obtained, and the distance information between the normal of the target coordinate information and the normals of all the neighborhood coordinate information is calculated. If the distance information is sufficiently dispersed, but not excessively so, the region is approximately spherical. The degree of dispersion of the face key points in the target region geometric features is then determined from the distance information.
In some embodiments, calculating distance information between the normal line of the target coordinate information and the normal line of each neighborhood coordinate information, and determining the dispersion degree of the face key points in the geometric feature of the target area according to the distance information includes: calculating cosine distances between the normal of the target coordinate information and the normal of each neighborhood coordinate information, calculating a cosine mean value and a cosine variance corresponding to the cosine distances, and taking the cosine mean value and the cosine variance as the distance information; and calculating the dispersion degree of the face key points in the geometric characteristics of the target area according to the distance information and the preset value.
Taking facial edema around the cheekbones as an example, assuming that the visual appearance of facial edema is round and approximately spherical, the normal directions of the corresponding region tend to be dispersed. A key point is selected and its target coordinate information obtained, the neighborhood key points are selected and their neighborhood coordinate information obtained, and the distances between the normal of the target coordinate information and the normals of all the neighborhood coordinate information are calculated; if the distances are sufficiently dispersed, but not excessively so, the region is approximately spherical, that is, facial edema. The cosine distance can be used as the distance measure, and the cosine mean and cosine variance are then calculated from the cosine distances as the measure of the degree of dispersion. If the normal of the key point is substantially co-directional with the normals of the neighborhood key points, the cosine distance is approximately 0, the variance is approximately 0, and the region is approximately planar; if the normal of the key point differs significantly in direction from the normals of the neighborhood key points, the cosine distance is relatively large and the variance may be large, and the region resembles a corner or edge.
For example, as shown in fig. 4, the points marked by gray circles are key points and the points marked by white circles are neighborhood face key points. For facial swelling near the cheekbones of the right face, the selected key point and its 26 neighborhood key points are examined. The key point and the neighborhood key points form 26 pairs of normals, and the cosine distance, i.e. the included angle, of each pair of normals, theta1, theta2, …, theta26, is calculated, where theta_i = &lt;normal_center, normal_neighbor_i&gt;, &lt; &gt; denotes the inner product, normal_center denotes the normal direction of the key point, and normal_neighbor_i denotes the normal direction of the i-th neighborhood key point. The normal of a key point is obtained as the vector sum of the normals of its surrounding faces; each face normal is obtained as the cross product of two edge vectors; and the edge vectors are obtained as differences of coordinate information between points, i.e. between the key point and each neighborhood key point. After theta1, theta2, …, theta26 are obtained, the mean and variance of the 26 theta values are calculated.
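The normal-based dispersion measure just described can be sketched directly: face normals from cross products of edge vectors, key-point normals as the normalized vector sum of adjacent face normals, then the angle between the centre normal and each neighbourhood normal. Function and variable names are ours; this is a reconstruction of the described computation, not the application's implementation.

```python
import numpy as np

def vertex_normal(vertices, faces, v_idx):
    """Normal of a key point: normalized vector sum of adjacent face normals."""
    n = np.zeros(3)
    for f in faces:
        if v_idx in f:
            a, b, c = (vertices[i] for i in f)
            n += np.cross(b - a, c - a)  # face normal from two edge vectors
    return n / np.linalg.norm(n)

def dispersion(vertices, faces, center_idx, neighbor_idxs):
    """Mean and variance of the angles theta_i between the centre key point's
    normal and each neighbourhood key point's normal."""
    nc = vertex_normal(vertices, faces, center_idx)
    thetas = []
    for j in neighbor_idxs:
        nj = vertex_normal(vertices, faces, j)
        cos = float(np.clip(np.dot(nc, nj), -1.0, 1.0))
        thetas.append(np.arccos(cos))  # included angle of the normal pair
    return float(np.mean(thetas)), float(np.var(thetas))
```

On a flat patch all normals coincide, so both statistics are near zero (the "approximately planar" case); on a spherical patch they grow with curvature.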
Alternatively, to increase robustness to noise, when calculating the mean and variance of the 26 theta values, the maximum and minimum of the 26 theta values may first be removed and the mean and variance calculated from the remainder.
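The trimming step above amounts to simple trimmed statistics, sketched here (helper name is ours):

```python
import numpy as np

def trimmed_stats(thetas):
    """Mean and variance after dropping the single largest and smallest theta,
    blunting the influence of a noisy normal estimate."""
    t = np.sort(np.asarray(thetas, dtype=float))[1:-1]  # drop min and max
    return float(t.mean()), float(t.var())
```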
Step S5: and determining the facial dysmorphism type of the target object according to the dispersion degree.
For example, in the facial inspection of traditional Chinese medicine diagnosis, facial abnormalities can be classified into four categories, namely facial swelling, cheek swelling, deviation of the mouth and eyes, and facial distortion. The target region geometric features are selected differently for each facial abnormality, the degree of dispersion calculated from the target region geometric features differs, and the condition for judging each facial abnormality differs accordingly.
For example, when the two sides of the mouth are selected as the target regions, the cosine mean and cosine variance of each side are calculated; when the difference between the cosine means, or between the cosine variances, of the two sides is larger than a certain threshold value, the symmetry of the two sides of the mouth is poor, and a facial abnormality including deviation of the mouth and eyes can be judged.
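The symmetry test can be sketched as a threshold comparison of the per-side dispersion statistics. The threshold values below are illustrative placeholders, not values from the application.

```python
def poor_symmetry(left_stats, right_stats, mean_thr=0.3, var_thr=0.1):
    """Compare (cosine mean, cosine variance) of the left and right regions.

    Returns True when either statistic differs between the sides by more
    than its threshold, suggesting deviation of the mouth and eyes.
    Thresholds are hypothetical and would be tuned on clinical data.
    """
    d_mean = abs(left_stats[0] - right_stats[0])
    d_var = abs(left_stats[1] - right_stats[1])
    return d_mean > mean_thr or d_var > var_thr
```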
In some embodiments, the terminal device stores an association relationship between facial abnormality categories and degrees of dispersion, and determining the facial abnormality category of the target object according to the degree of dispersion includes: determining the facial abnormality category of the target object according to the degree of dispersion and the association relationship, and outputting the facial abnormality category.
Facial abnormalities are classified according to actual requirements based on the facial abnormality pictures accumulated in clinical work and on experience. From the facial abnormality pictures under each facial abnormality category, the numerical range of the degree of dispersion corresponding to that category is obtained, and the association relationship between facial abnormality categories and degrees of dispersion is built and stored in the terminal device; the association relationship is continuously adjusted and optimized as clinical data increase. When facial abnormality detection is performed on the face image of the target object, all facial abnormality categories in the face image need to be detected, and the facial abnormality category of the target object is then judged.
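The stored association relationship can be sketched as a lookup table mapping dispersion ranges to categories. The category names echo the examples above, but the numeric ranges are made-up placeholders; a real deployment would fit them to accumulated clinical data.

```python
# Hypothetical association table: category -> half-open dispersion range.
ASSOCIATIONS = {
    "normal": (0.0, 0.1),
    "facial swelling": (0.1, 0.4),
    "facial distortion": (0.4, 1.0),
}

def classify(dispersion):
    """Return the first category whose stored range contains the dispersion."""
    for category, (lo, hi) in ASSOCIATIONS.items():
        if lo <= dispersion < hi:
            return category
    return "unknown"
```

Adjusting the association relationship as clinical data grow then amounts to refitting the range boundaries in this table.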
Referring to fig. 5, fig. 5 is a facial profile recognition device 200 according to an embodiment of the present application, which is applied to a terminal device, where the facial profile recognition device 200 includes:
the data acquisition module 201 is configured to acquire a face image of a target object, and input the face image to a three-dimensional face reconstruction network to obtain a facial geometric feature of the target object, where the facial geometric feature at least includes coordinate information of a plurality of preset face key points and a plurality of face patches, each face patch is determined by coordinate information of three adjacent face key points, and the plurality of face patches form a three-dimensional face model after reconstruction of the target object.
The data processing module 202 is configured to determine, from the facial geometric features, a target region geometric feature corresponding to a region to be analyzed in a face image of the target object.
The data collection module 203 is configured to determine, from coordinate information of a face key point of the geometric feature of the target area, target coordinate information corresponding to the target face key point, and determine, from the geometric feature of the face of the target object, neighborhood coordinate information corresponding to a neighborhood face key point according to the target coordinate information, where the neighborhood face key point is adjacent to the target face key point.
The data calculation module 204 is configured to calculate distance information between the normal line of the target coordinate information and the normal line of each of the neighborhood coordinate information, and determine the degree of dispersion of the face key points in the geometric feature of the target region according to the distance information.
And the data analysis module 205 is configured to determine a facial dysmorphism category of the target object according to the dispersion degree.
In some embodiments, the data acquisition module 201 performs, in acquiring a face image of a target object and inputting the face image into a three-dimensional face reconstruction network, obtaining a facial geometry of the target object:
Receiving a face image of a target object sent by an image acquisition device, wherein the face image is an image obtained by the image acquisition device through acquiring an object picture of the target object and intercepting a face region of the target object from the object picture;
and inputting the face image into a three-dimensional face reconstruction network to obtain the facial geometric characteristics of the target object.
In some embodiments, the data acquisition module 201 performs, in inputting the face image into a three-dimensional face reconstruction network, obtaining a facial geometric feature of the target object:
inputting the face image to an image feature extraction network of the three-dimensional face reconstruction network, obtaining two-dimensional feature information of the face image and initializing a three-dimensional face model for the face image;
and continuously adjusting the three-dimensional face model by utilizing the two-dimensional characteristic information based on a graph convolution neural network of the three-dimensional face reconstruction network to obtain a target three-dimensional face model, wherein the target three-dimensional face model is composed of facial geometric characteristics of the target object.
In some embodiments, the data acquisition module 201 further performs, when acquiring a face image of a target object and inputting the face image into a three-dimensional face reconstruction network, obtaining a facial geometric feature of the target object:
Obtaining an object picture of a target object from an image acquisition device, and sending the object picture to the terminal equipment;
after the object picture is received, determining a face area of a target object from the object picture, and intercepting the face area to obtain a face image of the target object;
and inputting the face image into a three-dimensional face reconstruction network to obtain the facial geometric characteristics of the target object.
In some embodiments, the data collection module 203 performs, in a process of determining, from coordinate information of face key points of the geometric feature of the target region, target coordinate information corresponding to a target face key point, and determining, from the geometric feature of the face of the target object, neighborhood coordinate information corresponding to a neighborhood face key point according to the target coordinate information:
determining first target coordinate information corresponding to a first target face key point from the coordinate information of the face key point of the geometric feature of the target area;
determining first neighborhood coordinate information corresponding to a first neighborhood face key point from the face geometric features of the target object according to the first target coordinate information;
determining second target coordinate information corresponding to a second target face key point from the coordinate information of the face key point of the geometric feature of the target area;
Determining second neighborhood coordinate information corresponding to a second neighborhood face key point from the facial geometric features of the target object according to the second target coordinate information;
and forming the first target coordinate information corresponding to the first target face key point and the second target coordinate information corresponding to the second target face key point into target coordinate information corresponding to the target face key point.
In some embodiments, the data calculation module 204 performs, in calculating distance information between the normal line of the target coordinate information and the normal line of each of the neighborhood coordinate information, and determining the degree of dispersion of the face key points in the geometric feature of the target region according to the distance information:
calculating cosine distances between the normal of the target coordinate information and the normal of each neighborhood coordinate information, calculating a cosine mean value and a cosine variance corresponding to the cosine distances, and taking the cosine mean value and the cosine variance as the distance information;
and calculating the dispersion degree of the face key points in the geometric characteristics of the target area according to the distance information and the preset value.
In some embodiments, the terminal device stores an association relationship between the facial profile class and the dispersion degree, and the data analysis module 205 performs, in determining the facial profile class of the target object according to the dispersion degree:
and determining the facial abnormality category of the target object according to the degree of dispersion and the association relationship, and outputting the facial abnormality category.
It should be noted that, for convenience and brevity of description, specific working processes of the above-described apparatus may refer to corresponding processes in the foregoing facial profile recognition method embodiments, and are not described herein again.
Referring to fig. 6, fig. 6 is a schematic block diagram of a structure of a terminal device according to an embodiment of the present application.
As shown in fig. 6, the terminal device 300 includes a processor 301 and a memory 302, the processor 301 and the memory 302 being connected by a bus 303, such as an I2C (Inter-integrated Circuit) bus.
In particular, the processor 301 is used to provide computing and control capabilities, supporting the operation of the entire server. The processor 301 may be a central processing unit (Central Processing Unit, CPU), the processor 301 may also be other general purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), field-programmable gate arrays (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, or the like. Wherein the general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
Specifically, the memory 302 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a U-disk, a removable hard disk, or the like.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of a portion of the structure associated with an embodiment of the present application and is not intended to limit the terminal device to which an embodiment of the present application is applied, and that a particular terminal device may include more or fewer components than shown, or may combine certain components, or may have a different arrangement of components.
The processor 301 is configured to execute a computer program stored in the memory, and implement the facial profile recognition method provided in any one of the embodiments of the present application when the computer program is executed.
In some embodiments, the processor 301 is configured to run a computer program stored in a memory, apply to a terminal device, and implement the following steps when executing the computer program:
acquiring a face image of a target object, inputting the face image into a three-dimensional face reconstruction network to obtain a face geometric feature of the target object, wherein the face geometric feature at least comprises coordinate information of a plurality of preset face key points and a plurality of face patches, each face patch is determined by the coordinate information of three adjacent face key points, and the face patches form a three-dimensional face model reconstructed by the target object;
Determining the geometric characteristics of a target area corresponding to an area to be analyzed in a face image of the target object from the geometric characteristics of the face;
determining target coordinate information corresponding to a target face key point from coordinate information of the face key point of the geometric feature of the target region, and determining neighborhood coordinate information corresponding to a neighborhood face key point from the geometric feature of the face of the target object according to the target coordinate information, wherein the neighborhood face key point is adjacent to the target face key point;
calculating distance information between the normal line of the target coordinate information and the normal line of each neighborhood coordinate information, and determining the dispersion degree of the face key points in the geometric features of the target area according to the distance information;
and determining the facial dysmorphism type of the target object according to the dispersion degree.
In some embodiments, the processor 301 performs, in acquiring a face image of a target object and inputting the face image into a three-dimensional face reconstruction network, obtaining a facial geometry of the target object:
receiving a face image of a target object sent by an image acquisition device, wherein the face image is an image obtained by the image acquisition device through acquiring an object picture of the target object and intercepting a face region of the target object from the object picture;
And inputting the face image into a three-dimensional face reconstruction network to obtain the facial geometric characteristics of the target object.
In some embodiments, the processor 301 performs, in inputting the face image into a three-dimensional face reconstruction network, obtaining a facial geometric feature of the target object:
inputting the face image to an image feature extraction network of the three-dimensional face reconstruction network, obtaining two-dimensional feature information of the face image and initializing a three-dimensional face model for the face image;
and continuously adjusting the three-dimensional face model by utilizing the two-dimensional characteristic information based on a graph convolution neural network of the three-dimensional face reconstruction network to obtain a target three-dimensional face model, wherein the target three-dimensional face model is composed of facial geometric characteristics of the target object.
In some embodiments, the processor 301 further performs, in the process of acquiring the face image of the target object and inputting the face image into the three-dimensional face reconstruction network to obtain the facial geometric feature of the target object:
obtaining an object picture of a target object from an image acquisition device, and sending the object picture to the terminal equipment;
After the object picture is received, determining a face area of a target object from the object picture, and intercepting the face area to obtain a face image of the target object;
and inputting the face image into a three-dimensional face reconstruction network to obtain the facial geometric characteristics of the target object.
In some embodiments, the processor 301 determines, from the coordinate information of the face key points of the geometric feature of the target region, target coordinate information corresponding to the face key points of the target, and determines, from the geometric feature of the face of the target object, neighborhood coordinate information corresponding to the neighborhood face key points according to the target coordinate information, performing:
determining first target coordinate information corresponding to a first target face key point from the coordinate information of the face key point of the geometric feature of the target area;
determining first neighborhood coordinate information corresponding to a first neighborhood face key point from the face geometric features of the target object according to the first target coordinate information;
determining second target coordinate information corresponding to a second target face key point from the coordinate information of the face key point of the geometric feature of the target area;
Determining second neighborhood coordinate information corresponding to a second neighborhood face key point from the facial geometric features of the target object according to the second target coordinate information;
and forming the first target coordinate information corresponding to the first target face key point and the second target coordinate information corresponding to the second target face key point into target coordinate information corresponding to the target face key point.
In some embodiments, the processor 301 performs, in calculating distance information between the normal line of the target coordinate information and the normal line of each of the neighborhood coordinate information, and determining the degree of dispersion of the face key points in the geometric feature of the target region according to the distance information:
calculating cosine distances between the normal of the target coordinate information and the normal of each neighborhood coordinate information, calculating a cosine mean value and a cosine variance corresponding to the cosine distances, and taking the cosine mean value and the cosine variance as the distance information;
and calculating the dispersion degree of the face key points in the geometric characteristics of the target area according to the distance information and the preset value.
In some embodiments, the terminal device stores an association relationship between the facial profile class and the degree of dispersion, and the processor 301 performs, in determining the facial profile class of the target object according to the degree of dispersion:
and determining the facial abnormality category of the target object according to the degree of dispersion and the association relationship, and outputting the facial abnormality category.
It should be noted that, for convenience and brevity of description, a specific working process of the terminal device described above may refer to a corresponding process in the foregoing embodiment of the facial profile recognition method, which is not described herein again.
The embodiment of the application also provides a storage medium for computer readable storage, the storage medium storing one or more programs, the one or more programs being executable by one or more processors to implement the steps of any of the facial profile recognition methods provided in the embodiments of the present application.
The storage medium may be an internal storage unit of the terminal device of the foregoing embodiment, for example, a memory of the terminal device. The storage medium may also be an external storage device of the terminal device, such as a plug-in hard disk provided on the terminal device, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, functional modules/units in the apparatus disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware embodiment, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as known to those skilled in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. 
Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
It should be understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations. It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments. The present application is not limited to the above embodiments, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the scope of the present application, and these modifications and substitutions are intended to be included in the scope of the present application. Therefore, the protection scope of the application is subject to the protection scope of the claims.

Claims (10)

1. A facial profile recognition method applied to a terminal device, the method comprising:
acquiring a face image of a target object and inputting the face image into a three-dimensional face reconstruction network to obtain facial geometric features of the target object, wherein the facial geometric features comprise at least coordinate information of a plurality of preset face key points and a plurality of face patches, each face patch is determined by the coordinate information of three adjacent face key points, and the face patches form a three-dimensional face model reconstructed for the target object;
determining, from the facial geometric features, target region geometric features corresponding to a region to be analyzed in the face image of the target object;
determining target coordinate information corresponding to a target face key point from the coordinate information of the face key points of the target region geometric features, and determining, from the facial geometric features of the target object according to the target coordinate information, neighborhood coordinate information corresponding to neighborhood face key points, wherein the neighborhood face key points are adjacent to the target face key point;
calculating distance information between the normal of the target coordinate information and the normal of each piece of neighborhood coordinate information, and determining a degree of dispersion of the face key points in the target region geometric features according to the distance information;
and determining the facial dysmorphism category of the target object according to the degree of dispersion.
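Purely as an illustrative sketch, and not the patented implementation, the normal-based dispersion computation outlined in claim 1 might look like the following. The patch-normal formula and the summary of cosine distances as (mean, variance) are assumptions; the patent does not disclose its exact formulas.

```python
import numpy as np

def patch_normal(p0, p1, p2):
    # Unit normal of a triangular face patch determined by the coordinate
    # information of three adjacent face key points.
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def dispersion(target_normal, neighborhood_normals):
    # Cosine distance between the target normal and each neighborhood normal,
    # summarized as (mean, variance) -- one plausible reading of the claim's
    # "distance information".
    d = np.array([1.0 - float(np.dot(target_normal, n)) for n in neighborhood_normals])
    return d.mean(), d.var()

# Toy check: in a perfectly flat region all normals coincide, so the
# dispersion of the key points is zero.
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
n_t = patch_normal(*tri)
mean_d, var_d = dispersion(n_t, [n_t, n_t, n_t])
```

Intuitively, a smooth facial region yields near-parallel normals and low dispersion, while a dysmorphic region yields divergent normals and high dispersion.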
2. The method according to claim 1, wherein the acquiring a face image of a target object and inputting the face image into a three-dimensional face reconstruction network to obtain facial geometric features of the target object comprises:
receiving a face image of the target object sent by an image acquisition device, wherein the face image is obtained by the image acquisition device capturing an object picture of the target object and cropping the face region of the target object from the object picture;
and inputting the face image into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object.
3. The method according to claim 2, wherein the inputting the face image into a three-dimensional face reconstruction network to obtain facial geometric features of the target object comprises:
inputting the face image into an image feature extraction network of the three-dimensional face reconstruction network to obtain two-dimensional feature information of the face image and to initialize a three-dimensional face model for the face image;
and iteratively adjusting the three-dimensional face model using the two-dimensional feature information, based on a graph convolutional neural network of the three-dimensional face reconstruction network, to obtain a target three-dimensional face model, wherein the target three-dimensional face model is composed of the facial geometric features of the target object.
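The iterative refinement in claim 3 cannot be reproduced without the trained network, but its structure can be gestured at with a hypothetical skeleton. The blending weights, the neighbor-averaging rule, and the zeroed feature-driven offset below are all invented for illustration:

```python
import numpy as np

def refine_mesh(vertices, adjacency, steps=3):
    # Hypothetical GCN-style refinement loop: each step blends every vertex
    # with the mean of its mesh neighbors; a real network would also add an
    # offset regressed from the 2D image features (stubbed out as zero here).
    for _ in range(steps):
        neighbor_mean = adjacency @ vertices / adjacency.sum(axis=1, keepdims=True)
        offset = np.zeros_like(vertices)  # stand-in for the learned offset
        vertices = 0.5 * vertices + 0.5 * neighbor_mean + offset
    return vertices

# Two mutually adjacent vertices converge to their common midpoint.
adj = np.array([[0.0, 1.0], [1.0, 0.0]])
verts = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
refined = refine_mesh(verts, adj)
```

The point of the sketch is only the update pattern: message passing over the mesh adjacency, repeated until the model converges to the target three-dimensional face model.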
4. The method of claim 1, wherein the acquiring a face image of a target object and inputting the face image into a three-dimensional face reconstruction network to obtain facial geometric features of the target object further comprises:
obtaining an object picture of the target object from an image acquisition device, the object picture being sent to the terminal device;
after the object picture is received, determining the face region of the target object in the object picture and cropping the face region to obtain the face image of the target object;
and inputting the face image into the three-dimensional face reconstruction network to obtain the facial geometric features of the target object.
5. The method according to claim 1, wherein the determining target coordinate information corresponding to a target face key point from the coordinate information of the face key points of the target region geometric features, and determining neighborhood coordinate information corresponding to neighborhood face key points from the facial geometric features of the target object according to the target coordinate information, comprises:
determining first target coordinate information corresponding to a first target face key point from the coordinate information of the face key points of the target region geometric features;
determining first neighborhood coordinate information corresponding to a first neighborhood face key point from the facial geometric features of the target object according to the first target coordinate information;
determining second target coordinate information corresponding to a second target face key point from the coordinate information of the face key points of the target region geometric features;
determining second neighborhood coordinate information corresponding to a second neighborhood face key point from the facial geometric features of the target object according to the second target coordinate information;
and combining the first target coordinate information corresponding to the first target face key point and the second target coordinate information corresponding to the second target face key point into the target coordinate information corresponding to the target face key points.
6. The method according to claim 1, wherein the calculating distance information between the normal of the target coordinate information and the normal of each piece of neighborhood coordinate information, and determining the degree of dispersion of the face key points in the target region geometric features according to the distance information, comprises:
calculating cosine distances between the normal of the target coordinate information and the normal of each piece of neighborhood coordinate information, calculating a cosine mean and a cosine variance of the cosine distances, and taking the cosine mean and the cosine variance as the distance information;
and calculating the degree of dispersion of the face key points in the target region geometric features according to the distance information and a preset value.
7. The method according to claim 1, wherein the terminal device stores an association relationship between facial dysmorphism categories and degrees of dispersion, and the determining the facial dysmorphism category of the target object according to the degree of dispersion comprises:
determining the facial dysmorphism category of the target object according to the degree of dispersion and the association relationship, and outputting the facial dysmorphism category.
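Claims 6 and 7 together describe turning the cosine statistics into a category via a stored association relationship. The following sketch is illustrative only: the formula combining mean, variance, and the preset value, and the thresholds and labels in the association table, are all invented, since the patent discloses none of them.

```python
import numpy as np

# Invented association table between degrees of dispersion and dysmorphism
# categories; claim 7 stores such a relation on the terminal device, but the
# actual bounds and labels are not disclosed.
ASSOCIATION = [(0.05, "normal"), (0.15, "mild"), (float("inf"), "severe")]

def dispersion_degree(cosine_distances, preset=1.0):
    # Combine the cosine mean and cosine variance with a preset value; the
    # patent does not specify the formula, so scaling (mean + variance) by
    # the preset is just one plausible choice.
    return preset * (np.mean(cosine_distances) + np.var(cosine_distances))

def classify(degree):
    # Look up the smallest upper bound that the degree falls under.
    for upper_bound, label in ASSOCIATION:
        if degree < upper_bound:
            return label

degree = dispersion_degree([0.0, 0.02, 0.04])
label = classify(degree)
```

With tightly clustered normals the cosine distances stay small, so the degree falls in the lowest band of the table and the mildest category is returned.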
8. A facial profile recognition device, comprising:
the data acquisition module is configured to acquire a face image of a target object and input the face image into a three-dimensional face reconstruction network to obtain facial geometric features of the target object, wherein the facial geometric features comprise at least coordinate information of a plurality of preset face key points and a plurality of face patches, each face patch is determined by the coordinate information of three adjacent face key points, and the face patches form a three-dimensional face model reconstructed for the target object;
the data processing module is configured to determine, from the facial geometric features, target region geometric features corresponding to a region to be analyzed in the face image of the target object;
the data collection module is configured to determine target coordinate information corresponding to a target face key point from the coordinate information of the face key points of the target region geometric features, and to determine, from the facial geometric features of the target object according to the target coordinate information, neighborhood coordinate information corresponding to neighborhood face key points, the neighborhood face key points being adjacent to the target face key point;
the data calculation module is configured to calculate distance information between the normal of the target coordinate information and the normal of each piece of neighborhood coordinate information, and to determine the degree of dispersion of the face key points in the target region geometric features according to the distance information;
and the data analysis module is configured to determine the facial dysmorphism category of the target object according to the degree of dispersion.
9. A terminal device, characterized in that the terminal device comprises a processor and a memory;
the memory is used for storing a computer program;
the processor is configured to execute the computer program and to implement the facial profile recognition method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer-readable storage medium storing a computer program which, when executed by one or more processors, causes the one or more processors to implement the facial profile recognition method according to any one of claims 1-7.
CN202310722579.1A 2023-06-16 2023-06-16 Facial profile recognition method, device, terminal and storage medium Pending CN116705303A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310722579.1A CN116705303A (en) 2023-06-16 2023-06-16 Facial profile recognition method, device, terminal and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310722579.1A CN116705303A (en) 2023-06-16 2023-06-16 Facial profile recognition method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN116705303A (en) 2023-09-05

Family

ID=87843009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310722579.1A Pending CN116705303A (en) 2023-06-16 2023-06-16 Facial profile recognition method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN116705303A (en)

Similar Documents

Publication Publication Date Title
US11120254B2 (en) Methods and apparatuses for determining hand three-dimensional data
US11747898B2 (en) Method and apparatus with gaze estimation
EP4075324A1 (en) Face recognition method and face recognition device
WO2020000908A1 (en) Method and device for face liveness detection
Haindl et al. Unsupervised detection of non-iris occlusions
US20170076446A1 (en) System and method for assessing wound
US20160162673A1 (en) Technologies for learning body part geometry for use in biometric authentication
WO2020087838A1 (en) Blood vessel wall plaque recognition device, system and method, and storage medium
CN108985210A (en) A kind of Eye-controlling focus method and system based on human eye geometrical characteristic
US11625954B2 (en) Method and apparatus with liveness testing
WO2016069463A2 (en) A system and method for the analysis and transmission of data, images and video relating to mammalian skin damage conditions
US20120147167A1 (en) Facial recognition using a sphericity metric
CN110069989B (en) Face image processing method and device and computer readable storage medium
US12026600B2 (en) Systems and methods for target region evaluation and feature point evaluation
JP6071002B2 (en) Reliability acquisition device, reliability acquisition method, and reliability acquisition program
CN111091075A (en) Face recognition method and device, electronic equipment and storage medium
US20230022554A1 (en) Automatic pressure ulcer measurement
WO2021114623A1 (en) Method, apparatus, computer device, and storage medium for identifying persons having deformed spinal columns
KR102434703B1 (en) Method of processing biometric image and apparatus including the same
KR101961462B1 (en) Object recognition method and the device thereof
CN109087240B (en) Image processing method, image processing apparatus, and storage medium
CN111210423A (en) Breast contour extraction method, system and device of NIR image
CN112800966B (en) Sight tracking method and electronic equipment
CN106406507B (en) Image processing method and electronic device
US20230100976A1 (en) Assessment of facial paralysis and gaze deviation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination