CN115312200A - Method and system for predicting physiological and biochemical indexes and constructing prediction model


Info

Publication number
CN115312200A
CN115312200A (application CN202110491808.4A)
Authority
CN
China
Prior art keywords
physiological
face
biochemical
data set
biochemical index
Prior art date
Legal status
Pending
Application number
CN202110491808.4A
Other languages
Chinese (zh)
Inventor
汪思佳 (Wang Sijia)
彭倩倩 (Peng Qianqian)
章吟奇 (Zhang Yinqi)
刘宇 (Liu Yu)
Current Assignee
Shanghai Institute of Nutrition and Health of CAS
Original Assignee
Shanghai Institute of Nutrition and Health of CAS
Priority date
Filing date
Publication date
Application filed by Shanghai Institute of Nutrition and Health of CAS filed Critical Shanghai Institute of Nutrition and Health of CAS
Priority to CN202110491808.4A priority Critical patent/CN115312200A/en
Publication of CN115312200A publication Critical patent/CN115312200A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Medical Informatics (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Epidemiology (AREA)
  • Pathology (AREA)
  • Databases & Information Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to physiological and biochemical index prediction technology and discloses a method and a system for predicting physiological and biochemical indexes and for constructing a prediction model. The method for constructing the prediction model comprises: obtaining data sets of a plurality of three-dimensional sample faces, where each data set comprises at least one physiological and biochemical index corresponding to each three-dimensional sample face and a point cloud data set of a preset face area corresponding to each physiological and biochemical index; and analyzing and processing the data sets to construct a physiological and biochemical index prediction model. The method and the device can predict the physiological and biochemical indexes of a user more accurately, and provide a basis for individual self-health monitoring and disease prevention.

Description

Method and system for predicting physiological and biochemical indexes and constructing prediction model
Technical Field
The application relates to a physiological and biochemical index prediction technology, in particular to a physiological and biochemical index prediction technology based on a human face three-dimensional image.
Background
The face can reflect much information about an individual, such as ancestry, health status, psychology and mood. With the development of 3D imaging and image processing technology, facial images are used in many fields, such as rare disease diagnosis, skull development, psychology, and human health and aging [3-5]. Several studies have analysed facial images associated with diseases [6-14]. For example, patients with Williams syndrome often have distinctive facial features, such as full cheeks, an upturned nose and a broad mouth. After analysing the average faces of 130 Williams syndrome patients and 317 healthy people, Peter Hammond et al. found that the average face of the patient group had thinner cheeks and a protruding lower lip, consistent with the facial features observed in clinical diagnosis. They then used Closest Mean, SVM and LDA to classify four regions (whole face, eyes, nose and mouth), reaching a classification accuracy of more than 90% [11]. In another study, also led by Peter Hammond, average-face analysis of 43 Wolf-Hirschhorn syndrome patients and 141 healthy people revealed clinically detectable features in this group, such as wide eye spacing and protruding eye sockets; the accuracy of classification prediction for the whole face, eyes, nose and mouth using the three methods Closest Mean, SVM and LDA exceeded 98% [10].
In addition, many researchers believe that an individual's intrinsic personality is also reflected to some extent in the face. For example, using 3D facial images, Husele et al. (2017) showed that multiple dimensions of personality traits correlate to different degrees with facial features [15]. In the field of human health and aging, Wen and Guo (2013) found, using 2D facial images, that BMI is related to multiple facial features [16]. Stephen et al. (2017) found that 2D facial images can predict BMI, body fat percentage and blood pressure well using GMM [17]. Chen Weiyang et al. (2015) estimated individual ages from 3D facial images and found that the average predicted age differed from the actual age by ±6 years; on this basis they defined a fast-aging group and a slow-aging group, and through correlation analysis with multiple biochemical indexes found that the predicted age is supported at the biochemical index level [3]. Compared with traditional (manual) face measurement, 2D and 3D images are quick to acquire and easy to store [18]. In recent years, 3D face imaging systems have developed rapidly. Systems such as 3dMD and 3D VECTRA H1 are convenient to use, safe and fast at image acquisition, and are increasingly favoured in 3D image research. A three-dimensional photogrammetric system (3dMD) simulates the principle of binocular vision: images are taken from two or more angles by cameras, and a software algorithm processes and combines the angle images to form a three-dimensional shape with depth, length and width information [19]. The 3D VECTRA H1 is a handheld imaging system suitable for flexible sampling environments. It uses digital close-range photogrammetry: three images of the same object are taken at different positions and directions, and accurate three-dimensional coordinates of the face are obtained after image processing, matching, analysis and calculation; the measurement principle is triangulation, with a geometric measurement resolution of 0.8 mm. The 3D VECTRA H1 system does not need calibration; during collection, a volunteer stands in front of the device, the operator takes 3 pictures from different angles, and the VECTRA software automatically processes and synthesizes the 3D facial image. There is a need in the art for methods and systems that enable more efficient and accurate prediction of physiological and biochemical indicators.
Disclosure of Invention
The application aims to provide a method and a system for constructing a physiological and biochemical index prediction model and for predicting physiological and biochemical indexes, which can predict the physiological and biochemical indexes of a user more effectively and accurately and provide a basis for individual self-health monitoring and disease prevention.
The application discloses a method for constructing a physiological and biochemical index prediction model based on a human face three-dimensional image, which comprises the following steps of:
(a) Acquiring a data set of a plurality of three-dimensional sample faces, wherein the data set comprises at least one physiological and biochemical index corresponding to each three-dimensional sample face and a point cloud data set of a preset face area corresponding to each physiological and biochemical index;
(b) Analyzing and processing the data set to construct a physiological and biochemical index prediction model.
In a preferred embodiment, the step (b) further comprises the following sub-steps:
based on the data set, extracting a preset number of features using a feature extraction sub-model, and performing regression analysis between each of the preset number of features and each corresponding physiological and biochemical index, one by one, to obtain the significantly associated features for each physiological and biochemical index;
and establishing a prediction regression sub-model from each physiological and biochemical index and its significantly associated features, wherein the physiological and biochemical index prediction model comprises the feature extraction sub-model and the prediction regression sub-model.
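As a non-limiting illustration of this preferred embodiment, the following Python sketch shows one possible way to combine a PLS feature-extraction sub-model with one-by-one regression screening and a prediction regression sub-model; the library choices (scikit-learn, SciPy), the significance threshold and the variable names are assumptions made for illustration and are not specified by this application.

```python
# Hedged sketch: PLS feature-extraction sub-model + per-feature significance screening
# + prediction regression sub-model, as outlined in the preferred embodiment above.
import numpy as np
from scipy import stats
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression

def build_indicator_model(X_points, y_indicator, n_features=150, alpha=0.05):
    """X_points: (n_samples, n_coords) flattened face point clouds; y_indicator: (n_samples,)."""
    pls = PLSRegression(n_components=n_features)       # feature extraction sub-model
    pls.fit(X_points, y_indicator)
    scores = pls.transform(X_points)                   # (n_samples, n_features)

    # Regress each extracted feature against the indicator, one by one,
    # and keep only the significantly associated features.
    keep = [i for i in range(n_features)
            if stats.linregress(scores[:, i], y_indicator).pvalue < alpha]

    # Prediction regression sub-model built on the significant features.
    reg = LinearRegression().fit(scores[:, keep], y_indicator)
    return pls, keep, reg

def predict_indicator(pls, keep, reg, X_new):
    return reg.predict(pls.transform(X_new)[:, keep])
```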
In a preferred embodiment, the step (b) further comprises the following sub-steps:
based on the data set, constructing and training a deep learning model with the at least one physiological and biochemical index as output sample data and the point cloud data set of the preset face area corresponding to each physiological and biochemical index as input sample data, so as to obtain the physiological and biochemical index prediction model.
In a preferred example, the data set of the plurality of three-dimensional sample faces further includes individual attribute information corresponding to each three-dimensional sample face.
In a preferred embodiment, the individual attribute information is selected from one or more of the following groups:
age, sex, height, weight, waist circumference, hip circumference.
In a preferred embodiment, the at least one physiological-biochemical indicator includes one or more of:
triglyceride TG, blood glucose GLU, creatinine CREA, uric acid UA, glutamic pyruvic transaminase ALT.
In a preferred embodiment, the feature extraction submodel is a partial least squares regression analysis model, a principal component analysis model, a support vector machine or a deep learning model.
In a preferred embodiment, the step (a) further includes the following sub-steps:
and preprocessing each three-dimensional sample face to obtain corresponding full-face point cloud data, wherein the point cloud data set of the preset face area is the full-face point cloud data.
In a preferred embodiment, the step (a) further includes the following sub-steps:
preprocessing each three-dimensional sample face to obtain corresponding full-face point cloud data;
and performing association analysis between each three-dimensional coordinate point in the full-face point cloud data and each corresponding physiological and biochemical index, and extracting the point cloud data set significantly associated with each physiological and biochemical index, wherein the point cloud data set of the preset face area is the extracted point cloud data set significantly associated with each physiological and biochemical index.
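As a hedged sketch of this sub-step, the following Python fragment selects the points of the full-face point cloud that are significantly associated with one physiological and biochemical index; the Bonferroni-style threshold and the use of Pearson correlation are illustrative assumptions rather than requirements of this application.

```python
# Illustrative per-point association screening on registered full-face point clouds.
import numpy as np
from scipy import stats

def select_significant_points(points, indicator, p_threshold=None):
    """points: (n_samples, n_points, 3) registered point clouds; indicator: (n_samples,)."""
    n_samples, n_points, _ = points.shape
    if p_threshold is None:
        p_threshold = 0.05 / (n_points * 3)        # correct for all tested coordinates
    significant = np.zeros(n_points, dtype=bool)
    for j in range(n_points):
        for axis in range(3):
            _, p = stats.pearsonr(points[:, j, axis], indicator)
            if p < p_threshold:
                significant[j] = True              # keep the point if any axis is associated
                break
    return significant                             # mask defining the preset face area
```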
The application also discloses a method for predicting physiological and biochemical indexes based on the human face three-dimensional image, which comprises the following steps:
constructing a physiological and biochemical index prediction model according to the construction method described above;
and acquiring a point cloud data set of the three-dimensional image of the face to be predicted, inputting the acquired point cloud data set into the constructed physiological and biochemical index prediction model, and outputting to obtain the physiological and biochemical index of the face to be predicted.
The application also discloses a system for predicting physiological and biochemical indexes based on the human face three-dimensional image, which comprises the following steps:
the acquisition module is used for acquiring a three-dimensional image of the face to be predicted and converting the three-dimensional image into a corresponding point cloud data set;
a physiological and biochemical index prediction model module, wherein the physiological and biochemical index prediction model is constructed according to the construction method of any one of claims 1 to 7 and is used for inputting a point cloud data set of the three-dimensional image of the human face to be predicted to obtain the physiological and biochemical index of the three-dimensional image of the human face to be predicted;
and the output module is used for outputting the physiological and biochemical indexes of the three-dimensional image of the human face to be predicted.
The application also discloses a system for constructing a physiological and biochemical index model based on the human face three-dimensional image, which comprises the following steps:
a memory for storing computer executable instructions; and
a processor for implementing the steps in the construction method as described in the foregoing when executing the computer-executable instructions.
The application also discloses a system for predicting physiological and biochemical indexes based on the human face three-dimensional image, which comprises the following steps:
a memory for storing computer executable instructions; and
a processor for implementing the steps in the prediction method as described hereinbefore when executing the computer executable instructions.
The present application also discloses a computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, implement the steps in the method as described above.
The embodiment of the application at least comprises the following advantages and beneficial effects:
the method comprises the steps of analyzing and processing a plurality of data sets of three-dimensional sample faces, wherein the data sets comprise at least one physiological and biochemical index corresponding to each three-dimensional sample face and a point cloud data set of a preset face area corresponding to each physiological and biochemical index, and constructing a physiological and biochemical index prediction model.
Furthermore, the method screens features related to the physiological and biochemical indexes using support vector machines (SVM), principal component analysis (PCA), partial least squares regression (PLS), deep learning and the like, and establishes a prediction model of the physiological and biochemical indexes from the screened features.
Furthermore, through correlation analysis of part of physiological and biochemical indexes and 3D face image data, the remarkable correlation between the physiological and biochemical indexes and face features is found, a point cloud data set which is remarkably correlated with each physiological and biochemical index is extracted, the extracted point cloud data set which is remarkably correlated with each physiological and biochemical index is used as a point cloud data set of the preset face area, a physiological and biochemical index prediction model is constructed on the basis of the point cloud data set, and the prediction precision of the prediction model is improved.
Furthermore, based on the data sets of the multiple three-dimensional sample faces, a preset number of features are extracted by the feature extraction sub-model, and regression analysis is performed between each extracted feature and each corresponding physiological and biochemical index, one by one, to obtain the significantly associated features for each index; a physiological and biochemical index prediction model is then constructed from the prediction regression sub-model established from each index and its significantly associated features, further improving the prediction precision of the model.
Further, the data set adds the individual attribute information of the sample on the basis of the point cloud data, so that the prediction effect can be further increased.
It is to be understood that within the scope of the present invention, the above-described features of the present invention and those specifically described below (e.g., in the examples) may be combined with each other to form new or preferred embodiments.
Drawings
Fig. 1 shows a flow chart of a method for constructing a physiological and biochemical index prediction model based on a three-dimensional image of a human face according to a first embodiment of the present application.
Fig. 2 shows a schematic structural diagram of a prediction system based on physiological and biochemical indexes of a three-dimensional image of a human face according to a third embodiment of the present application.
FIG. 3 shows a schematic diagram of a 3D face coordinate system; the origin is a nose tip point, the x axis represents a direction in which the front face extends to the left and right sides with the nose tip point as the center, the y axis represents a direction in which the front face extends to the upper and lower sides (forehead and chin) with the nose tip point as the center, and the z axis represents a direction in which the front face is sunken and raised to the inner and outer sides.
Fig. 4 shows a process of physiological and biochemical indicator predictive modeling based on 3D image data.
Fig. 5 shows the correlation of facial features extracted based on partial least squares with systolic blood pressure (corrected for age, gender, BMI).
Fig. 6 shows the correlation of facial features extracted based on partial least squares with diastolic blood pressure (corrected for age, gender, BMI).
Fig. 7 shows the correlation of facial features extracted based on partial least squares with total cholesterol (corrected for age, gender, BMI).
Fig. 8 shows the correlation of facial features extracted based on partial least squares with high density lipoproteins (corrected for age, gender, BMI).
Fig. 9 shows the correlation of facial features extracted based on partial least squares with low density lipoproteins (corrected for age, gender, BMI).
Fig. 10 shows correlation of facial features extracted based on partial least squares with triglycerides (corrected for age, gender, BMI).
Fig. 11 shows the correlation of facial features extracted based on partial least squares with glucose (corrected for age, gender, BMI).
FIG. 12 shows the results of the PLS-based prediction models for 13 physiological and biochemical indicators in the discovery and validation sets.
FIG. 13 shows the results of the PCA-based prediction models for 13 physiological and biochemical indicators in the discovery and validation sets.
FIG. 14 shows the mapping of the features extracted by the PLS-based triglyceride prediction model onto the 3D face; A) mapping coefficients of the PLS-extracted features on the x-axis; B) mapping coefficients of the PLS-extracted features on the y-axis; C) mapping coefficients of the PLS-extracted features on the z-axis. The three-dimensional coordinate system takes the nose tip as the origin, and the arrows of the x, y and z axes indicate the positive directions.
Fig. 15 shows the P values of the triglyceride full-face association analysis; light pink marks points significantly associated in the discovery set (P < 1.54×10⁻⁶), dark pink marks points with a P value below 0.05 in the validation set, and brown marks points with a validation-set P value below the Bonferroni correction threshold.
FIG. 16 shows triglyceride full-face correlation analysis result effect values; where red represents a positive effect and blue represents a negative effect.
Figure 17 shows the P values of the total cholesterol full-face association analysis; light pink marks points significantly associated in the discovery set (P < 1.54×10⁻⁶), and dark pink marks points with a P value below 0.05 in the validation set. No points with validation-set P values below the Bonferroni correction threshold were found.
Fig. 18 shows the P values of the low-density lipoprotein full-face association analysis; light pink marks points significantly associated in the discovery set (P < 1.54×10⁻⁶), and dark pink marks points with a P value below 0.05 in the validation set. No points with validation-set P values below the Bonferroni correction threshold were found.
Fig. 19 shows causal inference between triglycerides and principal components extracted from 3D facial images.
FIG. 20 shows the modeling effect of PLS prediction based on FaWAS significant points.
FIG. 21 shows the modeling effect of PCA prediction based on FaWAS significant points.
Fig. 22 shows the prediction result of physiological and biochemical indicators of a 3D face image based on deep learning.
Detailed Description
In the following description, numerous technical details are set forth in order to provide a better understanding of the present application. However, it will be understood by those of ordinary skill in the art that the claimed embodiments may be practiced without these specific details and with various changes and modifications based on the following embodiments.
Term(s) for
As used herein, the term "physiological-biochemical indicator" is a combination of a physiological indicator and a biochemical indicator. The physiological and biochemical indexes reflect the rules and mechanisms of organism substance metabolism, energy conversion, growth and development and the like, and the regulation and control as well as the influence of the internal and external environmental conditions of the organism on the life activities of the organism, and are important manifestations of human health.
As used herein, the term "point cloud data" refers to the three-dimensional coordinates of points in a three-dimensional face image.
As used herein, the term "three-dimensional sample face" refers to the same as a three-dimensional or 3D facial image.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The first embodiment of the present application relates to a method for constructing a physiological and biochemical index prediction model based on a three-dimensional image of a human face, the flow of which is shown in fig. 1, and the method comprises the following steps:
step 101, acquiring a data set of a plurality of three-dimensional sample faces, wherein the data set comprises at least one physiological and biochemical index corresponding to each three-dimensional sample face and a point cloud data set of a preset face area corresponding to each physiological and biochemical index;
and 102, analyzing and processing the data set to construct a physiological and biochemical index prediction model.
The at least one physiological-biochemical indicator in step 101 may include one or more of the following:
triglyceride (TG), blood Glucose (GLU), creatinine (CREA), UREA (UREA), uric Acid (UA), glutamic-pyruvic transaminase (ALT), systolic Blood Pressure (SBP), diastolic Blood Pressure (DBP), total Cholesterol (TCHO), high-density lipoprotein (HDL), low-density lipoprotein (LDL), total Bilirubin (TBIL), apolipoprotein A1 (ApoA 1) apolipoprotein B (ApoB), glutamic-oxaloacetic transaminase (GOT), alkaline phosphatase (AKP), lactate Dehydrogenase (LDH), and the like.
The "point cloud data set of the preset face area" in step 101 may be a preset face point cloud data set at any position or in any predefined area. For example, each three-dimensional sample face may be preprocessed to obtain corresponding full-face point cloud data, and the point cloud data set of the preset face area is the full-face point cloud data. For another example, each three-dimensional sample face may be preprocessed to obtain corresponding full-face point cloud data, each three-dimensional coordinate point data in the full-face point cloud data and each corresponding physiological and biochemical index may be subjected to correlation analysis, a point cloud data set having a significant correlation with each physiological and biochemical index is extracted, and the point cloud data set of the preset face area is the extracted point cloud data set having a significant correlation with each physiological and biochemical index. As another example, a point cloud data set may be used that specifies a corresponding region for each physiological-biochemical indicator based on a pre-divided facial structure (e.g., without limitation, a face is divided into a plurality of regions according to facial organs, etc.).
Optionally, the data set of the plurality of three-dimensional sample faces in step 101 further includes individual attribute information corresponding to each three-dimensional sample face. The individual attribute information includes, for example and without limitation, one or more of the following: age, sex, height, weight, waist circumference, hip circumference, etc. This may enhance the prediction effect.

The implementation of step 102 is various. In one embodiment, step 102 is further implemented as: based on the data set (including the point cloud data set of the preset face area corresponding to each physiological and biochemical index, or including both that point cloud data set and the individual attribute information of the sample), extracting a preset number of features using a feature extraction sub-model, performing regression analysis between each extracted feature and each corresponding physiological and biochemical index one by one to obtain the significantly associated features for each index, and establishing a prediction regression sub-model from each physiological and biochemical index and its significantly associated features, wherein the physiological and biochemical index prediction model comprises the feature extraction sub-model and the prediction regression sub-model.

In another embodiment, step 102 is further implemented as: based on the data set, constructing and training a deep learning model with the at least one physiological and biochemical index as output sample data and the point cloud data set of the preset face area corresponding to each index as input sample data, and obtaining the physiological and biochemical index prediction model through training. It should be noted that the prediction model constructed according to the former embodiment performs better than the one constructed according to the latter embodiment and gives more accurate predictions, which is verified in the examples below.

In yet another embodiment, step 102 is further implemented as: based on the data set, constructing and training a deep learning model with the at least one physiological and biochemical index as output sample data and, as input sample data, the point cloud data set of the preset face area corresponding to each index together with the individual attribute information of the sample, and obtaining the physiological and biochemical index prediction model through training.
The feature extraction sub-model is, for example, but not limited to, a partial least squares regression analysis model, a principal component analysis model, a support vector machine, a deep learning model, or the like.
The second embodiment of the application relates to a method for predicting physiological and biochemical indexes based on a human face three-dimensional image, which comprises the following steps A and B:
in the step A, a physiological and biochemical index prediction model is constructed according to the construction method of the physiological and biochemical index prediction model based on the three-dimensional human face image in the first embodiment, namely, the technical details in the first embodiment can be applied to the step A in the embodiment;
and B, acquiring a point cloud data set of the three-dimensional image of the face to be predicted, inputting the acquired point cloud data set into the constructed physiological and biochemical index prediction model, and outputting to obtain the physiological and biochemical index of the face to be predicted.
A third embodiment of the present application relates to a system for predicting physiological and biochemical indicators based on a three-dimensional image of a human face, as shown in fig. 2, including:
and the acquisition module is used for acquiring the three-dimensional image of the face to be predicted and converting the three-dimensional image into a corresponding point cloud data set.
For example, the obtaining module is further configured to pre-process the three-dimensional image of the human face to be predicted to obtain corresponding full-face point cloud data. For another example, the obtaining module is further configured to extract a point cloud data set that has a significant association with each physiological and biochemical index in the full-face point cloud data of the three-dimensional image of the human face to be predicted, that is, a point cloud data set corresponding to the three-dimensional image of the human face to be predicted. For another example, the acquiring module is further configured to obtain a point cloud data set of a face region (for example, but not limited to, a face is divided into one or more regions according to facial organs) pre-specified for each physiological and biochemical indicator, that is, a point cloud data set corresponding to a three-dimensional image of a human face to be predicted.
The physiological and biochemical index prediction model module is used for inputting the point cloud data set of the three-dimensional image of the human face to be predicted to obtain the physiological and biochemical indexes of the three-dimensional image of the human face to be predicted, and the physiological and biochemical index prediction model is constructed according to the construction method of the physiological and biochemical index model based on the three-dimensional image of the human face, which is related to the first embodiment of the application, namely the technical details in the first embodiment can be applied to the embodiment.
And the output module is used for outputting the physiological and biochemical indexes of the three-dimensional image of the human face to be predicted.
Optionally, the physiological and biochemical index prediction model includes the feature extraction submodel and a prediction regression submodel, specifically, a preset number of features are extracted by using the feature extraction submodel based on the data set, regression analysis is performed on the preset number of features and each corresponding physiological and biochemical index one by one to obtain a significant associated feature corresponding to each physiological and biochemical index, and the prediction regression submodel is established by using each physiological and biochemical index and the corresponding significant associated feature thereof. The feature extraction submodel is, for example, but not limited to, a partial least squares regression analysis model, a principal component analysis model, a support vector machine, a deep learning model, or the like.
Optionally, the physiological and biochemical index prediction model is obtained by deep learning model training, specifically, based on the data set, the deep learning model is constructed and trained by using the at least one physiological and biochemical index as output sample data and the point cloud data set of the preset facial region corresponding to each physiological and biochemical index as input sample data, and the physiological and biochemical index prediction model is obtained by training.
The physiological and biochemical indicators in the present embodiment include one or more of the following:
triglyceride (TG), blood Glucose (GLU), creatinine (CREA), UREA (UREA), uric Acid (UA), glutamic-pyruvic transaminase (ALT), systolic Blood Pressure (SBP), diastolic Blood Pressure (DBP), total Cholesterol (TCHO), high-density lipoprotein (HDL), low-density lipoprotein (LDL), total Bilirubin (TBIL), apolipoprotein A1 (ApoA 1) apolipoprotein B (ApoB), glutamic-oxaloacetic transaminase (GOT), alkaline phosphatase (AKP), lactate Dehydrogenase (LDH), and the like.
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Experimental procedures without specific conditions noted in the following examples, generally according to conventional conditions, or according to conditions recommended by the manufacturer.
In the examples, 3D facial image information and 13 physiological and biochemical index data items of 4809 samples were collected using 3dMD and 3D VECTRA H1. Association analysis between part of the physiological and biochemical indexes and the 3D facial image data showed that the indexes are significantly associated with facial features. Prediction models taking 3D facial point cloud data as input and physiological and biochemical indexes as output were then constructed by combining various dimension-reduction methods and deep learning, providing a basis for individual self-health monitoring and disease prevention.
Materials and methods
Facial image information of 4809 samples from three regions (Taizhou in Jiangsu, Nanning in Guangxi, and Zhengzhou in Henan) was collected using three-dimensional photogrammetry systems (3dMD, 3D VECTRA H1). In addition, 13 physiological and biochemical data items (systolic blood pressure SBP, diastolic blood pressure DBP, cholesterol CH, triglyceride TG, high-density lipoprotein HDL, low-density lipoprotein LDL, blood glucose GLU, glutamic-pyruvic transaminase ALT, direct bilirubin DBIL, creatinine CREA, urea UREA, uric acid UA, total bilirubin TB) and information such as age and sex were collected from these samples.
1. 3D data pre-processing
The original three-dimensional images taken by the 3dMD camera system were first processed with the 3dMDacquisition and 3dMDviewer software. The 3D data were then converted in format, and 17 anatomical landmark points were identified. The sample faces were mapped to a reference face by the thin plate spline (TPS) method. Finally, all sample faces were translated, rotated and corrected into a unified coordinate space by partial generalized Procrustes analysis (pGPA). The 3D images synthesized by 3D VECTRA H1 do not need correction. Because the three-dimensional raw data acquired from different sample individuals and different instruments have different point cloud densities and corresponding feature point information, three-dimensional features cannot be extracted uniformly; unified, standardized processing of the three-dimensional faces, referred to as preprocessing, therefore has to be carried out by computer. The preprocessing mainly comprises 3D template face construction, face model registration and quality control.
a) Constructing 3D template faces
The 3D template face (mask face) was constructed with MeshLab software based on the MeshMonk method (White JD et al., 2019). The specific steps are as follows: (1) The average face of 1000 East Asian population samples was first calculated. The three-dimensional coordinate system of the average face was determined by the method of Guo J et al. (2013): the nose tip is determined by sphere fitting; a plane is then determined from the line connecting the two exocanthions and the line connecting the two mouth corners; the normal of the nose tip on this plane is the z-axis, and the projection onto the plane of the midline between the exocanthion and mouth-corner lines on the two sides is the y-axis, so that the projection of the nose tip onto the plane is the origin; the line in the plane passing through the origin and perpendicular to the y and z axes is the x-axis. (2) The posture of the average face was corrected and the nose tip was set as the origin. (3) The point cloud of one half of the face, bounded by the vertical axis, was taken and down-sampled to about 4000 dense feature points by Poisson-disk sampling; the down-sampled feature points were then mirrored to the other half of the face to obtain a complete, symmetric template face. (4) Adjacent or duplicate points were merged with a spacing of 2 mm as the boundary, and the point cloud surface was reconstructed by the ball-pivoting method, yielding 15598 faces. (5) Outliers were removed manually, giving a template face with a point spacing of about 2 mm, containing 7906 dense points and 15598 faces.
b) 3D face model registration (standardization)
MeshMonk software was used for 3D face registration: the template face was mapped to each acquired 3D sample face, and after rigid alignment, non-rigid alignment and generalized Procrustes analysis, the original dense facial point data (raw data) were uniformly registered into a point cloud format consisting of 7906 dense feature points, completing the 3D face registration (FIG. 6). The specific steps are as follows:
(1) Configure the environment required by MeshMonk, including Matlab 2018, Visual Studio 2017, Eigen 3.3.4, etc.
(2) Find the nose tip on the face by sphere fitting and set its coordinates as the origin. For samples in which sphere fitting of the nose tip fails, mark the nose tip manually before the subsequent operations.
(3) Perform rigid registration using the iterative closest point (ICP) algorithm, fitting the template face to the sample face so that the two are consistent in spatial position.
(4) Perform non-rigid registration using the thin plate spline (TPS) algorithm: the template face is deformed and then fitted to the sample face for registration, and the deformed template face is the aligned sample face.
(5) Translate, rotate and scale all registered samples into the same coordinate space using generalized Procrustes analysis (GPA). Each normalized, registered sample face finally consists of 7906 dense feature points, and the resulting normalized dense point cloud can be used for subsequent analyses such as phenotype extraction. For each face, the x-axis represents the direction in which the front face extends to the left and right sides with the nose tip as the center, the y-axis represents the direction in which the front face extends to the upper and lower sides (forehead and chin) with the nose tip as the center, and the z-axis represents the direction in which the front face is concave and convex to the inner and outer sides.
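The registration pipeline above relies on MeshMonk; as a rough, assumed illustration of only the final GPA step, the following NumPy sketch translates, rotates and scales a set of already-registered faces into a shared coordinate space. It is not the MeshMonk implementation.

```python
# Minimal generalized Procrustes alignment sketch for (n_samples, 7906, 3) face point clouds.
import numpy as np

def procrustes_align(shape, reference):
    """Rotate and scale a centred shape onto a centred reference (ordinary Procrustes)."""
    u, _, vt = np.linalg.svd(shape.T @ reference)      # 3x3 cross-covariance
    rotation = u @ vt
    aligned = shape @ rotation
    scale = np.sum(reference * aligned) / np.sum(aligned ** 2)
    return aligned * scale

def generalized_procrustes(faces, n_iter=5):
    """faces: (n_samples, n_points, 3); returns faces aligned to an iteratively refined mean."""
    centred = faces - faces.mean(axis=1, keepdims=True)    # translation to a common origin
    mean_face = centred[0]
    for _ in range(n_iter):
        centred = np.stack([procrustes_align(f, mean_face) for f in centred])
        mean_face = centred.mean(axis=0)                   # update the consensus face
    return centred
```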
c) Facial quality control
The standardized three-dimensional facial images must undergo quality control to prevent samples with failed registration or large missing areas from interfering with subsequent analyses. Abnormal faces caused by mapping errors can be detected by a Mahalanobis distance computed from facial symmetry, with the following specific steps:
(1) First flip the mapped template face left-right, perform GPA on the faces before and after flipping, and calculate the Mahalanobis distance of each corresponding point.
(2) Average the Mahalanobis distances of all points of each face and define this value as the symmetry value of the sample face. Observe the distribution of the symmetry values and, following the central limit theorem, define values greater than or equal to μ + 3σ as abnormal.
(3) For the abnormal-value samples, manually check whether a mapping error occurred; if so, redo the mapping or delete the sample.
(4) Apply GPA again to translate, rotate and scale the retained samples, obtaining analysable sample data.
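A hedged sketch of this symmetry-based quality control is given below. In the actual pipeline the mirrored face is re-registered with GPA so that points correspond anatomically; the sketch assumes point-wise correspondence after mirroring and estimates a single 3×3 covariance for the Mahalanobis distance, which is a simplification.

```python
# Assumed symmetry QC: flag faces whose mean per-point Mahalanobis distance to their
# mirrored counterpart exceeds mean + 3*std, following the mu + 3*sigma rule above.
import numpy as np

def symmetry_outliers(faces):
    """faces: (n_samples, n_points, 3) registered point clouds, x-axis = left/right."""
    mirrored = faces.copy()
    mirrored[:, :, 0] *= -1.0                      # flip about the mid-sagittal plane
    diff = faces - mirrored                        # per-point asymmetry vectors

    flat = diff.reshape(-1, 3)
    cov_inv = np.linalg.inv(np.cov(flat, rowvar=False))
    m_dist = np.sqrt(np.einsum('spk,kl,spl->sp', diff, cov_inv, diff))

    symmetry = m_dist.mean(axis=1)                 # one symmetry value per face
    threshold = symmetry.mean() + 3 * symmetry.std()
    return symmetry, symmetry > threshold          # values and outlier mask
```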
After all sample 3D images were uniformly normalized, 7906 point cloud data were obtained.
2. Pretreatment of physiological and biochemical indexes
The preprocessing of the physiological and biochemical indexes mainly consists of checking and handling outliers and missing values; missing and outlier data were deleted.
3. Physiological and biochemical index prediction modeling and correlation analysis based on 3D facial image data
The 4809 samples were first split into a discovery set (2641) and a validation set (2168). Based on the 3D coordinate point data of the 2641 Taizhou Han Chinese samples in the discovery set, features associated with the physiological and biochemical indexes were screened using support vector machines (SVM), principal component analysis (PCA), partial least squares regression (PLS), deep learning and other methods, and prediction models of the physiological and biochemical indexes were established from the screened features. The SVM performs feature screening automatically, whereas PCA and PLS require semi-automated feature screening. Since the variance explained by successive PCA or PLS features decreases, the top 150 features were selected according to their cumulative explained variance (above 90%). Regression analysis was then performed between each of the top 150 PCA or PLS features and the corresponding physiological and biochemical indexes, one by one, to screen out the significantly associated features. Finally, a prediction regression model was established from the significantly associated features together with age and sex, and the merits of each method were assessed by 10-fold cross-validation (10-fold CV) (FIG. 4). As shown in FIG. 4, the 2641 samples were randomly split into 10 parts; in each round, 9 parts were used for training the prediction model and the remaining part for testing. The effect, mean absolute error (MAD), correlation and other statistics of the 10 rounds of prediction modeling were then summarized.
The project used the 2168 samples of the validation set to verify the prediction models established by the different methods. The validation data set was preprocessed in the same way, the prediction models were applied to it, and the effects of the different models were finally evaluated by receiver operating characteristic (ROC) curves, specificity, sensitivity, accuracy, correlation and root mean square error (RMSE).
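A minimal sketch of the evaluation loop described above is given below, assuming a PLS model and reporting per-fold correlation and RMSE; the number of components and the metric choices are illustrative, not the exact settings of this study.

```python
# 10-fold cross-validation of a PLS-based indicator model on the discovery set.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

def cross_validate_pls(X, y, n_components=20, n_splits=10, seed=0):
    fold_r, fold_rmse = [], []
    for train_idx, test_idx in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model = PLSRegression(n_components=n_components).fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx]).ravel()
        fold_r.append(np.corrcoef(pred, y[test_idx])[0, 1])        # correlation per fold
        fold_rmse.append(np.sqrt(mean_squared_error(y[test_idx], pred)))
    return float(np.mean(fold_r)), float(np.mean(fold_rmse))
```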
Further, a concept of full face association analysis (FaWAS) is proposed. By analyzing the relevance of each coordinate point on the three-dimensional face image and the corresponding phenotype (physiological and biochemical index), the coordinate point which is obviously related to the phenotype is found.
Further, based on the coordinate points screened out by each physiological and biochemical index FaWAS, a prediction model of the physiological and biochemical index is reconstructed and compared with a modeling effect based on full-face point cloud data.
Meanwhile, the current popular deep learning method is also utilized to carry out prediction modeling on physiological and biochemical indexes, and the effect of the method is compared with the effect of the traditional PLS and PCA modeling methods.
(II) comparative verification of research results
1. Prediction effect comparison of three prediction modeling methods and systems (SVM, PCA, PLS)
Three predictive modeling methods (SVM, PCA, PLS) were used to predict physiological and biochemical indexes from 3D facial images. For systolic blood pressure (SBP) and diastolic blood pressure (DBP), the SVM performed worse on the test set, while PCA and PLS performed comparably and both outperformed the SVM (Table 1). The subsequent modeling therefore mainly employed the PLS and PCA methods.
Table 1. 10-fold cross-validation results of the SVM, PCA and PLS health prediction models based on 3D images
(Table 1 is provided as an image in the original publication and is not reproduced here.)
2. Correlation analysis result of facial features and health indexes extracted based on partial least square method
Facial features were extracted from the three-dimensional facial images by partial least squares, and their correlation coefficients with seven health indexes were calculated in the discovery set for the whole group, women and men. The facial features extracted by partial least squares have significant correlations with almost all health indexes (except systolic blood pressure in the male group), and the correlation coefficients for systolic blood pressure, diastolic blood pressure, triglycerides and high-density lipoprotein cholesterol in the discovery set remain above 0.2 (Table 2).
Table 2. Correlation analysis results, in the discovery set, of facial features extracted by partial least squares and health indexes
(Table 2 is provided as an image in the original publication and is not reproduced here.)
The same model was then applied in the validation set to extract the facial features of the three-dimensional facial images and perform correlation analysis. The correlations in the whole group and in women were all replicated successfully, whereas in men only the correlations of systolic blood pressure and high-density lipoprotein with facial features were replicated (Table 3).
Table 3. Correlation analysis results, in the validation set, of facial features extracted by partial least squares and health indexes
(Table 3 is provided as an image in the original publication and is not reproduced here.)
Because the three-dimensional facial image contains information related to age and BMI, correcting for them in the linear regression analysis is not sufficient to remove their influence on the correlation results completely. To better analyse the correlation between three-dimensional facial features and the health indexes, the residual of each original health index after linear regression on age and BMI was extracted and used as a new health index for the correlation analysis with facial features.
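A minimal sketch of this residualisation step, assuming age and BMI are available as arrays, is:

```python
# Regress the raw health index on age and BMI and keep the residual as the adjusted index.
import numpy as np
from sklearn.linear_model import LinearRegression

def residualize(indicator, age, bmi):
    covariates = np.column_stack([age, bmi])
    fitted = LinearRegression().fit(covariates, indicator).predict(covariates)
    return indicator - fitted            # residual used as the new health index
```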
After correlation analysis using the health index residuals, all seven health indexes still retained significant correlations with facial features in the whole group and in women (Table 3), whereas in the male group only triglycerides, high-density lipoprotein and low-density lipoprotein remained significantly correlated (Table 4).
Table 4. Correlation analysis results, in the discovery set, of facial features extracted by partial least squares and health index residuals
(Table 4 is provided as an image in the original publication and is not reproduced here.)
In the validation set, the correlations of triglycerides and high-density lipoprotein were replicated in the whole group, and the associations of total cholesterol, triglycerides, high-density lipoprotein and low-density lipoprotein with facial features were replicated in the female group, whereas no association was replicated in the male group. Apart from triglycerides, whose correlation exceeds 0.2, no other health index residual reaches a correlation of 0.2 or more.
Table 5. Correlation analysis results, in the validation set, of facial features extracted by partial least squares and health index residuals
(Table 5 is provided as an image in the original publication and is not reproduced here.)
In addition, the ways in which the seven health indexes influence facial features were visualised by adding or subtracting standard deviations along the facial features extracted by partial least squares (FIGS. 5 to 11). To make the differences in facial features more obvious, each row of facial images in FIGS. 5 to 11 runs from the extreme face at -3 standard deviations on the left to +3 standard deviations on the right.
Comparison of the facial features shows that, for the six indexes other than high-density lipoprotein, higher levels correspond to wider faces and more protruding eyes than lower levels.
3. Health index prediction model based on full-face point cloud PLS and PCA
The prediction models of 13 physiological and biochemical indexes based on the 3D full-face point cloud and PLS predict triglyceride TG, blood glucose GLU, creatinine CREA, uric acid UA and glutamic-pyruvic transaminase ALT relatively well.
On the 10-fold cross-validation test sets of the discovery set (TZ14), the PLS-based prediction models of the 13 physiological and biochemical indexes achieved prediction AUCs above 0.7 for triglyceride TG, blood glucose GLU, creatinine CREA, uric acid UA and glutamic-pyruvic transaminase ALT. The prediction accuracy in the 3 validation sets (TZ15, ZZ17, NN18) also essentially reached above 0.7 (FIG. 12). The effect of the PCA-based prediction models of the 13 indexes based on the 3D full-face point cloud is similar to that of PLS (FIG. 13).
4. Physiological and biochemical index prediction model characteristic quantification based on full-face point cloud PLS
In order to explore which facial features are extracted for physiological and biochemical index prediction modeling, we extracted and further analyzed the facial features used by the triglyceride prediction model.
The PLS-based triglyceride prediction model finally used 5 x-axis principal components, 7 y-axis principal components and 8 z-axis principal components that were significantly associated with triglycerides.
xpc_i = P_i^T X, i = 1, 2, …, 5
ypc_j = P_j^T Y, j = 1, 2, …, 7
zpc_k = P_k^T Z, k = 1, 2, …, 8
where P_i, P_j and P_k are the mapping matrices from the 7906 x-axis, y-axis and z-axis coordinates of the sample set to the PLS-extracted principal components xpc_i, ypc_j and zpc_k, with matrix dimension 2841 × 7906, and X, Y and Z are the 3D face coordinate matrices of the 2841 samples, with matrix dimension 2841 × 7906.
The physiological and biochemical index prediction regression model is:
y = intercept + β_age·age + β_sex·sex + β_xpc1·xpc_1 + β_xpc2·xpc_2 + β_xpc3·xpc_3 + β_xpc4·xpc_4 + β_xpc5·xpc_5 + β_ypc1·ypc_1 + β_ypc2·ypc_2 + β_ypc3·ypc_3 + β_ypc4·ypc_4 + β_ypc5·ypc_5 + β_ypc6·ypc_6 + β_ypc7·ypc_7 + β_zpc1·zpc_1 + β_zpc2·zpc_2 + β_zpc3·zpc_3 + β_zpc4·zpc_4 + β_zpc5·zpc_5 + β_zpc6·zpc_6 + β_zpc7·zpc_7 + β_zpc8·zpc_8
The facial features of the physiological and biochemical index prediction model are quantified as follows:
quantification of X-axis features on 3D face coordinates,
X_feature = β_xpc1·xpc_1 + β_xpc2·xpc_2 + β_xpc3·xpc_3 + β_xpc4·xpc_4 + β_xpc5·xpc_5
          = β_xpc1·P_1^T X + β_xpc2·P_2^T X + β_xpc3·P_3^T X + β_xpc4·P_4^T X + β_xpc5·P_5^T X
          = (β_xpc1·P_1 + β_xpc2·P_2 + β_xpc3·P_3 + β_xpc4·P_4 + β_xpc5·P_5)^T X
          = β_xprojection^T X
Similarly, the quantification equations and quantification coefficients of the y-axis and z-axis features on the 3D face coordinates can be calculated:
Y_feature = β_yprojection^T Y,
Z_feature = β_zprojection^T Z.
we will get beta xprojection ,β yprojection ,β zprojection Shown on a 3D facial coordinate system (fig. 14), it was found that the predictive model of triglyceride was relatively high in the x-axis for both cheek weights (the darker the blue or orange color represents the higher the weight coefficient), and cheek width appears to correlate with triglyceride height; giving higher weight coefficients to the vicinity of the eyebrows, temples, cheeks and upper and lower jaws on the y axis, and showing that the eyebrow areas, temples, upper and lower jaw areas extend upwards and cheeks extend downwards to be related to triglyceride height on the whole; the regions are given higher weight on the z-axis (darker blue or orange color represents higher weight coefficient), where orange represents the degree of protrusion forward of the face and blue represents the degree of recession rearward of the face, overall showing a high correlation of protrusion of the eye, maxillomandibular, and cheek regions from triglycerides.
5. Face-wide Association Analyses (FaWAS) results
Referring to the design of genome-wide association studies (GWAS), we propose the concept of face-wide association analysis (FaWAS): by analysing the association between each coordinate point on the three-dimensional facial image and the corresponding phenotype (health index), the coordinate points significantly associated with the phenotype are identified. Covariates such as age, gender and BMI were corrected for, and a linear regression model was used for the analysis. For the triglyceride phenotype, multiple coordinate points on the three-dimensional facial image were significantly associated with it and could be successfully replicated in the validation set (discovery-set threshold P = 1.55×10⁻⁶, FIG. 15).
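An assumed sketch of the FaWAS scan is shown below: every coordinate of every point is tested in a linear model with age, sex and BMI as covariates, and points passing the threshold are reported; statsmodels is used here purely for illustration.

```python
# Face-wide association analysis (FaWAS) sketch with covariate adjustment.
import numpy as np
import statsmodels.api as sm

def fawas(points, phenotype, covariates, p_threshold=1.54e-6):
    """points: (n_samples, n_points, 3); phenotype: (n_samples,); covariates: (n_samples, k)."""
    n_samples, n_points, _ = points.shape
    pvals = np.ones((n_points, 3))
    for j in range(n_points):
        for axis in range(3):
            X = sm.add_constant(np.column_stack([points[:, j, axis], covariates]))
            fit = sm.OLS(phenotype, X).fit()
            pvals[j, axis] = fit.pvalues[1]      # P value of the coordinate term
    return pvals, pvals < p_threshold            # per-coordinate P values and hit mask
```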
On the x-axis, both cheeks tended to extend further outward, and the same trend appeared in both male and female samples, indicating that high triglycerides are accompanied by facial widening, a feature consistent with obesity. On the z-axis, the eye region (upper and lower eyelids) bulges outward, a feature resembling oedema. This association with triglycerides was more pronounced in the pooled male and female samples (P < 1.54×10⁻⁶) and slightly weaker in the separate female and male samples (P < 10⁻⁵), presumably because the sample size decreases after splitting by sex. This finding has not been reported before; we hypothesise that the outward bulging of the eye region may be associated with increased triglyceride storage in the fat and muscle around the eye, which remains to be confirmed. No significant change was found on the y-axis. The results indicate that a higher triglyceride level is associated with wider cheeks and more protruding eyes (FIG. 16). These results are similar to the facial features used by the PLS-based prediction models.
For the other health indicators, total cholesterol and low-density lipoprotein were also significantly associated with multiple three-dimensional coordinate points on the facial image in the whole population, and the results were replicated in the validation set. For both indicators, the cheeks on both sides tended to extend outward along the x-axis, i.e. samples with higher values showed fuller cheeks. However, after splitting into male and female sets, the significant associations of these two indicators on the x-axis were reduced (fig. 17, fig. 18). The remaining four health indicators (systolic blood pressure, diastolic blood pressure, high-density lipoprotein and glucose) showed no significant association with any three-dimensional coordinate point.
We then computed a "facial point risk score" from all the facial three-dimensional points that replicated in the triglyceride face-wide association analysis (P value in the replication set below 0.05) to assess an individual's risk of high triglyceride. The facial point risk score was clearly correlated with triglyceride level (correlation coefficient 0.25, P < 2×10⁻¹⁶). Comparing quartiles of the facial point risk score, the triglyceride level differed significantly between the top and bottom quartiles (P < 2×10⁻¹⁶); consistent with the positive correlation, individuals in the top-scoring quartile had higher triglyceride levels. These results indicate that the facial point risk score has good predictive value for triglycerides. Using causal inference, we further found that triglycerides drive the changes in facial morphology (fig. 19).
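A minimal sketch of how such a facial point risk score and its evaluation might be computed is given below. Weighting the replicated points by their discovery-set effect sizes, the function names and the use of a Welch t-test for the quartile comparison are assumptions for illustration, not the study's exact procedure.

```python
import numpy as np
from scipy import stats

def facial_point_risk_score(points, weights):
    """Weighted sum over the replicated significant points.

    points  : (n_samples, n_sig_points) coordinate values at the replicated points.
    weights : (n_sig_points,) regression coefficients estimated in the discovery set.
    """
    return points @ weights

def evaluate_score(score, triglyceride):
    """Correlate the score with triglyceride and compare its top and bottom quartiles."""
    r, p = stats.pearsonr(score, triglyceride)
    q1, q3 = np.quantile(score, [0.25, 0.75])
    low, high = triglyceride[score <= q1], triglyceride[score >= q3]
    _, p_quart = stats.ttest_ind(high, low, equal_var=False)
    return r, p, p_quart
```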
6. PLS and PCA physiological and biochemical index prediction models based on FaWAS significant points
Based on the association results between the whole-face point cloud and the 13 physiological and biochemical indexes, 3D point data for predictive modeling were screened at a threshold of P < 6.3×10⁻⁶. After building PLS and PCA prediction models with these points, 6 physiological and biochemical indexes could be modeled successfully, with performance similar to full-face modeling. The PLS-based prediction models for SBP, TG and GLU achieved prediction performance above 0.7 in both the discovery set and the validation set (fig. 20). The PCA-based prediction performance was slightly lower for HDL (fig. 21).
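As a rough illustration of this screening-then-modeling step, the following Python sketch selects the points passing the FaWAS threshold and fits a PLS regression on them. The number of components, the evaluation metric and all names are assumptions, not the parameters used in the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from scipy.stats import pearsonr

def pls_on_significant_points(train_points, train_y, test_points, test_y,
                              pvals, threshold=6.3e-6, n_components=5):
    """Fit PLS only on FaWAS-significant 3D points; a sketch under assumptions.

    *_points : (n_samples, n_points) flattened coordinates of the face point cloud.
    pvals    : (n_points,) FaWAS P values from the discovery set.
    """
    keep = pvals < threshold                      # screen points by the threshold
    pls = PLSRegression(n_components=n_components)
    pls.fit(train_points[:, keep], train_y)
    pred = pls.predict(test_points[:, keep]).ravel()
    r, _ = pearsonr(pred, test_y)                 # prediction-observation correlation
    return pls, r
```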
We also examined the physiological and biochemical indicators that could not be modeled successfully; the main reason was that FaWAS found no point crossing the threshold (P < 6.3×10⁻⁶) for them, leaving no data for predictive modeling.
Meanwhile, for the successfully modeled physiological and biochemical indexes, the prediction performance of modeling based on the FaWAS significant points was similar to that of full-face modeling.
7. 3D image physiological and biochemical index prediction model based on deep learning
We also used currently popular deep learning methods (multilayer perceptron, PointNet++, etc.) to predict the 13 physiological and biochemical indexes. The results show that the prediction performance of deep learning is not significantly better than, and in some cases is slightly lower than, that of the conventional statistical models (fig. 22).
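For illustration only, a simple deep-learning-style baseline of this kind could be set up as below, using a multilayer perceptron on flattened point-cloud coordinates. The architecture and hyperparameters are assumptions and do not correspond to the models actually trained in the study.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# MLP baseline on flattened (x, y, z) coordinates of the registered face point cloud.
# Hidden sizes, regularization and iteration count are assumptions for illustration.
mlp = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(512, 128, 32),
                 alpha=1e-3, max_iter=500, random_state=0),
)
# Usage sketch:
#   mlp.fit(train_points, train_tg)      # train_points: (n_samples, 3 * n_points)
#   predictions = mlp.predict(test_points)
```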
8. 2D face image physiological and biochemical index prediction model based on deep learning
We also attempted deep-learning-based predictive modeling with 2D face images for systolic blood pressure (SBP) and diastolic blood pressure (DBP). The results show that modeling based on 3D face images outperforms modeling based on 2D face images (see Table 6).
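A minimal sketch of a 2D-image regressor of this kind is shown below in PyTorch; the architecture, image size and training settings are assumptions for illustration and are not the network used in the study.

```python
import torch
import torch.nn as nn

class BPRegressor(nn.Module):
    """Minimal CNN mapping a 2D face image to (SBP, DBP); a sketch, not the study's model."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global average pooling
        )
        self.head = nn.Linear(64, 2)            # two outputs: SBP and DBP

    def forward(self, x):                       # x: (batch, 3, H, W) face images
        return self.head(self.features(x).flatten(1))

model = BPRegressor()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```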
TABLE 6 Blood pressure prediction effect based on 2D images and on 3D images
(III) Conclusion
This project used 4809 samples, divided into a discovery set (n = 2641) and a validation set (n = 2168), and constructed prediction models for 13 physiological and biochemical indexes based on 3D face image data. The study found that the PLS prediction method based on 3D face images performs best, better than the methods based on deep learning and on 2D face images.
The project also identified associations between facial regions and physiological and biochemical indexes; for example, a high triglyceride level is significantly associated with wider cheeks and more prominent eyes. Furthermore, causal analysis demonstrated that elevated triglyceride is responsible for the changes in facial morphology.
it should be noted that, as will be understood by those skilled in the art, the implementation functions of the modules shown in the above embodiment of the system for predicting physiological and biochemical indexes based on a three-dimensional image of a human face may be understood by referring to the aforementioned method for constructing a model for predicting physiological and biochemical indexes based on a three-dimensional image of a human face or the related description of the prediction of physiological and biochemical indexes based on a three-dimensional image of a human face. The functions of the modules shown in the embodiment of the prediction system based on physiological and biochemical indexes of the three-dimensional image of the human face can be realized by a program (executable instruction) running on a processor, and can also be realized by a specific logic circuit. The prediction system based on physiological and biochemical indexes of the three-dimensional human face image in the embodiment of the application can be stored in a computer readable storage medium if the prediction system is realized in the form of a software functional module and is sold or used as an independent product. Based on such understanding, the technical solutions of the embodiments of the present application may be essentially implemented or portions thereof contributing to the prior art may be embodied in the form of a software product stored in a storage medium, and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the method of the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read Only Memory (ROM), a magnetic disk, or an optical disk, and various media capable of storing program codes. Thus, embodiments of the present application are not limited to any specific combination of hardware and software.
Accordingly, the present application also provides a computer-readable storage medium in which computer-executable instructions are stored; when executed by a processor, the computer-executable instructions implement the method embodiments of the present application. Computer-readable storage media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, a computer-readable storage medium does not include transitory computer-readable media such as modulated data signals and carrier waves.
In addition, the embodiments of the present application also provide a system for constructing a physiological and biochemical index model based on a three-dimensional face image, which includes a memory and a processor; the memory is used to store computer-executable instructions, and the processor is used to implement the steps of the method for constructing a physiological and biochemical index model based on a three-dimensional face image when executing the computer-executable instructions in the memory. The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. The aforementioned memory may be a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk, or a solid-state disk. The steps of the methods disclosed in the embodiments of the present application may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor.
In addition, the embodiments of the present application also provide a system for predicting physiological and biochemical indexes based on a three-dimensional face image, which includes a memory for storing computer-executable instructions and a processor; the processor is used to implement the steps of the method for predicting physiological and biochemical indexes based on a three-dimensional face image when executing the computer-executable instructions in the memory. The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. The aforementioned memory may be a read-only memory (ROM), a random access memory (RAM), a flash memory, a hard disk, or a solid-state disk. The steps of the methods disclosed in the embodiments of the present application may be implemented directly by a hardware processor, or by a combination of hardware and software modules in the processor.
It should be noted that, in the present patent application, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Likewise, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that comprises the element. In the present patent application, where it is stated that an action is performed according to a certain element, this means that the action is performed according to at least that element, covering two cases: performing the action based on that element only, and performing the action based on that element together with other elements. Expressions such as "multiple" and "a plurality of" include two or more than two instances, kinds, or times.
All documents mentioned in this application are considered to be included in their entirety in the disclosure of this application, so that they may serve as a basis for amendment where necessary. It should be understood that the above description covers only the preferred embodiments of the present disclosure and is not intended to limit its scope. Any modification, equivalent replacement, improvement, or the like made within the spirit and principle of one or more embodiments of the present disclosure shall be included in the scope of protection of one or more embodiments of the present disclosure.

Claims (14)

1. A construction method of a physiological and biochemical index prediction model based on a human face three-dimensional image is characterized by comprising the following steps:
(a) Acquiring a data set of a plurality of three-dimensional sample faces, wherein the data set comprises at least one physiological and biochemical index corresponding to each three-dimensional sample face and a point cloud data set of a preset face area corresponding to each physiological and biochemical index;
(b) Analyzing and processing the data set to construct the physiological and biochemical index prediction model.
2. The method of constructing as claimed in claim 1, wherein said step (b) further comprises the sub-steps of:
extracting a preset number of features by using a feature extraction sub-model based on the data set, and performing regression analysis on the preset number of features and each physiological and biochemical index corresponding to the preset number of features one by one to obtain a significant associated feature corresponding to each physiological and biochemical index;
and establishing a prediction regression sub-model by utilizing each physiological and biochemical index and the significantly associated features corresponding thereto, wherein the physiological and biochemical index prediction model comprises the feature extraction sub-model and the prediction regression sub-model.
3. The method of constructing as claimed in claim 1, wherein said step (b) further comprises the sub-steps of:
and constructing and training a deep learning model by taking the at least one physiological and biochemical index as output sample data and the point cloud data set of the preset face area corresponding to each physiological and biochemical index as input sample data based on the data set, and training to obtain the physiological and biochemical index prediction model.
4. The construction method according to any one of claims 1 to 3, wherein the data sets of the plurality of three-dimensional sample faces further include individual attribute information corresponding to each three-dimensional sample face.
5. The construction method according to claim 4, wherein the individual attribute information is selected from one or more of the following groups:
age, sex, height, weight, waist circumference, hip circumference.
6. The method of any one of claims 1-3, wherein the at least one physiological-biochemical indicator comprises one or more of:
triglyceride TG, blood glucose GLU, creatinine CREA, uric acid UA, glutamic pyruvic transaminase ALT.
7. The method of construction of claim 2, wherein the feature extraction submodel is a partial least squares regression analysis model, a principal component analysis model, a support vector machine, or a deep learning model.
8. The construction method according to claim 1, wherein the step (a) further comprises the substeps of:
and preprocessing each three-dimensional sample face to obtain corresponding full-face point cloud data, wherein the point cloud data set of the preset face area is the full-face point cloud data.
9. The construction method according to claim 1, wherein the step (a) further comprises the substeps of:
preprocessing each three-dimensional sample face to obtain corresponding full-face point cloud data;
and performing correlation analysis on each three-dimensional coordinate point data in the full-face point cloud data and each corresponding physiological and biochemical index, and extracting a point cloud data set which is obviously correlated with each physiological and biochemical index, wherein the point cloud data set of the preset face area is the extracted point cloud data set which is obviously correlated with each physiological and biochemical index.
10. A method for predicting physiological and biochemical indexes based on a human face three-dimensional image is characterized by comprising the following steps:
constructing a physiological and biochemical index prediction model according to the construction method of any one of claims 1 to 9;
and acquiring a point cloud data set of the three-dimensional image of the face to be predicted, inputting the acquired point cloud data set into the constructed physiological and biochemical index prediction model, and outputting to obtain the physiological and biochemical index of the face to be predicted.
11. A system for predicting physiological and biochemical indexes based on a human face three-dimensional image is characterized by comprising:
the acquisition module is used for acquiring a three-dimensional image of the face to be predicted and converting the three-dimensional image into a corresponding point cloud data set;
a physiological and biochemical index prediction model module, wherein the physiological and biochemical index prediction model is constructed according to the construction method of any one of claims 1 to 9 and is used for inputting a point cloud data set of the three-dimensional image of the human face to be predicted to obtain the physiological and biochemical index of the three-dimensional image of the human face to be predicted;
and the output module is used for outputting the physiological and biochemical indexes of the three-dimensional image of the human face to be predicted.
12. A construction system of a physiological and biochemical index model based on a human face three-dimensional image is characterized by comprising the following steps:
a memory for storing computer executable instructions; and,
a processor for implementing the steps in the construction method according to any one of claims 1 to 9 when executing the computer-executable instructions.
13. A system for predicting physiological and biochemical indexes based on a human face three-dimensional image is characterized by comprising:
a memory for storing computer executable instructions; and,
a processor for implementing the steps in the prediction method of claim 10 when executing the computer executable instructions.
14. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor implement the steps in the method of any one of claims 1-9.
CN202110491808.4A 2021-05-06 2021-05-06 Method and system for predicting physiological and biochemical indexes and constructing prediction model Pending CN115312200A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110491808.4A CN115312200A (en) 2021-05-06 2021-05-06 Method and system for predicting physiological and biochemical indexes and constructing prediction model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110491808.4A CN115312200A (en) 2021-05-06 2021-05-06 Method and system for predicting physiological and biochemical indexes and constructing prediction model

Publications (1)

Publication Number Publication Date
CN115312200A true CN115312200A (en) 2022-11-08

Family

ID=83853228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110491808.4A Pending CN115312200A (en) 2021-05-06 2021-05-06 Method and system for predicting physiological and biochemical indexes and constructing prediction model

Country Status (1)

Country Link
CN (1) CN115312200A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116403718A (en) * 2023-06-08 2023-07-07 中国医学科学院阜外医院 Method, device, equipment and storage medium for constructing physiological indication prediction model
CN116403718B (en) * 2023-06-08 2023-09-01 中国医学科学院阜外医院 Method, device, equipment and storage medium for constructing physiological indication prediction model

Similar Documents

Publication Publication Date Title
CN109859203B (en) Defect tooth image identification method based on deep learning
Gao et al. Automatic feature learning to grade nuclear cataracts based on deep learning
EP3164062B1 (en) Detecting tooth wear using intra-oral 3d scans
US8099299B2 (en) System and method for mapping structural and functional deviations in an anatomical region
US20090292478A1 (en) System and Method for Analysis of Multiple Diseases and Severities
US20090292557A1 (en) System and Method for Disease Diagnosis from Patient Structural Deviation Data
US20090290772A1 (en) Medical Data Processing and Visualization Technique
JP7189257B2 (en) Method, apparatus and computer readable storage medium for detecting specific facial syndromes
Luo et al. Retinal image classification by self-supervised fuzzy clustering network
Dempere-Marco et al. The use of visual search for knowledge gathering in image decision support
CN110338763A Image processing method and device for intelligent traditional Chinese medicine detection
KR20190087681A (en) A method for determining whether a subject has an onset of cervical cancer
CN111612756A (en) Coronary artery specificity calcification detection method and device
Iqbal et al. Texture analysis of ultrasound images of chronic kidney disease
CN111340794B (en) Quantification method and device for coronary artery stenosis
McCullough et al. Convolutional neural network models for automatic preoperative severity assessment in unilateral cleft lip
CN115312200A (en) Method and system for predicting physiological and biochemical indexes and constructing prediction model
CN113516639B (en) Training method and device for oral cavity abnormality detection model based on panoramic X-ray film
CN114445784A (en) Method and system for acquiring CRRT screen parameters in real time
Rao et al. A Review on Alzheimer’s disease through analysis of MRI images using Deep Learning Techniques
Hasan et al. Dental impression tray selection from maxillary arch images using multi-feature fusion and ensemble classifier
CN116092157A (en) Intelligent facial tongue diagnosis method, system and intelligent equipment
CN110570425A (en) Lung nodule analysis method and device based on deep reinforcement learning algorithm
CN116407080A (en) Evolution identification and 3D visualization system and method for fundus structure of myopic patient
WO2022252107A1 (en) Disease examination system and method based on eye image

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination