CN114038564A - Noninvasive risk prediction method for diabetes

Info

Publication number
CN114038564A
Authority
CN
China
Prior art keywords
size
multiplied
layer
output
diabetes
Prior art date
Legal status
Granted
Application number
CN202111332652.1A
Other languages
Chinese (zh)
Other versions
CN114038564B (en)
Inventor
张冰
郭立川
宋欣
齐峰
白晶
张媛
高瑞军
王朝
姚柏韬
Current Assignee
Zhu Xianyi Memorial Hospital Of Tianjin Medical University
Original Assignee
Zhu Xianyi Memorial Hospital Of Tianjin Medical University
Priority date
Filing date
Publication date
Application filed by Zhu Xianyi Memorial Hospital Of Tianjin Medical University filed Critical Zhu Xianyi Memorial Hospital Of Tianjin Medical University
Priority to CN202111332652.1A priority Critical patent/CN114038564B/en
Publication of CN114038564A publication Critical patent/CN114038564A/en
Application granted granted Critical
Publication of CN114038564B publication Critical patent/CN114038564B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G16H 50/30: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; calculating health indices; individual health risk assessment
    • G16H 50/50: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; simulation or modelling of medical disorders
    • A61B 5/0062: Measuring for diagnostic purposes using light; arrangements for scanning
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/7267: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems, involving training the classification device
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/048: Neural networks; activation functions
    • G06N 3/08: Neural networks; learning methods


Abstract

The invention discloses a noninvasive risk prediction method for diabetes, belonging to the field of intelligent medical treatment, which comprises the following steps: recruiting subjects and constructing a data set of facial images of diabetic patients and healthy people; preprocessing the images; locating facial feature points, cropping and stitching key regions, and labeling them with the corresponding diagnostic information; randomly dividing the labeled sample data set into a training set, a validation set and a test set; building a residual attention network and performing supervised machine learning on the samples; and tuning model parameters according to performance on the validation set, evaluating the generalization ability of the model from its discrimination on the test set, and obtaining a well-performing risk prediction model by cross-validation. The risk prediction model constructed by the invention can perform rapid, noninvasive and accurate diabetes risk prediction by analyzing the facial image features of a subject, providing a new method for large-scale screening and auxiliary diagnosis of diabetes.

Description

Noninvasive risk prediction method for diabetes
Technical Field
The invention belongs to the field of intelligent medical treatment, and in particular relates to a noninvasive diabetes risk prediction method based on facial images and a residual attention network.
Background
Diabetes is a chronic metabolic disease that is generally difficult to detect early in the disease course, before complications occur. According to estimates in the Diabetes Atlas (9th edition) published by the International Diabetes Federation (IDF), about 232 million adults with diabetes worldwide have not been correctly diagnosed. Especially in developing countries with scarce medical resources, medical workers have insufficient knowledge of diabetes and blood glucose testing facilities are limited, so diabetes is often misdiagnosed as malaria, pneumonia or various other diseases. If the diagnosis of diabetes is delayed or missed, the risk of serious complications and death increases. Early risk prediction for people prone to diabetes is therefore of great significance.
At present, most conventional diabetes tests are invasive and require the patient to go to a hospital, fasting, for a series of examinations. This is time-consuming and labor-intensive and often imposes an economic and physical burden on the patient, particularly in poor regions and low- and middle-income countries, where patients are often limited by medical and economic conditions and cannot be screened and treated effectively in time. Because of elevated blood glucose and capillary lesions, diabetic patients often show facial symptoms such as redness, skin infection, pruritus, dryness and pigmentation, where the intensity of the facial redness depends on the degree of congestion of the superficial venous plexus. To explore the intrinsic association between facial images and diabetes in depth, a noninvasive diabetes risk prediction method based on facial images and a residual attention network is therefore proposed.
Disclosure of Invention
The invention aims to provide a noninvasive diabetes risk prediction method that, based on facial images and an attention mechanism, perceives the target information of key facial regions while suppressing other, useless information, greatly improves the fitting speed and generalization ability of the model, and enables rapid, noninvasive and accurate diabetes risk prediction.
In order to achieve the purpose, the technical scheme of the invention is as follows:
a method for noninvasive risk prediction of diabetes, comprising the following steps:
acquiring or constructing a data set containing facial images of diabetic patients and healthy people;
preprocessing the image samples in the data set, locating facial feature points, obtaining several key regions, cropping and stitching them, and labeling them with diabetes diagnosis information to obtain a labeled sample data set;
and building a residual attention network, performing supervised machine learning on the labeled samples, and obtaining a noninvasive diabetes risk prediction model after training and parameter tuning.
The sample images in the data set are frontal facial images of subjects collected with a high-definition camera under the same natural conditions (illumination, angle, expression), forming a self-built data set of facial images of diabetic patients and healthy people. The specific process is as follows:
a large number of diabetic and healthy subjects are recruited, with the following inclusion criteria: all subjects must be between 40 and 90 years of age, with no obvious scars on the facial skin and no makeup on the day the facial images are collected; diabetic subjects must have a definite diabetes diagnosis made at a secondary or higher-level medical institution; healthy subjects must have fasting whole-blood glucose of 3.9-6.1 mmol/L, 1-hour postprandial glucose of 6.7-9.4 mmol/L and 2-hour postprandial glucose of no more than 7.8 mmol/L, or glycated hemoglobin (HbA1c) below 6.5% in a physical examination report within the last three months, with no history of diabetes; all subjects show no statistically significant differences in age, gender, etc.;
in a well-lit room, the subject sits at one end of a table with the head fixed on a forehead-rest bracket, and a high-definition camera is placed at the other end of the table; the height of the forehead-rest bracket is adjusted so that the camera can clearly capture a frontal facial image of the subject, and throughout sample collection the shooting angle, expression and external illumination conditions are kept as consistent as possible.
The labeling uses the pre-trained model "shape_predictor_68_face_landmarks.dat" in the Dlib toolkit to calibrate 68 points, and OpenCV is used for image processing to draw the 68 points on the face; the key regions are located and cropped according to the coordinates of the 68 feature points; the key regions are chosen in scattered positions that avoid the facial organs, including the eyebrows, eyes, nose and mouth, and each key region is rectangular.
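For illustration, a minimal sketch of this landmark step is given below, assuming the standard Dlib frontal face detector together with the pre-trained shape_predictor_68_face_landmarks.dat model; the file path, the upsampling count and the drawing style are illustrative assumptions rather than values taken from the patent.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # path is a placeholder

def detect_landmarks(image_path):
    """Return the image with the 68 points drawn, plus the landmark list (0-indexed here)."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)                      # upsample once to help detect smaller faces
    if not faces:
        raise ValueError("no face detected in " + image_path)
    shape = predictor(gray, faces[0])
    pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
    for x, y in pts:                               # draw the 68 calibrated points for inspection
        cv2.circle(img, (x, y), 2, (0, 255, 0), -1)
    return img, pts
```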
Four key regions are set: the forehead region (A), the left cheek region (B), the right cheek region (C) and the chin region (D). The specific procedure is as follows: first, the facial feature point detection method in the machine learning toolkit Dlib is used to mark the contours of 68 key points on a face sample, and the coordinate of each point is denoted P_i(x, y), i = 1 to 68; the horizontal line through P_9(x, y) is taken as the abscissa axis (x-axis) and the vertical line through P_1(x, y) as the ordinate axis (y-axis). At the same time, 4 square key regions of the same size (64 × 64 pixels) are defined according to the coordinate relations among the feature points and denoted A, B, C and D respectively. To locate the position of each key region in the facial image precisely, the coordinates of its center point must be determined first; the center points of key regions A, B, C and D are denoted P_A(x, y), P_B(x, y), P_C(x, y) and P_D(x, y);
key region A lies near the forehead, above the central axis of the face; the abscissa of P_A(x, y) is taken as the abscissa of the nose-tip feature point 34, denoted P_34(x), and the ordinate is taken as the ordinate of the feature point at the highest point of the eyebrows, denoted P_max-high(y), plus half the key-region side length of 64 pixels, denoted h; P_A(x, y) is calculated as:
P_A(x, y) = (P_34(x), P_max-high(y) + h)    (1)
key regions B and C lie near the left and right cheeks of the face respectively; P_B(x, y) and P_C(x, y) are calculated as:
P_B(x, y) = (P_42(x), P_32(y))    (2)
P_C(x, y) = (P_47(x), P_36(y))    (3)
the abscissa of P_B(x, y) is the abscissa of feature point 42 at the lowest position of the left eye, denoted P_42(x), and its ordinate is the ordinate of feature point 32 at the leftmost side of the nose, denoted P_32(y); the abscissa of P_C(x, y) is the abscissa of feature point 47 at the lowest position of the right eye, denoted P_47(x), and its ordinate is the ordinate of feature point 36 at the rightmost side of the nose, denoted P_36(y);
key region D lies near the central axis below the mouth; the abscissa of P_D(x, y) is the abscissa of feature point 58 at the lowermost end of the mouth, denoted P_58(x), and the ordinate lies halfway along the vertical distance between feature point 58 and feature point 9; P_D(x, y) is calculated as:
P_D(x, y) = (P_58(x), (P_58(y) + P_9(y)) / 2)    (4)
after the coordinates of the center points of the four key regions have been determined, the coordinates of the four vertices of each square key region can be calculated from the center-point coordinates as follows:
P_n,upper-left(x, y) = (P_n(x) - h, P_n(y) + h)    (5)
P_n,lower-left(x, y) = (P_n(x) - h, P_n(y) - h)    (6)
P_n,upper-right(x, y) = (P_n(x) + h, P_n(y) + h)    (7)
P_n,lower-right(x, y) = (P_n(x) + h, P_n(y) - h)    (8)
where n stands for A, B, C or D, and h is half the key-region side length of 64 pixels;
the cropped key regions of each facial image are stitched into a composite facial image (128 × 128 pixels) in the order A, B, C, D, and the stitching order is kept the same for all samples in the data set; a sketch of this cropping and stitching step is given below.
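A minimal sketch of the region extraction described by formulas (1)-(8) follows, under stated assumptions: the landmarks are 0-indexed as returned by Dlib (feature point i corresponds to pts[i-1]); standard image pixel coordinates are used (y grows downward), so the "+ h" above the eyebrows in formula (1) becomes "- H" here; and the 2 × 2 row-major layout of the 128 × 128 composite is assumed, since the patent only specifies the order A, B, C, D.

```python
import cv2
import numpy as np

H = 32  # half of the 64-pixel region size (h in the formulas)

def region_centers(pts):
    """Centers of the four 64x64 key regions: A (forehead), B/C (cheeks), D (chin)."""
    eyebrow_top_y = min(p[1] for p in pts[17:27])      # highest eyebrow landmark (points 18-27)
    p_a = (pts[33][0], eyebrow_top_y - H)              # formula (1): nose-tip x, above the eyebrows
    p_b = (pts[41][0], pts[31][1])                     # formula (2): points 42 and 32
    p_c = (pts[46][0], pts[35][1])                     # formula (3): points 47 and 36
    p_d = (pts[57][0], (pts[57][1] + pts[8][1]) // 2)  # formula (4): midway between points 58 and 9
    return [p_a, p_b, p_c, p_d]

def crop_and_stitch(img, pts):
    """Crop the four 64x64 key regions and stitch them into a 128x128 composite (A,B / C,D)."""
    patches = []
    for cx, cy in region_centers(pts):
        patches.append(img[cy - H:cy + H, cx - H:cx + H])  # formulas (5)-(8): square around the center
    top = np.hstack([patches[0], patches[1]])
    bottom = np.hstack([patches[2], patches[3]])
    return np.vstack([top, bottom])                         # 128 x 128 x 3, in the order A, B, C, D
```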
The residual attention network is a 56-layer residual attention network built with the PyTorch machine learning library; the specific network architecture is as follows:
the labeled sample image is fed into the residual attention network and passes through a first convolutional layer and a max pooling layer (one convolution and one max pooling operation), then alternately through 3 residual units and 3 attention modules, the 3 residual units being denoted the first, second and third residual units, followed by a fourth residual unit and an average pooling operation before reaching a fully connected layer; finally, the normalized exponential function Softmax is applied after the fully connected layer at the end of the residual attention network to perform diabetes risk prediction and output the prediction result;
each attention module is divided into two branches, one called the trunk branch and the other the soft mask branch;
the feature map is first preprocessed by 1 residual unit and then enters the trunk branch and the soft mask branch respectively;
the trunk branch mainly consists of 2 residual units in series;
the soft mask branch comprises a fast feed-forward scan and a top-down feedback step: the feature map undergoes two down-sampling operations to enlarge the receptive field, and after the lowest resolution is reached it is enlarged by the same number of up-sampling operations back to the size of the input feature map to form an attention feature map, which is followed by 2 convolutional layers of size 1 × 1, and finally mixed-domain attention is obtained through a sigmoid activation function;
in addition, skip connections are added between the down-sampling and up-sampling paths to fuse the feature information of feature maps at different scales; the output of the soft mask branch is first multiplied element-wise with the output of the trunk branch, the result is added element-wise to the output of the trunk branch, and the output of the attention module is finally obtained through p residual units;
the residual units in the attention module adopt a bottleneck structure to reduce the number of parameters: the first convolution kernel in the bottleneck structure is 1 × 1 with 64 channels, the second is 3 × 3 with 64 channels, and the third is 1 × 1 with 256 channels; the activation function between the convolutional layers is ReLU, and the output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity block, with size 112 × 112;
the first convolutional layer contains 7 × 7 convolution kernels with a stride of 2 × 2 and 64 channels, the padding mode is set to valid, and the output of this convolutional layer is 112 × 112;
the pooling window of the max pooling layer is 3 × 3 with a stride of 2 × 2, and the feature map output after max pooling has size 56 × 56;
the first residual unit adopts a bottleeck structure to reduce the number of parameters. The size of a first layer convolution kernel in the bottleeck structure is 1 multiplied by 1, and the number of channels is 64; the size of the second layer of convolution kernel is 3 multiplied by 3, and the number of channels is 64; the size of the convolution kernel of the third layer is 1 multiplied by 1, and the number of channels is 256; the activation function between convolutional layers is set to relu. The output of the bottompiece structure is the sum of the output of the third convolutional layer and the output of the identyblock, and the size is 56 × 56.
The first attention module follows the first residual unit, and its output size is 56 × 56;
a second residual unit follows the first attention module, again using a bottleneck structure with 3 convolutional layers to reduce the number of parameters: the first convolution kernel is 1 × 1 with 128 channels, the second is 3 × 3 with 128 channels, and the third is 1 × 1 with 512 channels; the activation function between the convolutional layers is ReLU; the output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity block, with size 28 × 28;
the second attention module follows the second residual unit, and its output size is 28 × 28;
a third residual unit follows the second attention module, again using a bottleneck structure with 3 convolutional layers to reduce the number of parameters: the first convolution kernel is 1 × 1 with 256 channels, the second is 3 × 3 with 256 channels, and the third is 1 × 1 with 1024 channels; the activation function between the convolutional layers is ReLU; the output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity block, with size 14 × 14;
the third attention module follows the third residual unit, and its output size is 14 × 14;
a fourth residual unit follows the third attention module, consisting of 3 bottleneck structures in series, each with 3 convolutional layers to reduce the number of parameters: in each bottleneck the first convolution kernel is 1 × 1 with 512 channels, the second is 3 × 3 with 512 channels, and the third is 1 × 1 with 2048 channels; the activation function between the convolutional layers is ReLU, and the output size of the fourth residual unit is 7 × 7;
the feature map output by the fourth residual unit undergoes an average pooling operation with a pooling window of 7 × 7 and a stride of 1 × 1, giving a feature map of size 1 × 1;
finally, the normalized exponential function Softmax is applied after the fully connected layer at the end of the residual attention network to predict the diabetes risk.
The invention has the beneficial effects that:
the invention adopts a residual error attention network to construct a diabetes noninvasive risk prediction model, and the network is constructed by stacking a plurality of attention modules through combining an end-to-end training mode and a feedforward network architecture. These modules generate attention awareness functionality. The visual attention mechanism is a brain signal processing mechanism unique to human vision. The target area needing attention is obtained by rapidly scanning the global image, then the target information needing attention is obtained in a focused mode, and other useless information is suppressed. The attention mechanism greatly improves the efficiency and the accuracy of machine vision information processing. Compared with a traditional residual error network model, the residual error attention mechanism can achieve fine-grained feature matching, meanwhile, the influence of a target area is enhanced, the influence of a non-target area is inhibited, and the fitting speed and the generalization capability of the model are favorably improved; in addition, the invention adopts a non-invasive testing method, namely, the frontal facial image of the testee is analyzed through a model, and the future diabetes onset risk of the testee is predicted. The method has the advantages of short time consumption and low cost, supports large-scale screening and remote diagnosis and treatment, is beneficial to quickly screening the diabetic patients in the high-morbidity crowd, reminds the diabetic patients to control the blood sugar as early as possible, and avoids the occurrence of diabetic complications.
The method is used for diabetes risk prediction, distinguishing whether a subject has diabetes from facial images. By taking processed key facial regions as input samples, the residual attention network gains the ability to perceive key features on top of a conventional residual network; compared with a conventional residual network model, this network achieves fine-grained feature matching while strengthening the influence of the target region and suppressing that of non-target regions, improving the fitting speed and generalization ability of the model. The method supports large-scale screening and remote diagnosis and treatment, is fast and low-cost, and can be operated by non-professionals.
A self-built diabetes face database is used as the basic experimental sample (the applicant is a specialized diabetes hospital with abundant diabetic patients and cases; complete facial sample data sets of diabetic patients are currently rare worldwide). With this self-built data set of facial samples of diabetic patients and healthy people, diabetes risk prediction is performed with a residual attention neural network model. In the published literature on noninvasive diabetes risk prediction there is no method that uses facial images and a residual attention network; this is the first time supervised machine learning has been performed on facial images of diabetic patients with a residual attention network model, carrying out diabetes risk prediction by deep learning.
Drawings
FIG. 1 is a flow chart of the noninvasive diabetes risk prediction method based on facial images and a residual attention network according to the present invention;
fig. 2 is a schematic diagram of the sample collection in step S1 according to the present invention.
Fig. 3 is a schematic diagram of the step S2 of locating, clipping and splicing the sample feature points.
FIG. 4 is a block diagram of an embodiment of an attention module according to the present invention.
FIG. 5 is a block diagram of a residual attention network embodiment of the present invention with a depth of 56.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely with reference to the accompanying drawings in the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict. The invention is further described with reference to the following drawings and specific examples, which are not intended to be limiting.
The invention provides a noninvasive diabetes risk prediction method based on facial images and a residual attention network, which belongs to the field of intelligent medical treatment and, as shown in figure 1, mainly comprises the following steps:
In step S1, subjects are recruited to construct a data set containing facial images of diabetic patients and healthy people.
The image in the data set in step S1 is a frontal facial image of the subject captured by the high-definition camera under the same natural environment conditions (illumination, angle, expression).
In step S2, the image samples are preprocessed, the facial feature points are located, the key regions are cropped and stitched, and the diabetes diagnosis information is labeled, giving a labeled sample data set.
In step S2, the Dlib machine learning library is first used to locate the facial feature points, 4 rectangular key regions are cropped in turn according to the feature point locations and then stitched in order into a complete rectangular image, and finally each stitched sample is labeled with the corresponding diagnostic information. The 68-point calibration uses the pre-trained model "shape_predictor_68_face_landmarks.dat" in the Dlib toolkit, and OpenCV is used for image processing to draw the 68 points on the face. Locating and cropping the key regions from the coordinates of the 68 feature points is conventional in machine vision.
In step S3, the labeled sample data set is randomly divided into a training set, a validation set and a test set. The random division shuffles the sample data set and then splits it into K mutually exclusive subsets of similar size; each time, the union of K-1 subsets is used as the training set and the remaining subset as the test set, with 50% of the test-set samples further set aside as the validation set.
In step S4, a residual attention network is built and supervised machine learning is performed on the samples. A 56-layer residual attention network is built with the PyTorch machine learning library, and the samples are classified by applying the normalized exponential function Softmax after the fully connected layer at the end of the residual attention network.
In step S5, model parameters are tuned according to performance on the validation set, the generalization ability of the model is evaluated from its discrimination on the test set, and a well-performing noninvasive diabetes risk prediction model is obtained by cross-validation; when judging the model's effect, the average of the K test results is taken.
In this embodiment, the data set constructed in step S1 may be a self-built facial image sample data set of diabetic patients, named TMU-DFD (Tianjin Medical University - Diabetes Face Dataset). The 384 recruited diabetic subjects are outpatients and inpatients of Zhu Xianyi Memorial Hospital of Tianjin Medical University; the 137 healthy subjects are hospital staff, family members, graduate students and social volunteers. In this embodiment, 2 to 3 clear frontal facial images of each subject are collected with a high-definition camera under the same natural conditions (the same illumination and angle); step S1 yields 966 facial image samples of diabetic patients and 411 of healthy people. Most of the subjects are Chinese, so a noninvasive diabetes risk prediction model trained on this sample data set is more targeted and well suited to the Chinese population.
In this embodiment, to improve recruitment efficiency and sample quality, the inclusion/exclusion criteria are set strictly. Specifically, all subjects must be between 40 and 90 years of age, with no obvious scars on the facial skin and no makeup on the day the facial images are collected. Diabetic subjects must have a definite diabetes diagnosis made at a secondary or higher-level medical institution; healthy subjects must have fasting whole-blood glucose of 3.9-6.1 mmol/L, 1-hour postprandial glucose of 6.7-9.4 mmol/L and 2-hour postprandial glucose of no more than 7.8 mmol/L, or glycated hemoglobin (HbA1c) below 6.5% in a physical examination report within the last three months, with no history of diabetes. All subjects show no statistically significant differences in age, gender, etc.
Specifically, a schematic diagram of sample image acquisition is shown in fig. 2. In a well lit room, the subject sits on one end of the table and fixes his head to the forehead rest support. A high-definition camera is placed at the other end of the desk, and the height of the forehead support bracket is adjusted to ensure that the camera can clearly shoot facial images of the front face of a subject. In the whole sample collection process, conditions such as the photographing angle, the expression and the external illumination of the subject are ensured to be consistent as much as possible.
Because of elevated blood glucose and capillary lesions, diabetic patients commonly show facial symptoms such as redness and swelling, skin infection, pruritus, dryness and pigmentation, where the intensity of the facial redness depends on the degree of congestion of the superficial venous plexus. The present invention therefore focuses on the facial skin. At the same time, to avoid interference from facial organs such as the eyebrows, eyes, nose and mouth, 4 key regions of the facial image are extracted for the experiment: the forehead region (A), the left cheek region (B), the right cheek region (C) and the chin region (D). The specific procedure is shown in fig. 3: first, the face landmark detection method in the machine learning toolkit Dlib is used to mark the contours of 68 key points on a face sample, and the coordinate of each point is denoted P_i(x, y). The horizontal line through P_9(x, y) is taken as the abscissa axis (x-axis) and the vertical line through P_1(x, y) as the ordinate axis (y-axis). At the same time, 4 square key regions of the same size (64 × 64 pixels) are defined according to the coordinate relations among the feature points and denoted A, B, C and D respectively. To locate the position of each key region in the facial image precisely, the coordinates of its center point must be determined first; the center points of key regions A, B, C and D are denoted P_A(x, y), P_B(x, y), P_C(x, y) and P_D(x, y).
Key region A lies near the forehead, above the central axis of the face; the abscissa of P_A(x, y) is taken as the abscissa of the nose-tip feature point 34, denoted P_34(x), and the ordinate is taken as the ordinate of the feature point at the highest point of the eyebrows (generally P_20(y) or P_25(y)), denoted P_max-high(y), plus half the key-region side length of 64 pixels, denoted h. P_A(x, y) is calculated as:
P_A(x, y) = (P_34(x), P_max-high(y) + h)    (1)
key regions B and C lie near the left and right cheeks of the face respectively; P_B(x, y) and P_C(x, y) are calculated as:
P_B(x, y) = (P_42(x), P_32(y))    (2)
P_C(x, y) = (P_47(x), P_36(y))    (3)
the abscissa of P_B(x, y) is the abscissa of feature point 42 at the lowest position of the left eye, denoted P_42(x), and its ordinate is the ordinate of feature point 32 at the leftmost side of the nose, denoted P_32(y); the abscissa of P_C(x, y) is the abscissa of feature point 47 at the lowest position of the right eye, denoted P_47(x), and its ordinate is the ordinate of feature point 36 at the rightmost side of the nose, denoted P_36(y).
Key region D lies near the central axis below the mouth; the abscissa of P_D(x, y) is the abscissa of feature point 58 at the lowermost end of the mouth, denoted P_58(x), and the ordinate lies halfway along the vertical distance between feature points 58 and 9. P_D(x, y) is calculated as:
P_D(x, y) = (P_58(x), (P_58(y) + P_9(y)) / 2)    (4)
After the coordinates of the center points of the four key regions have been determined, the coordinates of the four vertices of each square key region can be calculated from the center-point coordinates as follows:
P_n,upper-left(x, y) = (P_n(x) - h, P_n(y) + h)    (5)
P_n,lower-left(x, y) = (P_n(x) - h, P_n(y) - h)    (6)
P_n,upper-right(x, y) = (P_n(x) + h, P_n(y) + h)    (7)
P_n,lower-right(x, y) = (P_n(x) + h, P_n(y) - h)    (8)
where n takes the values A, B, C and D, and h is half the key-region side length of 64 pixels. The cropped key regions of each sample are stitched into a composite facial image (128 × 128 pixels) in the order A, B, C, D, the stitching order is kept the same for all samples in the data set, and the positions of the four selected key regions vary with the shape of each face.
The invention uses a supervised machine learning algorithm for noninvasive diabetes detection, so the data samples need to be labeled with diagnoses. Specifically, for in-hospital subjects the diagnostic information is associated with the image samples by querying hospital systems such as HIS, EMR, LIS and physical examination systems. Samples from subjects recruited outside the hospital are labeled according to the diagnostic materials from medical institutions (physical examination reports, medical records, etc.) provided by the subjects.
To obtain a noninvasive diabetes risk prediction model with good classification performance and generalization ability, the sample data set labeled in step S3 is randomly divided into a training set, a validation set and a test set. Specifically, the random division shuffles the data set and then splits it into 5 mutually exclusive subsets of similar size; each time, the union of 4 subsets is used as the training set and the remaining subset as the test set. Within the test set, 50% of the samples are randomly set aside as the validation set, and the test, validation and training sets all contain the same proportion of diabetic and healthy samples. A sketch of this splitting scheme is given below.
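The following sketch illustrates the split; the use of scikit-learn's StratifiedKFold and train_test_split is a convenience assumption, since the patent only requires shuffled, mutually exclusive subsets with matched class proportions.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

def five_fold_splits(labels, seed=0):
    """Yield (train_idx, val_idx, test_idx) for each of the 5 folds."""
    labels = np.asarray(labels)
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    for train_idx, heldout_idx in skf.split(np.zeros(len(labels)), labels):
        # half of the held-out fold becomes the validation set, the rest the test set
        test_idx, val_idx = train_test_split(
            heldout_idx, test_size=0.5, stratify=labels[heldout_idx], random_state=seed)
        yield train_idx, val_idx, test_idx
```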
Preferably, this embodiment uses the PyTorch machine learning library to build a residual attention network with a depth of 56, denoted Attention-56. The network is constructed by stacking multiple attention modules, combining an end-to-end training mode with a feed-forward network architecture.
These attention modules generate attention-aware features. The visual attention mechanism is a brain signal processing mechanism unique to human vision: human vision obtains the region requiring attention by quickly scanning the global image, then acquires the target information in that region with focus and suppresses other useless information. The attention mechanism greatly improves the efficiency and accuracy of machine vision information processing. Compared with a conventional residual network model, the residual attention mechanism achieves fine-grained feature matching while strengthening the influence of the target region and suppressing that of non-target regions, which improves the fitting speed and generalization ability of the model.
A structural block diagram of the attention module is shown in fig. 4. The stacked structure is a basic application of the hybrid attention mechanism, which combines spatial information in the spatial domain with channel information in the channel domain. Each attention module can be divided into two branches: one, called the trunk branch, is the basic structure of the residual network; the other is the soft mask branch, whose main part is the residual attention learning mechanism. The principle of the soft mask is that key features in the image data are identified through an additional layer of new weights; through training, the deep neural network learns which region of each new image deserves attention, so that attention is directed there. In essence, learning is expected to yield a set of weights that can be applied to the feature map.
In fig. 4, the parameter p denotes the number of preprocessing residual units before the trunk branch and the soft mask branch, and t denotes the number of residual units in the trunk branch. The relation between t and p is generally set as in formula (1):
t=2*p (1)
Specifically, in this embodiment p = 1 and t = 2, and the feature map is denoted x. The feature map is first preprocessed by 1 residual unit and then enters the trunk branch and the soft mask branch respectively. The residual unit adopts a bottleneck structure to reduce the number of parameters: the first convolution kernel in the bottleneck structure is 1 × 1 with 64 channels, the second is 3 × 3 with 64 channels, and the third is 1 × 1 with 256 channels; the activation function between the convolutional layers is ReLU. The output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity block, with size 112 × 112.
The trunk branch mainly consists of 2 residual units in series, each with the same structure and parameters as the preprocessing residual unit, and its output is denoted T(x). The soft mask branch comprises a fast feed-forward scan and a top-down feedback step: the receptive field of the feature map is enlarged by two down-sampling operations, and after the lowest resolution is reached the feature map is enlarged by the same number of up-sampling operations back to the size of the input feature map to form an attention feature map, which is followed by 2 convolutional layers of size 1 × 1; finally, mixed-domain attention (combining spatial information in the spatial domain with channel information in the channel domain) is obtained through a sigmoid activation function, where the sigmoid function is given by formula (2):
sigmoid(x) = 1 / (1 + e^(-x))    (2)
In addition, skip connections are added between the down-sampling and up-sampling paths to fuse the feature information of feature maps at different scales. The output of the soft mask branch is denoted M(x); M(x) is first multiplied element-wise (element-wise product) with the trunk-branch output T(x), the result is added element-wise (element-wise sum) to T(x) as in formula (3), and the output of the attention module is obtained through p residual units:
H(x) = (1 + M(x)) * T(x)    (3)
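A PyTorch sketch of such an attention module follows, reusing the Bottleneck class from the earlier sketch; the exact layout of the soft mask branch (pooling sizes, interpolation mode, where the skip connection is added) is a simplified assumption, while the output follows formula (3).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionModule(nn.Module):
    """Residual attention module with output H(x) = (1 + M(x)) * T(x), as in formula (3).
    Relies on the Bottleneck sketch above; mask-branch details are simplified assumptions."""
    def __init__(self, channels):
        super().__init__()
        self.pre = Bottleneck(channels, channels // 4, channels)        # p = 1 preprocessing unit
        self.trunk = nn.Sequential(                                     # t = 2 trunk units
            Bottleneck(channels, channels // 4, channels),
            Bottleneck(channels, channels // 4, channels),
        )
        self.down1 = Bottleneck(channels, channels // 4, channels)      # after 1st down-sampling
        self.down2 = Bottleneck(channels, channels // 4, channels)      # after 2nd down-sampling
        self.up1 = Bottleneck(channels, channels // 4, channels)
        self.mask_out = nn.Sequential(                                  # two 1x1 convs + sigmoid
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.post = Bottleneck(channels, channels // 4, channels)       # p = 1 output unit

    def forward(self, x):
        x = self.pre(x)
        t = self.trunk(x)                                               # T(x)
        # soft mask branch: two down-samplings, up-sampling back, with a skip connection
        d1 = self.down1(F.max_pool2d(x, 3, stride=2, padding=1))
        d2 = self.down2(F.max_pool2d(d1, 3, stride=2, padding=1))
        u1 = F.interpolate(d2, size=d1.shape[2:], mode="bilinear", align_corners=False)
        u1 = self.up1(u1 + d1)                                          # skip connection between scales
        u2 = F.interpolate(u1, size=t.shape[2:], mode="bilinear", align_corners=False)
        m = self.mask_out(u2)                                           # M(x) in [0, 1]
        return self.post((1 + m) * t)                                   # H(x) = (1 + M(x)) * T(x)
```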
Preferably, a block diagram of a specific embodiment of the residual attention network is shown in fig. 5. A sample image is fed into the residual attention network, undergoes one convolution and one max pooling operation, then passes alternately through 3 residual units and 3 attention modules followed by a fourth residual unit, reaches a fully connected layer after an average pooling operation, and finally the normalized exponential function Softmax is applied after the fully connected layer at the end of the residual attention network to perform diabetes risk prediction and output the prediction result.
Specifically, the first convolutional layer contains 7 × 7 convolution kernels with a stride of 2 × 2 and 64 channels, the padding mode is set to valid, and the output of this convolutional layer is 112 × 112.
The pooling window of the max pooling layer is 3 × 3 with a stride of 2 × 2, and the feature map output after max pooling has size 56 × 56.
The first residual unit adopts a bottleneck structure to reduce the number of parameters: the first convolution kernel in the bottleneck structure is 1 × 1 with 64 channels, the second is 3 × 3 with 64 channels, and the third is 1 × 1 with 256 channels; the activation function between the convolutional layers is ReLU. The output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity block, with size 56 × 56.
The first attention module follows the first residual unit, and its output size is 56 × 56.
A second residual unit follows the first attention module, again using a bottleneck structure with 3 convolutional layers to reduce the number of parameters: the first convolution kernel is 1 × 1 with 128 channels, the second is 3 × 3 with 128 channels, and the third is 1 × 1 with 512 channels; the activation function between the convolutional layers is ReLU. The output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity block, with size 28 × 28.
The second attention module follows the second residual unit, and its output size is 28 × 28.
A third residual unit follows the second attention module, again using a bottleneck structure with 3 convolutional layers to reduce the number of parameters: the first convolution kernel is 1 × 1 with 256 channels, the second is 3 × 3 with 256 channels, and the third is 1 × 1 with 1024 channels; the activation function between the convolutional layers is ReLU. The output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity block, with size 14 × 14.
The third attention module follows the third residual unit, and its output size is 14 × 14.
A fourth residual unit follows the third attention module, consisting of 3 bottleneck structures in series, each with 3 convolutional layers to reduce the number of parameters: in each bottleneck the first convolution kernel is 1 × 1 with 512 channels, the second is 3 × 3 with 512 channels, and the third is 1 × 1 with 2048 channels; the activation function between the convolutional layers is ReLU. The output size of the fourth residual unit is 7 × 7.
The feature map output by the fourth residual unit undergoes an average pooling operation with a pooling window of 7 × 7 and a stride of 1 × 1, giving a feature map of size 1 × 1.
Finally, this embodiment applies the normalized exponential function Softmax after the fully connected layer at the end of the residual attention network to predict the diabetes risk; a sketch assembling the whole network is given below.
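The following sketch assembles an Attention-56 classifier from the Bottleneck and AttentionModule sketches above; the stride and padding choices, and the assumption that the quoted feature-map sizes (112, 56, 28, 14, 7) correspond to a 224 × 224 input, are not stated in the patent and are illustrative only.

```python
import torch
import torch.nn as nn

class Attention56(nn.Module):
    """Sketch of the 56-layer residual attention classifier described above, built from the
    Bottleneck and AttentionModule sketches; unstated stride/padding details are assumptions."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),   # -> 112x112
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1),                   # -> 56x56
        )
        self.stage1 = nn.Sequential(Bottleneck(64, 64, 256), AttentionModule(256))                # 56x56
        self.stage2 = nn.Sequential(Bottleneck(256, 128, 512, stride=2), AttentionModule(512))    # 28x28
        self.stage3 = nn.Sequential(Bottleneck(512, 256, 1024, stride=2), AttentionModule(1024))  # 14x14
        self.stage4 = nn.Sequential(                                             # three stacked bottlenecks -> 7x7
            Bottleneck(1024, 512, 2048, stride=2),
            Bottleneck(2048, 512, 2048),
            Bottleneck(2048, 512, 2048),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2048, num_classes))

    def forward(self, x):
        x = self.stem(x)
        x = self.stage4(self.stage3(self.stage2(self.stage1(x))))
        return self.head(x)   # Softmax is left to the loss (e.g. CrossEntropyLoss) during training

# usage example (assumed 224x224 input, e.g. the 128x128 composite resized):
# logits = Attention56(num_classes=2)(torch.randn(1, 3, 224, 224))
```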
Specifically, step S5 uses 5-fold cross-validation to judge the model's effect: each time a different subset is used as the test set and the remaining 4 subsets as the training set, and the experiment is repeated 5 times. Model parameters are tuned according to performance on the validation set, the generalization ability of the model is evaluated from its discrimination on the test set, and the average of the 5 test results is taken as the evaluation index of the final model, yielding a well-performing risk prediction model. A sketch of this cross-validation procedure is given below.
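A sketch of this 5-fold cross-validation loop follows, reusing the five_fold_splits and Attention56 sketches above; the optimizer, learning rate, batch size and epoch count are illustrative assumptions, as the patent does not specify training hyperparameters, and the dataset is assumed to yield (image tensor, label) pairs.

```python
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Subset

def evaluate(model, loader, device):
    """Classification accuracy of the model on a data loader."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for imgs, targets in loader:
            pred = model(imgs.to(device)).argmax(dim=1).cpu()
            correct += (pred == targets).sum().item()
            total += len(targets)
    return correct / total

def cross_validate(dataset, labels, device="cpu"):
    """Train on 4 folds, hold out validation/test halves of the 5th fold, average the 5 test scores."""
    test_accs = []
    for train_idx, val_idx, test_idx in five_fold_splits(labels):
        model = Attention56(num_classes=2).to(device)
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        loss_fn = nn.CrossEntropyLoss()
        train_loader = DataLoader(Subset(dataset, train_idx), batch_size=16, shuffle=True)
        for epoch in range(30):        # validation-set accuracy would drive tuning / early stopping
            model.train()
            for imgs, targets in train_loader:
                opt.zero_grad()
                loss = loss_fn(model(imgs.to(device)), targets.to(device))
                loss.backward()
                opt.step()
        test_accs.append(evaluate(model, DataLoader(Subset(dataset, test_idx), batch_size=16), device))
    return float(np.mean(test_accs))   # mean of the 5 test results, as in step S5
```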
The invention adopts a noninvasive diabetes risk prediction model based on a residual attention network, formed by stacking multiple attention modules and combining an end-to-end training mode with a recent feed-forward network architecture. Compared with a conventional residual network model, the residual attention network achieves fine-grained feature matching while strengthening the influence of the target region and suppressing that of non-target regions, which improves the fitting speed and generalization ability of the model. In addition, the invention adopts a noninvasive test: the frontal facial image of a subject is analyzed by the noninvasive diabetes risk prediction model to predict the subject's future risk of developing diabetes. The method is fast and low-cost, supports large-scale screening and remote diagnosis and treatment, helps to quickly screen diabetic patients among high-incidence populations, and reminds them to control blood glucose as early as possible so as to avoid diabetic complications.
Although only the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art, and all changes are encompassed in the scope of the present invention.
Content not described in detail in this specification belongs to the prior art known to those skilled in the art.

Claims (6)

1. A method for noninvasive risk prediction of diabetes, comprising the following steps:
acquiring or constructing a data set containing facial images of diabetic patients and healthy people;
preprocessing the image samples in the data set, locating facial feature points, obtaining several key regions, cropping and stitching them, and labeling them with diabetes diagnosis information to obtain a labeled sample data set;
and building a residual attention network, performing supervised machine learning on the labeled samples, and obtaining a noninvasive diabetes risk prediction model after training and parameter tuning.
2. The method for noninvasive risk prediction of diabetes according to claim 1, wherein the sample images in the data set are frontal facial images of subjects collected with a high-definition camera under the same natural conditions (illumination, angle, expression), forming a self-built data set of facial images of diabetic patients and healthy people, the specific process being as follows:
a large number of diabetic and healthy subjects are recruited, with the following inclusion criteria: all subjects must be between 40 and 90 years of age, with no obvious scars on the facial skin and no makeup on the day the facial images are collected; diabetic subjects must have a definite diabetes diagnosis made at a secondary or higher-level medical institution; healthy subjects must have fasting whole-blood glucose of 3.9-6.1 mmol/L, 1-hour postprandial glucose of 6.7-9.4 mmol/L and 2-hour postprandial glucose of no more than 7.8 mmol/L, or glycated hemoglobin (HbA1c) below 6.5% in a physical examination report within the last three months, with no history of diabetes; all subjects show no statistically significant differences in age, gender, etc.;
in a well-lit room, the subject sits at one end of a table with the head fixed on a forehead-rest bracket, and a high-definition camera is placed at the other end of the table; the height of the forehead-rest bracket is adjusted so that the camera can clearly capture a frontal facial image of the subject, and throughout sample collection the shooting angle, expression and external illumination conditions are kept as consistent as possible.
3. The method for noninvasive risk prediction of diabetes according to claim 1, wherein the labeling uses the pre-trained model "shape_predictor_68_face_landmarks.dat" in the Dlib toolkit to calibrate 68 points, and OpenCV is used for image processing to draw the 68 points on the face; the key regions are located and cropped according to the coordinates of the 68 feature points; the key regions are chosen in scattered positions that avoid the facial organs, including the eyebrows, eyes, nose and mouth, and each key region is rectangular.
4. The method for noninvasive risk prediction of diabetes according to claim 3, characterized in that four key areas are set, namely forehead area (A), left cheek area (B), right cheek area (C) and chin area (D); the specific operation mode is as follows: firstly, a face characteristic point detection method in a machine learning toolkit Dlib is adopted to label the outlines of 68 key point positions on a face sample, and the coordinate of each point is marked as Pi(x, y), i is 1 to 68; with P9The horizontal axis of (x, y) is the axis of abscissa and is denoted as the x-axis, P1The vertical axis of (x, y) is marked as y axis as ordinate axis, and simultaneously, 4 square key areas (64 x 64 pixels) with the same size are defined according to the coordinate relation between the characteristic points and are respectively marked as A, B, C, D; if the position of the key region in the face image is to be accurately located, the coordinates of the center point of the key region must be located first, and the coordinates of the center point of the key region A, B, C, D are recorded as PA(x,y)、PB(x,y)、PC(x,y)、PD(x,y);
Key area A is near forehead area above central axis of human face, PAThe abscissa of (x, y) is taken as the abscissa of the nose tip feature point 34 and is denoted as P34(x) The vertical coordinate of the feature point corresponding to the highest point of the eyebrow is taken as Pmax-high(y) adding half of the corresponding length of the critical area pixel value 64 and recording as h, PAFormula for calculating (x, y)Comprises the following steps:
PA(x,y)=(P34(x),Pmax-high(y)+h) (1)
key regions B, C are near the left and right cheeks, respectively, of the face, PB(x,y)、PCThe formula for the calculation of (x, y) is:
PB(x,y)=(P42(x),P32(y)) (2)
PC(x,y)=(P47(x),P36(y)) (3)
PBthe abscissa and ordinate of (x, y) are the abscissa of the feature point 42 at the lowest position in the left eye, and are denoted as P42(x) Longitudinal coordinate P of feature point 32 on the leftmost side of the nose32(y);PCThe abscissa and ordinate of (x, y) are respectively the abscissa of the feature point 47 at the lowest position of the right eye and are denoted as P47(x) The ordinate of the rightmost feature point 36 of the nose is denoted as P36(y);
Key area D lies near the central axis of the face, below the mouth; the abscissa of PD(x, y) is the abscissa of feature point 58 at the lowermost end of the mouth, denoted P58(x), and the ordinate lies halfway along the vertical distance between feature point 58 and feature point 9; the specific formula for calculating PD(x, y) is:
PD(x, y) = (P58(x), (P58(y) + P9(y)) / 2)   (4)
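A minimal Python sketch of formulas (1)-(4) is given below; it assumes the 68 landmarks are available as a list of (x, y) tuples (for example from the Dlib sketch under claim 3) with point k of the claim at list index k - 1, and that the eyebrow points are numbers 18-27. The claim defines its own axes through P9 and P1, whereas image coordinates grow downward, so the vertical offset of formula (1) is applied literally here and may need its sign flipped under a particular coordinate convention.

```python
H = 32  # half of the 64-pixel key-area side length (h in the claim)

def key_area_centres(points):
    """Return the centre points of key areas A (forehead), B/C (cheeks) and D (chin)."""
    p = lambda k: points[k - 1]                  # claim numbers points 1-68; lists are 0-based
    eyebrows = [p(k) for k in range(18, 28)]     # assumed eyebrow points 18-27
    max_high_y = min(y for _, y in eyebrows)     # highest eyebrow point (smallest image y)

    pa = (p(34)[0], max_high_y + H)              # formula (1): nose-tip abscissa, eyebrow top + h
    pb = (p(42)[0], p(32)[1])                    # formula (2): left eye bottom, nose left side
    pc = (p(47)[0], p(36)[1])                    # formula (3): right eye bottom, nose right side
    pd = (p(58)[0], (p(58)[1] + p(9)[1]) // 2)   # formula (4): midway between points 58 and 9
    return pa, pb, pc, pd
```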
after the center point coordinates of the four key areas are determined, the coordinates of the four vertices of each square key area can be calculated from the corresponding center point, using the following formulas:
Pn,upper-left(x, y) = (Pn(x) - h, Pn(y) + h)   (5)
Pn,lower-left(x, y) = (Pn(x) - h, Pn(y) - h)   (6)
Pn,upper-right(x, y) = (Pn(x) + h, Pn(y) + h)   (7)
Pn,lower-right(x, y) = (Pn(x) + h, Pn(y) - h)   (8)
Wherein n represents A, B, C or D, and h is half of the 64-pixel side length of the key area;
the key areas cut from each face image are stitched into a facial composite image (128 × 128 pixels) in the order A, B, C, D, and the stitching order is kept the same for all samples in the data set.
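The cropping and stitching could look like the sketch below, which assumes `img` is the face image as a NumPy array and `centres` is the (A, B, C, D) tuple from the previous sketch; the 2 × 2 tiling (A and B on top, C and D below) is an assumption, since the claim only fixes the order A, B, C, D and the 128 × 128 output size.

```python
import numpy as np

def crop_and_stitch(img, centres, h=32):
    """Cut four 64 x 64 key areas around the given centres and tile them into 128 x 128."""
    patches = []
    for (cx, cy) in centres:                              # order A, B, C, D
        patches.append(img[cy - h:cy + h, cx - h:cx + h]) # 64 x 64 crop around the centre
    top = np.hstack(patches[:2])                          # A | B
    bottom = np.hstack(patches[2:])                       # C | D
    return np.vstack([top, bottom])                       # 128 x 128 composite face image
```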
5. The method for noninvasive risk prediction of diabetes according to claim 1, wherein the residual attention network is a 56-layer residual attention network built with the PyTorch machine learning library, and the specific network architecture is as follows:
the labeled sample image is input into the residual attention network and first undergoes one convolution and one max-pooling operation in the first convolutional layer and the max-pooling layer; it then passes through 3 residual units, denoted the first, second and third residual units, with 3 attention modules interleaved between them; after an average-pooling operation the features reach a fully connected layer, and finally the fully connected layer at the end of the residual attention network uses the normalized exponential function Softmax to perform the diabetes risk prediction and output the prediction result;
each attention module is divided into two branches, one called the main branch and the other the soft mask branch;
the feature map is first preprocessed by 1 residual unit and then enters the main branch and the soft mask branch separately;
the main branch consists of 2 residual units connected in series;
the soft mask branch consists of a fast feed-forward sweep and a top-down feedback step: the feature map undergoes two down-sampling operations to enlarge the receptive field, and after the lowest resolution is reached, the same number of up-sampling operations enlarge the feature map back to the size of the original input feature map, forming an attention feature map; this is followed by 2 convolutional layers with 1 × 1 kernels, and finally a sigmoid activation function yields the mixed-domain attention;
in addition, skip connections are added between the down-sampling and up-sampling steps to fuse feature information from feature maps of different scales; the output of the soft mask branch is first multiplied element-wise with the output of the main branch, the result is then added element-wise to the output of the main branch, and the output of the attention module is finally obtained after passing through p residual units;
the residual units inside the attention module adopt a bottleneck structure to reduce the number of parameters: the first convolution kernel in the bottleneck is 1 × 1 with 64 channels, the second is 3 × 3 with 64 channels, and the third is 1 × 1 with 256 channels; the activation function between the convolutional layers is set to ReLU, and the output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity branch, with a size of 112 × 112;
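As an illustration only, the following PyTorch sketch gives one possible reading of the bottleneck residual unit and the attention module described above; it is not the patented implementation. The batch-normalization layers, the 1 × 1 projection used on the shortcut when the channel count or spatial size changes, the use of max pooling for down-sampling and bilinear interpolation for up-sampling in the two-level soft mask branch, and the hyperparameters p (residual units after the branch merge) and t (residual units in the main branch) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualUnit(nn.Module):
    """1x1 -> 3x3 -> 1x1 bottleneck; the shortcut is projected when shapes change."""
    def __init__(self, in_ch, mid_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False), nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch))
        self.shortcut = (nn.Identity() if in_ch == out_ch and stride == 1
                         else nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))   # third conv output + identity branch

class AttentionModule(nn.Module):
    """Trunk branch of t residual units; soft mask branch with 2 down-/up-sampling steps."""
    def __init__(self, ch, p=1, t=2):
        super().__init__()
        unit = lambda: ResidualUnit(ch, ch // 4, ch)
        self.pre = unit()                                    # shared pre-processing residual unit
        self.trunk = nn.Sequential(*[unit() for _ in range(t)])
        self.down1 = nn.Sequential(nn.MaxPool2d(3, 2, 1), unit())
        self.down2 = nn.Sequential(nn.MaxPool2d(3, 2, 1), unit())
        self.up = unit()
        self.mask_out = nn.Sequential(
            nn.Conv2d(ch, ch, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 1), nn.Sigmoid())              # two 1x1 convs + sigmoid
        self.post = nn.Sequential(*[unit() for _ in range(p)])

    def forward(self, x):
        x = self.pre(x)
        t = self.trunk(x)                                    # main (trunk) branch
        d1 = self.down1(x)                                   # first down-sampling
        d2 = self.down2(d1)                                  # second down-sampling (lowest resolution)
        u1 = F.interpolate(d2, size=d1.shape[2:], mode="bilinear",
                           align_corners=False) + d1         # up-sampling + skip connection
        u2 = F.interpolate(self.up(u1), size=t.shape[2:],
                           mode="bilinear", align_corners=False)
        m = self.mask_out(u2)                                # mixed-domain attention map
        return self.post(m * t + t)                          # mask x trunk, then + trunk, then p units
```

With ch = 256 the three convolutions of each ResidualUnit have 64, 64 and 256 channels, matching the sizes listed in the claim.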
the first convolutional layer contains convolution kernels of size 7 × 7 with a stride of 2 × 2 and 64 channels, the padding mode is set to valid, and the output of this convolutional layer is 112 × 112;
the pooling window of the max-pooling layer is 3 × 3 with a stride of 2 × 2, and the feature map output after the max-pooling operation is 56 × 56;
the first residual unit adopts a bottleneck structure to reduce the number of parameters: the first convolution kernel in the bottleneck is 1 × 1 with 64 channels, the second is 3 × 3 with 64 channels, and the third is 1 × 1 with 256 channels; the activation function between the convolutional layers is set to ReLU, and the output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity branch, with a size of 56 × 56.
The first attention module follows the first residual unit, and its output size is 56 × 56;
a second residual unit follows the first attention module and also adopts a bottleneck structure with 3 convolutional layers to reduce the number of parameters: the first convolution kernel is 1 × 1 with 128 channels, the second is 3 × 3 with 128 channels, and the third is 1 × 1 with 512 channels; the activation function between the convolutional layers is set to ReLU, and the output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity branch, with a size of 28 × 28;
the second attention module follows the second residual unit, and its output size is 28 × 28;
a third residual unit follows the second attention module and also adopts a bottleneck structure with 3 convolutional layers to reduce the number of parameters: the first convolution kernel is 1 × 1 with 256 channels, the second is 3 × 3 with 256 channels, and the third is 1 × 1 with 1024 channels; the activation function between the convolutional layers is set to ReLU, and the output of the bottleneck structure is the sum of the output of the third convolutional layer and the output of the identity branch, with a size of 14 × 14;
and the third attention module follows the third residual unit, and its output size is 14 × 14;
a fourth residual unit follows the third attention module and consists of 3 serially connected bottleneck structures, each with 3 convolutional layers to reduce the number of parameters: in each bottleneck, the first convolution kernel is 1 × 1 with 512 channels, the second is 3 × 3 with 512 channels, and the third is 1 × 1 with 2048 channels; the activation function between the convolutional layers is set to ReLU, and the output size of the fourth residual unit is 7 × 7;
an average-pooling operation is applied to the feature map output by the fourth residual unit, with a pooling window of 7 × 7 and a stride of 1 × 1, so that the averaged feature map has a size of 1 × 1;
and finally, a fully connected layer with the normalized exponential function Softmax at the end of the residual attention network performs the diabetes risk prediction.
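Assembling the layers listed above, a sketch of the full network could look as follows. It assumes the ResidualUnit and AttentionModule classes from the earlier sketch are in scope, that the 128 × 128 composite image is resized to 224 × 224 before being fed to the network (inferred from, not stated by, the 112 × 112 output of the first convolutional layer), and that padding is chosen so the stated output sizes are reproduced even though the claim names "valid" padding.

```python
import torch
import torch.nn as nn

class ResidualAttentionNet(nn.Module):
    """Illustrative sketch of the 56-layer residual attention network of claim 5."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False),    # first conv: 7x7, stride 2 -> 112x112
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1))                     # max pooling: 3x3, stride 2 -> 56x56
        self.stage1 = nn.Sequential(ResidualUnit(64, 64, 256),               # first residual unit -> 56x56
                                    AttentionModule(256))                    # first attention module
        self.stage2 = nn.Sequential(ResidualUnit(256, 128, 512, stride=2),   # second residual unit -> 28x28
                                    AttentionModule(512))                    # second attention module
        self.stage3 = nn.Sequential(ResidualUnit(512, 256, 1024, stride=2),  # third residual unit -> 14x14
                                    AttentionModule(1024))                   # third attention module
        self.stage4 = nn.Sequential(ResidualUnit(1024, 512, 2048, stride=2), # fourth residual unit:
                                    ResidualUnit(2048, 512, 2048),           # 3 bottlenecks in series -> 7x7
                                    ResidualUnit(2048, 512, 2048))
        self.head = nn.Sequential(nn.AvgPool2d(7, stride=1),                 # average pooling -> 1x1
                                  nn.Flatten(),
                                  nn.Linear(2048, num_classes))              # fully connected layer

    def forward(self, x):
        for stage in (self.stem, self.stage1, self.stage2, self.stage3, self.stage4):
            x = stage(x)
        return torch.softmax(self.head(x), dim=1)     # Softmax diabetes risk prediction

# Example: a single resized composite image as input.
model = ResidualAttentionNet()
probs = model(torch.randn(1, 3, 224, 224))            # -> tensor of shape (1, 2)
```

For training with a cross-entropy loss the raw logits would normally be returned and the Softmax left to the loss function; the Softmax is kept here only to mirror the claim.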
6. The method according to claim 1, wherein the labeled sample data set is randomly divided into a training set, a validation set and a test set; specifically, the random division means that, after a shuffle operation, the sample data set is divided into 5 mutually exclusive subsets of similar size, the union of 4 subsets is used as the training set each time, and the remaining 1 subset is used as the test set; 50% of the samples in the test set are then randomly split off as the validation set, and the training set, validation set and test set all contain the same proportion of diabetic and healthy samples.
A 5-fold cross-validation method is adopted when assessing the model during training and parameter tuning: a different subset is selected as the test set each time, the other 4 subsets form the training set, and the experiment is repeated 5 times; the model parameters are tuned according to the performance on the validation set, the generalization ability of the model is estimated from its performance on the test set, and the average of the 5 test results is taken as the evaluation index of the final model, yielding a risk prediction model with good performance.
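As a sketch of this splitting and 5-fold cross-validation scheme, the snippet below uses scikit-learn's stratified splitters so that the diabetic/healthy proportion is preserved in every set; the sample and label arrays are placeholders, and the training and scoring of the residual attention network are only indicated by comments.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

samples = np.arange(100)                 # placeholder: indices of the composite face images
labels = np.array([0, 1] * 50)           # placeholder: 1 = diabetic, 0 = healthy

# Shuffle and split into 5 mutually exclusive, class-balanced subsets.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, heldout_idx) in enumerate(skf.split(samples, labels)):
    # The union of 4 subsets is the training set; the held-out subset is split
    # 50/50 into a validation set and a test set with the class ratio preserved.
    val_idx, test_idx = train_test_split(
        heldout_idx, test_size=0.5, stratify=labels[heldout_idx], random_state=0)
    # Train the residual attention network on samples[train_idx], tune its
    # parameters on samples[val_idx], and record the score on samples[test_idx];
    # the average of the 5 test scores is the evaluation index of the final model.
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val / {len(test_idx)} test")
```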
CN202111332652.1A 2021-11-11 2021-11-11 Noninvasive risk prediction method for diabetes Active CN114038564B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111332652.1A CN114038564B (en) 2021-11-11 2021-11-11 Noninvasive risk prediction method for diabetes

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111332652.1A CN114038564B (en) 2021-11-11 2021-11-11 Noninvasive risk prediction method for diabetes

Publications (2)

Publication Number Publication Date
CN114038564A true CN114038564A (en) 2022-02-11
CN114038564B CN114038564B (en) 2024-06-21

Family

ID=80137246

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111332652.1A Active CN114038564B (en) 2021-11-11 2021-11-11 Noninvasive risk prediction method for diabetes

Country Status (1)

Country Link
CN (1) CN114038564B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112349427A (en) * 2020-10-21 2021-02-09 上海中医药大学 Diabetes prediction method based on tongue picture and depth residual convolutional neural network
CN113436150A (en) * 2021-06-07 2021-09-24 华中科技大学同济医学院附属同济医院 Construction method of ultrasound imaging omics model for lymph node metastasis risk prediction

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116645716A (en) * 2023-05-31 2023-08-25 南京林业大学 Expression Recognition Method Based on Local Features and Global Features
CN116645716B (en) * 2023-05-31 2024-01-19 南京林业大学 Expression recognition method based on local features and global features
CN116913508A (en) * 2023-09-13 2023-10-20 中国人民解放军总医院第一医学中心 Method and system for predicting diabetic nephropathy based on white eye characteristics
CN116913508B (en) * 2023-09-13 2023-12-12 中国人民解放军总医院第一医学中心 Method and system for predicting diabetic nephropathy based on white eye characteristics
CN117577333A (en) * 2024-01-17 2024-02-20 浙江大学 Multi-center clinical prognosis prediction system based on causal feature learning
CN117577333B (en) * 2024-01-17 2024-04-09 浙江大学 Multi-center clinical prognosis prediction system based on causal feature learning

Also Published As

Publication number Publication date
CN114038564B (en) 2024-06-21

Similar Documents

Publication Publication Date Title
Shahzadi et al. CNN-LSTM: Cascaded framework for brain tumour classification
CN108806792B (en) Deep learning face diagnosis system
JP6522161B2 (en) Medical data analysis method based on deep learning and intelligent analyzer thereof
CN114038564A (en) Noninvasive risk prediction method for diabetes
CN114926477A (en) Brain tumor multi-modal MRI (magnetic resonance imaging) image segmentation method based on deep learning
CN113012163A (en) Retina blood vessel segmentation method, equipment and storage medium based on multi-scale attention network
Ouchtati et al. Novel method for brain tumor classification based on use of image entropy and seven Hu’s invariant moments
CN115035127A (en) Retinal vessel segmentation method based on generative confrontation network
Zeng et al. Automated detection of diabetic retinopathy using a binocular siamese-like convolutional network
Zuo et al. Deep Learning-based Eye-Tracking Analysis for Diagnosis of Alzheimer's Disease Using 3D Comprehensive Visual Stimuli
US20240282090A1 (en) Multi-modal method for classifying thyroid nodule based on ultrasound and infrared thermal images
Wang et al. An interpretable deep learning system for automatic intracranial hemorrhage diagnosis with CT image
CN116597950A (en) Medical image layering method
CN116402756A (en) X-ray film lung disease screening system integrating multi-level characteristics
CN113273959B (en) Portable diabetic retinopathy diagnosis and treatment instrument
Marulkar et al. Nail Disease Prediction using a Deep Learning Integrated Framework
Fan et al. Automatic detection of Horner syndrome by using facial images
Shanthakumari et al. Glaucoma Detection using Fundus Images using Deep Learning
Zahari et al. Quantifying the Uncertainty in 3D CT Lung Cancer Images Classification
Nazir et al. Enhancing Autism Spectrum Disorder Diagnosis through a Novel 1D CNN-Based Deep Learning Classifier
Jahnavi et al. Segmentation of medical images using U-Net++
Salih et al. Neural Network Approach For Classification And Detection Of Chest Infection
Shankar et al. Wavelet based Machine Learning Approaches towards Precision Medicine in Diabetes Mellitus.
Priya et al. A novel intelligent diagnosis and disease prediction algorithm in green cloud using machine learning approach
Gunasekara et al. A feasibility study for deep learning based automated brain tumor segmentation using magnetic resonance images

Legal Events

Date Code Title Description
PB01 Publication
CB03 Change of inventor or designer information

Inventor after: Zhang Bing

Inventor after: Bai Jing

Inventor after: Gao Ruijun

Inventor after: Wang Chao

Inventor after: Yao Baitao

Inventor after: Song Zhenqiang

Inventor after: Sun Bei

Inventor after: Li Mingzhen

Inventor after: Yang Yanhui

Inventor after: Guo Lichuan

Inventor after: Zhang Yuan

Inventor after: Song Xin

Inventor after: Qi Feng

Inventor before: Zhang Bing

Inventor before: Guo Lichuan

Inventor before: Song Xin

Inventor before: Qi Feng

Inventor before: Bai Jing

Inventor before: Zhang Yuan

Inventor before: Gao Ruijun

Inventor before: Wang Chao

Inventor before: Yao Baitao

SE01 Entry into force of request for substantive examination
GR01 Patent grant