CN113658702A - Cerebral apoplexy characteristic extraction and intelligent risk prediction method and system based on traditional Chinese medicine inspection diagnosis

Info

Publication number: CN113658702A
Authority: CN (China)
Prior art keywords: prediction parameter, feature, feature prediction, weight, ROI
Legal status: Granted
Application number: CN202110991019.7A
Other languages: Chinese (zh)
Other versions: CN113658702B
Inventors: 赵紫娟, 王华虎, 冀伦文, 强彦, 李慧芝, 王麒达, 梁鑫, 赵琛琦
Current Assignee: Shanxi Huihu Health Technology Co., Ltd.
Original Assignee: Shanxi Huihu Health Technology Co., Ltd.
Application filed by Shanxi Huihu Health Technology Co., Ltd.
Priority to CN202110991019.7A
Publication of CN113658702A
Application granted
Publication of CN113658702B
Legal status: Active

Classifications

    • G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; for calculating health indices; for individual health risk assessment
    • G16H 50/20 - ICT for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 20/90 - ICT for therapies or health-improving plans relating to alternative medicines, e.g. homeopathy or oriental medicines
    • G06F 18/214 - Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2411 - Classification techniques based on the proximity to a decision surface, e.g. support vector machines
    • G06N 3/045 - Neural network architectures; combinations of networks
    • G06N 3/08 - Neural network learning methods
    • G06T 5/40 - Image enhancement or restoration by the use of histogram techniques
    • G06T 7/0012 - Image analysis; biomedical image inspection
    • G06T 7/11 - Segmentation; region-based segmentation
    • G06T 7/13 - Segmentation; edge detection
    • G06T 2207/20081 - Training; learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/20104 - Interactive definition of region of interest [ROI]
    • G06T 2207/30004 - Biomedical image processing
    • G06T 2207/30196 - Human being; person

Abstract

The invention provides a stroke feature extraction and intelligent risk prediction method and system based on traditional Chinese medicine inspection diagnosis. The feature extraction method comprises the following steps: acquiring image information of the palm, face and ear to be detected; performing hand ROI segmentation according to a hand keypoint model, eyebrow-center ROI segmentation according to a face keypoint model, and earlobe ROI segmentation according to an ear keypoint model; extracting a lifeline-crossing-midline feature prediction parameter, a ruddy feature prediction parameter, a raised feature prediction parameter and a hypertrophy feature prediction parameter of the hand from the hand ROI segmentation; extracting a wrinkle feature prediction parameter of the face from the eyebrow-center ROI segmentation; extracting a twill feature prediction parameter of the earlobe from the earlobe ROI segmentation; and summarizing and outputting the lifeline-crossing-midline, ruddy, raised, hypertrophy, wrinkle and twill feature prediction parameters. The method has the beneficial effect of screening stroke patients and is suitable for the field of health management.

Description

Cerebral apoplexy characteristic extraction and intelligent risk prediction method and system based on traditional Chinese medicine inspection diagnosis
Technical Field
The invention relates to the technical field of health management, in particular to a stroke feature extraction and intelligent risk prediction method and system based on traditional Chinese medicine inspection.
Background
Inspection is an important branch of the four diagnostic methods in traditional Chinese medicine and is known as the first of the four. The form, color, luster and qi of the human body contain much valuable information, and traditional Chinese medicine doctors diagnose and predict a patient's condition by observing changes in the patient's form, color, qi and spirit.
As early as thousands of years ago, the Lingshu ("Benzang" chapter) stated that by observing the body's external correspondences one can know the state of the zang-fu organs and thus know the disease. This explains the correspondence between the zang-fu organs and the body surface: by observing external manifestations one can detect changes of the zang-fu organs, understand the location and nature of a disease, and relate the intrinsic pathology to the symptoms appearing externally.
Traditional Chinese medicine inspection has relevant theory for identifying cardiovascular and cerebrovascular diseases, and with the development of computer vision there are more and more methods to automate the inspection-based approach. Based on the relevant theory of traditional Chinese medicine inspection, this invention combines various features of the face and hand into a complete scheme of feature segmentation, analysis and weight assignment; such automated processing is the direction in which traditional Chinese medicine inspection is advancing.
Disclosure of Invention
Aiming at the defects in the related art, the technical problem to be solved by the invention is to provide a stroke feature extraction and intelligent risk prediction method and system based on traditional Chinese medicine inspection, which combine computer image processing to extract and analyze stroke features so as to screen stroke patients.
In order to solve the technical problems, the technical scheme adopted by the invention is as follows:
the stroke feature extraction method based on traditional Chinese medicine inspection diagnosis comprises the following steps:
s10, acquiring image information of the palm, the face and the ear to be detected;
s20, performing hand ROI segmentation according to the hand key point model, performing eyebrow ROI segmentation according to the face key point model, and performing earlobe ROI segmentation according to the ear key point model;
s30, extracting a lifeline-crossing-midline feature prediction parameter, a ruddy feature prediction parameter, a raised feature prediction parameter and a hypertrophy feature prediction parameter of the hand according to the hand ROI segmentation;
s40, extracting facial wrinkle feature prediction parameters according to the eyebrow center ROI segmentation;
s50, extracting a twill feature prediction parameter of the earlobe according to the ear lobe ROI segmentation;
and S60, summarizing and outputting the lifeline crossing centerline feature prediction parameter, the ruddy feature prediction parameter, the uplift feature prediction parameter, the thickening feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter.
Preferably, in step S40, the extracting of the wrinkle feature prediction parameters of the face according to the eyebrow center ROI segmentation includes:
s401, performing gray level conversion on an ROI (region of interest) of the eyebrow center, and then enhancing the image by utilizing histogram equalization;
s402, constructing a convolutional neural network based on banded pooling;
s403, inputting the image processed in the step S401 into a feature extractor of the convolutional neural network in the step S402, realizing pooling by using a long strip-shaped pooling kernel, classifying by using a softmax layer, and outputting a prediction score;
and S404, outputting wrinkle characteristic prediction parameters according to the prediction scores.
Preferably, in step S50, the extracting of the prediction parameters of the twill features of the earlobe according to the ear lobe ROI segmentation includes:
s501, segmenting an ear lobe ROI (region of interest), and extracting an image of a segmented region where twills are located;
s502, performing contrast enhancement and gray level processing on the segmented ear lobe image to obtain a preprocessed ear lobe image;
s503, scoring the ear-lobe twill index through a multi-scale edge detection algorithm, and finally outputting a scoring result;
and S504, outputting prediction parameters of the earlobe twill characteristics according to the grading result.
Preferably, in step S30, a lifeline-through-center line feature prediction parameter, a rosy feature prediction parameter, a ridge feature prediction parameter, and a fat feature prediction parameter of the hand are extracted from the hand ROI segmentation; the method specifically comprises the following steps:
s301, detecting a lifeline centerline ROI area corresponding to the lifeline centerline passing feature, judging whether the lifeline crosses the centerline or not by using an edge detection algorithm and a curve fitting algorithm, and outputting a lifeline centerline passing feature prediction parameter according to a judgment result;
s302, carrying out ruddiness detection on a ruddiness ROI corresponding to the ruddiness characteristics, judging whether the regions are ruddiness or not by an HSV-based color gamut space, and outputting a ruddiness characteristic prediction parameter according to a judgment result;
and S303, performing thickening and bulging detection on a thickening ROI (region of interest) corresponding to the thickening characteristic and a bulging ROI corresponding to the bulging characteristic, respectively judging whether the thickening and bulging are performed through an SVM (support vector machine) model, and outputting a bulging characteristic prediction parameter and a thickening characteristic prediction parameter according to a judgment result.
Correspondingly, the intelligent stroke risk prediction method based on traditional Chinese medicine inspection comprises the following steps:
a1, extracting the characteristics of the impact factors corresponding to the risk of cerebral apoplexy;
a2, performing weighting according to the feature extraction result;
a3, carrying out weight calculation on the influence factors to obtain a stroke risk evaluation result of the target to be predicted;
wherein the feature extraction of the influence factors corresponding to the stroke risk in step A1 uses the stroke feature extraction method described above.
Preferably, the step a2, performing weighting according to the feature extraction result; the method specifically comprises the following steps:
a21, establishing a weight factor corresponding to each characteristic prediction parameter;
a22, setting a corresponding weight coefficient for each feature extraction result;
wherein, the Weight factors respectively corresponding to the lifeline midline crossing feature prediction parameter, the ruddy feature prediction parameter, the bump feature prediction parameter, the hypertrophic feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter are midline crossing Weight1, ruddy Weight2, hypertrophic Weight3, bump Weight4, wrinkle Weight5 and twill Weight 6;
the Weight coefficients corresponding to the cross-midline Weight1, the ruddy Weight2, the fat Weight3, the ridge Weight4, the wrinkle Weight5 and the twill Weight6 are as follows: 0.45-0.55, 0.15, 0.35, 0.75.
Preferably, in the step a3, weight calculation is performed on the influence factors to obtain a stroke risk assessment result of the target to be predicted; the method specifically comprises the following steps:
a311, receiving weights corresponding to a lifeline crossing centerline feature prediction parameter, a ruddy feature prediction parameter, a bump feature prediction parameter, a hypertrophy feature prediction parameter, a wrinkle feature prediction parameter and a twill feature prediction parameter respectively;
a312, calculating a combined probability P1; specifically:
when the palm is simultaneously hypertrophic, raised and ruddy, the stroke risk probability is 0.9; otherwise it is 0, as expressed by formula (1):

    P1 = 0.9 if (hypertrophy and raised and ruddy), otherwise P1 = 0    formula (1);

a313, calculating a combined probability P2, specifically:

    P2 = Weight1 + Weight2 + Weight3 + Weight4 + Weight5 + Weight6; if P2 ≥ 1, then P2 = 1    formula (2);

a314, returning the final probability; where the final probability P is the maximum of P1 and P2:

    P = max(P1, P2)    formula (3).
Correspondingly, the intelligent cerebral stroke risk prediction system based on traditional Chinese medicine inspection comprises:
the characteristic extraction unit is used for carrying out characteristic extraction on the influence factors corresponding to the stroke risk;
the weighting unit is used for weighting according to the feature extraction result;
and the prediction unit is used for carrying out weight calculation on the influence factors and obtaining the stroke risk evaluation result of the target to be predicted.
Preferably, the weighting unit includes:
an establishing unit for establishing a weight factor corresponding to each feature prediction parameter;
the setting unit is used for setting a corresponding weight coefficient for each feature extraction result;
wherein, the Weight factors respectively corresponding to the lifeline midline crossing feature prediction parameter, the ruddy feature prediction parameter, the bump feature prediction parameter, the hypertrophic feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter are midline crossing Weight1, ruddy Weight2, hypertrophic Weight3, bump Weight4, wrinkle Weight5 and twill Weight 6;
the Weight coefficients corresponding to the cross-midline Weight1, the ruddy Weight2, the fat Weight3, the ridge Weight4, the wrinkle Weight5 and the twill Weight6 are as follows: 0.45-0.55, 0.15, 0.35, 0.75.
Preferably, the prediction unit includes:
the receiving unit is used for receiving weights corresponding to the lifeline crossing center line characteristic prediction parameter, the ruddy characteristic prediction parameter, the uplift characteristic prediction parameter, the hypertrophy characteristic prediction parameter, the wrinkle characteristic prediction parameter and the twill characteristic prediction parameter respectively;
the calculating unit is used for calculating the combined probability P1 and the combined probability P2; the specific calculation process is:

    P1 = 0.9 if (hypertrophy and raised and ruddy), otherwise P1 = 0    formula (1);

    P2 = Weight1 + Weight2 + Weight3 + Weight4 + Weight5 + Weight6; if P2 ≥ 1, then P2 = 1    formula (2);

a prediction result output unit for returning the final probability P;
wherein the final probability P is the maximum of P1 and P2:

    P = max(P1, P2)    formula (3).
The invention has the beneficial technical effects that:
1. According to the stroke feature extraction method based on traditional Chinese medicine inspection provided by the invention, six features are extracted according to the relevant theory of stroke inspection: whether the palm lifeline crosses the palm midline, whether the palm is hypertrophic, raised and ruddy, eyebrow-center wrinkles and earlobe twill; a feasible segmentation and analysis scheme is provided for the ROI of each feature, so the method is simple and convenient to implement.
2. According to the stroke intelligent risk prediction method based on traditional Chinese medicine inspection, six features of the palm and face are selected and weights are assigned, helping users discover stroke risk early and consciously improve their living habits, meeting the public's health-maintenance needs and improving residents' health.
Drawings
Fig. 1 is a schematic flow chart of a stroke feature extraction method based on traditional Chinese medicine inspection provided in an embodiment of the present invention;
FIG. 2 is a schematic diagram of a palm image to be measured according to an embodiment of the invention;
FIG. 3 is a schematic diagram of a facial image to be measured according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image of an ear to be measured according to a first embodiment of the invention;
FIG. 5 is a diagram illustrating a hand keypoint model according to an embodiment of the present invention;
FIG. 6 is a schematic diagram illustrating a hand lifeline crossing centerline feature extracted by hand ROI segmentation according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of hand ridge features extracted by hand ROI segmentation according to an embodiment of the present invention;
FIG. 8 is a schematic diagram illustrating a hand ruddy feature extracted by hand ROI segmentation according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of the fat thickness feature extracted by the ROI segmentation of the hand according to an embodiment of the present invention;
FIG. 10 is a diagram of a face keypoint model according to an embodiment of the present invention;
FIG. 11 is a schematic diagram illustrating wrinkle features extracted by ROI segmentation of an eyebrow center according to an embodiment of the present invention;
FIG. 12 is a schematic diagram illustrating a fitting region selection rule during a process of calculating a centerline-passing characteristic of a hand lifeline according to an embodiment of the present invention;
FIG. 13 is a schematic diagram illustrating the fitting effect of a quadratic function during the calculation process of the centerline-crossing feature of the hand lifeline according to the embodiment of the present invention;
FIG. 14 is a schematic diagram illustrating an effect of extracting each color region in a process of calculating a ruddiness feature according to an embodiment of the present invention;
FIG. 15 is a schematic diagram of a feature learning curve during calculation of a ridge feature prediction parameter and a hypertrophy feature according to an embodiment of the present invention;
fig. 16 is a schematic flowchart of an intelligent stroke risk prediction method based on traditional Chinese medicine inspection according to a second embodiment of the present invention;
fig. 17 is a schematic structural diagram of an intelligent stroke risk prediction system based on traditional Chinese medicine inspection provided in the third embodiment of the present invention;
fig. 18 is a schematic flow chart of an intelligent stroke risk prediction method based on traditional Chinese medicine inspection in an embodiment of the present invention;
in the figure: 10 is a feature extraction unit, 20 is a weighting unit, and 30 is a prediction unit;
201 is an establishing unit, 202 is a setting unit, 301 is a receiving unit, 302 is a calculating unit, and 303 is a prediction result output unit.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, embodiments of the present invention; all other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Next, the present invention is described in detail with reference to the drawings. When describing the embodiments, the schematic drawings are not partially enlarged to a general scale, for convenience of illustration; the drawings are only examples and should not limit the scope of the present invention.
An embodiment of the stroke feature extraction and intelligent risk prediction method and system based on traditional Chinese medicine inspection diagnosis is described in detail below with reference to the accompanying drawings.
Example one
As shown in fig. 1 to 4, the method for extracting stroke features based on traditional Chinese medicine inspection diagnosis is characterized in that: the method comprises the following steps:
s10, acquiring image information of the palm, the face and the ear to be detected;
s20, performing hand ROI segmentation according to the hand key point model, performing eyebrow ROI segmentation according to the face key point model, and performing earlobe ROI segmentation according to the ear key point model;
s30, extracting a lifeline-crossing-midline feature prediction parameter, a ruddy feature prediction parameter, a raised feature prediction parameter and a hypertrophy feature prediction parameter of the hand according to the hand ROI segmentation;
s40, extracting facial wrinkle feature prediction parameters according to the eyebrow center ROI segmentation;
s50, extracting a twill feature prediction parameter of the earlobe according to the ear lobe ROI segmentation;
and S60, summarizing and outputting the lifeline crossing centerline feature prediction parameter, the ruddy feature prediction parameter, the uplift feature prediction parameter, the thickening feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter.
As shown in fig. 5, in this embodiment, in performing the hand ROI segmentation according to the hand keypoint model, the hand keypoint model may be the one proposed by Tomas Simon et al. in 2017, wherein:
as shown in fig. 6, the process of extracting the lifeline-through-midline feature of the hand includes:
segmenting a triangular ROI for the lifeline crossing the midline according to the hand keypoints; the three vertices of the triangle are: the wrist center point 0 (x0, y0), the index-finger root point 5 (x5, y5), and the midpoint of the line connecting wrist center point 0 and ring-finger root point 13:

    ((x0 + x13)/2, (y0 + y13)/2).
Such division can reduce interference of other palm prints while covering the intersection area of the lifeline and the midline.
As shown in fig. 7, the process of extracting the raised feature of the hand includes:
segmenting a rectangular ROI for the raised feature according to the hand keypoints; the raised feature appears mainly at the junction of the finger roots and the palm, so the rectangular ROI is segmented taking one quarter of the abscissa distance between key point 5 and key point 17 as the standard distance m. Let the coordinates of key points 5 and 17 be (x5, y5) and (x17, y17); the four vertices of the ROI rectangle are then (x5+m, y5-m), (x5+m, y17+m), (x17-m, y17+m) and (x17-m, y5-m).
As shown in fig. 8, the process of extracting the ruddy feature of the hand includes:
segmenting a rectangular ROI for the ruddy feature according to the hand keypoints; the ruddy feature refers to the overall condition of the palm, so the selected palm area needs to be enlarged, i.e. extended downward by two units of m relative to fig. 7. The four vertices of the ROI rectangle are (x5+m, y5-m), (x5+m, y17+3m), (x17-m, y17+3m) and (x17-m, y5-m).
As shown in fig. 9, the process of extracting the hypertrophy feature of the hand includes:
segmenting a rectangular ROI for the hypertrophy feature according to the hand keypoints; the hypertrophy feature mainly refers to the fullness of the thenar and hypothenar eminences, so an area closer to the wrist is taken compared with fig. 8. The four vertices of the ROI rectangle are (x5+3m, y5+6m), (x5+3m, y17+2m), (x17-m, y17+2m) and (x17-m, y5+6m).
As shown in fig. 10, the facial keypoint model may be a model trained with the Dlib library, which extracts 68 facial keypoints;
as shown in fig. 11, in the wrinkle feature extraction process, the eyebrow center ROI region is mainly located between the key points 22 and 23, so we select the rectangular frame with the width of the rectangular frame as l, the distance between the key points 22 and 23, and the length of the rectangular frame as twice the width l, the eyebrow center wrinkle region is located at the upper part of the two-point connecting line, and the coordinates of the four vertices are (x) respectively22,y22-0.5l);(x22,y22+1.5l);(x23,y23+1.5l);(x23,y23-0.5l)。
In this embodiment, in step S30, a lifeline-through-center line feature prediction parameter, a rosy feature prediction parameter, a ridge feature prediction parameter, and a fat feature prediction parameter of the hand are extracted from the hand ROI segmentation; the method specifically comprises the following steps:
s301, detecting a lifeline centerline ROI area corresponding to the lifeline centerline passing feature, judging whether the lifeline crosses the centerline or not by using an edge detection algorithm and a curve fitting algorithm, and outputting a lifeline centerline passing feature prediction parameter according to a judgment result;
s302, carrying out ruddiness detection on a ruddiness ROI corresponding to the ruddiness characteristics, judging whether the regions are ruddiness or not by an HSV-based color gamut space, and outputting a ruddiness characteristic prediction parameter according to a judgment result;
and S303, performing thickening and bulging detection on a thickening ROI (region of interest) corresponding to the thickening characteristic and a bulging ROI corresponding to the bulging characteristic, respectively judging whether the thickening and bulging are performed through an SVM (support vector machine) model, and outputting a bulging characteristic prediction parameter and a thickening characteristic prediction parameter according to a judgment result.
The specific process of step S301 is:
s3011, carrying out gray-level processing on the image in the lifeline-crossing-midline ROI region and converting it into a gray-scale image; the conversion formula is: Gray = (R*30 + G*59 + B*11 + 50)/100;
s3012, after obtaining the gradient in the x direction through a Scharr operator, performing edge detection on the gradient in the x direction through a Canny operator with a specific threshold, wherein the specific threshold comprises: an upper threshold 500, a lower threshold 300; the method specifically comprises the following steps:
s30121, obtaining the gradient in the x direction through a Scharr operator; because the lifeline runs longitudinally on the palm, taking the x-direction gradient with the Scharr operator preliminarily extracts the edge features, strengthening the longitudinal lines and weakening the transverse ones. The filter used is:

    [ -3   0   3 ]
    [ -10  0  10 ]
    [ -3   0   3 ]
s30122, taking the thresholds as (300, 500), carrying out Canny edge detection on the image obtained in step S30121; the Canny operator yields a clear palm-print image, which facilitates the subsequent curve fitting. Specifically:
1) the gradients in the x and y directions are found using the Sobel operator:

    Gx = [ -1  0  1 ]        Gy = [ -1  -2  -1 ]
         [ -2  0  2 ]             [  0   0   0 ]
         [ -1  0  1 ]             [  1   2   1 ]

2) the gradient magnitude and gradient direction are obtained as follows:

    G = sqrt(Gx^2 + Gy^2),    θ = arctan(Gy / Gx)

3) non-maximum suppression is performed so that the edge width is as close to 1 pixel as possible: a pixel is considered to belong to an edge only if it has the largest gradient magnitude along the gradient direction; otherwise its value is set to 0;
4) edge detection is performed with upper and lower thresholds; the upper threshold is set to 500 and the lower to 300. Every pixel above the upper threshold is an edge, and every pixel below the lower threshold is not. A point between the two thresholds is considered an edge if it is connected to an edge, and otherwise is not.
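A minimal OpenCV sketch of steps S30121-S30122 (assuming a BGR crop of the lifeline ROI; the function name is illustrative):

```python
# Sketch of S30121-S30122: x-direction Scharr gradient to emphasize the
# longitudinal lifeline, then Canny with the stated (300, 500) thresholds.
import cv2

def lifeline_edges(roi_bgr):
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Scharr(gray, cv2.CV_16S, 1, 0)   # x-direction gradient only
    gx = cv2.convertScaleAbs(gx)              # back to 8-bit for Canny
    return cv2.Canny(gx, 300, 500)            # lower=300, upper=500
```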
S3013, performing image enhancement operation;
the method specifically comprises: performing one dilation, one erosion and a second dilation in sequence on the image obtained in step S3012 to realize image enhancement. Dilation essentially slides a convolution kernel over the image and replaces the reference point with the maximum in the kernel's neighborhood:

    dst(x, y) = max{ src(x + x', y + y') : (x', y') in kernel }

Erosion likewise uses a convolution kernel but replaces the reference point with the minimum:

    dst(x, y) = min{ src(x + x', y + y') : (x', y') in kernel }

In this embodiment, the dilation operation makes the outline of the lifeline clearer and easier to fit, while the erosion operation eliminates noise to a certain extent and reduces interference; combined, they optimize the fitting effect.
S3014, obtaining a maximum bright spot set corresponding to the life line and a spot set of a bright spot in a vertical area where the maximum bright spot is located through a contour detection algorithm; in the first embodiment, the image obtained in step S3013 is composed of a plurality of bright spots, and since the segmented lifeline passing through the lifeline in the central line ROI region is the most obvious line, the largest bright spot must come from the lifeline and is used as the main reference for evaluating the fitting effect; furthermore, since the lifeline is longitudinal on the palm and is usually composed of several spots in the figure, in most cases fitting with the spot in the vertical area where the largest spot is located is better.
Specifically, in step S3014, obtaining, by using a contour detection algorithm, a maximum bright spot set corresponding to the life line and a spot set of a bright spot in a vertical region where the maximum bright spot is located, specifically including:
taking the outline with the largest area as the maximum bright spot, and then starting from the outline with the largest area, and taking a first point of each outline; wherein: let the first point of the contour with the largest area be (x)0,y0) The first point of the other contour is (x)k,yk) When yk-y0|≥4*|xk-x0If not, the outline is reserved as the bright spot in the vertical area where the maximum bright spot is located, otherwise, the outline is not reserved.
S3015, fitting a quadratic function to the lifeline; because the palm print on the hand conforms to the characteristics of the conic section, the secondary function is used for fitting, and the lifeline can be accurately fitted under the condition of small calculated amount;
the step S3015 of performing quadratic function fitting on the lifeline specifically includes:
s30151, fitting a quadratic function by least squares using the point set of the bright spots in the vertical region of the largest bright spot; let the error function be:

    Error(w | X, y) = (Xw - y)^T (Xw - y)

where X is the m×n sample input matrix, y is the m×1 function-value matrix, and w is the n×1 weight matrix to be solved; the optimal solution is:

    w = (X^T X)^(-1) X^T y;
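A short numpy sketch of this least-squares fit (illustrative; the point set would come from the retained contour pixels):

```python
# Sketch of S30151: design matrix X for y = w2*x^2 + w1*x + w0 and the
# least-squares solution w = (X^T X)^(-1) X^T y (computed stably via lstsq).
import numpy as np

def fit_quadratic(points):
    """points: (m, 2) array of (x, y) edge pixels; returns (w2, w1, w0)."""
    x, y = points[:, 0].astype(float), points[:, 1].astype(float)
    X = np.stack([x ** 2, x, np.ones_like(x)], axis=1)  # m x 3 design matrix
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w
```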
s30152, dividing the largest bright spot into four equal parts longitudinally to obtain three intermediate check points, and finding the three points with the same longitudinal coordinates on the quadratic function; if all three points lie within the contour of the largest bright spot, the fitting effect is qualified; if not, fitting is repeated using the point set of the largest bright spot;
s3016, obtaining a linear equation of the central line from the key points;
s3017, calculating whether an intersection exists between the line equation of the midline and the quadratic function of the lifeline, and outputting the lifeline-crossing-midline feature prediction parameter according to the judgment result. In this embodiment, if the lifeline crosses the midline, the weight corresponding to this feature prediction parameter is Weight1; otherwise it is 0.
In this embodiment, whether the intersection point exists between the central line and the life line can be obtained only by using a simultaneous formula of a quadratic curve and a linear equation, and the process of determining whether the intersection point exists is as follows:
Figure BDA0003232411710000101
if the intersection exists, calculating the distance d between the vertex of the quadratic function and the straight line in the formula (301-1), and obtaining the ratio L of three times the distance to the distance from the key point 5 to the key point 17: wherein: the hand position corresponding to the key point 5 is the base of the index finger, and the hand position corresponding to the key point 17 is the base of the little finger;
Figure BDA0003232411710000102
Figure BDA0003232411710000103
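As a sketch of S3017 under the same assumed curve and line forms (illustrative names; not the patent's code):

```python
# Sketch of S3017: test whether the fitted lifeline y = a*x^2 + b*x + c meets
# the midline y = k*x + e; if so, return the ratio L used to scale Weight1.
import numpy as np

def crossing_ratio(a, b, c, k, e, p5, p17):
    disc = (b - k) ** 2 - 4 * a * (c - e)   # discriminant of a*x^2+(b-k)*x+(c-e)=0
    if disc < 0:
        return None                          # no intersection: Weight1 = 0
    xv = -b / (2 * a)                        # vertex of the quadratic
    yv = a * xv ** 2 + b * xv + c
    dist = abs(k * xv - yv + e) / np.hypot(k, 1.0)   # vertex-to-line distance d
    span = np.hypot(p5[0] - p17[0], p5[1] - p17[1])  # key point 5 to key point 17
    return 3.0 * dist / span                 # ratio L
```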
the specific process of step S302 is:
s3021, converting the acquired image into an HSV color space; HSV can more intuitively express the hue, the vividness and the brightness of the apparent color, so the HSV has better performance in the aspect of color contrast, is easier to track an object with a certain color and can be used for segmenting the object with a specific color; the calculation formula for converting RGB into HSV is as follows:
R′=R/255
G′=G/255
B′=B/255
Cmax=max(R′,G′,B′)
Cmin=min(R′,G′,B′)
△=Cmax-Cmin
then H, S and V are:

    H = 0°,                           if Δ = 0
    H = 60° × (((G' - B')/Δ) mod 6),  if Cmax = R'
    H = 60° × ((B' - R')/Δ + 2),      if Cmax = G'
    H = 60° × ((R' - G')/Δ + 4),      if Cmax = B'

    S = 0 if Cmax = 0, otherwise S = Δ / Cmax

    V = Cmax
In this embodiment, the main color in the ruddy ROI region is obtained; to verify whether the palm is ruddy, the color composition of the ruddy ROI region is next analyzed in the HSV color space.
s3022, binarizing the ruddy ROI region according to the threshold of each color in the HSV color space, so that the corresponding color is bright and the other regions are dark;
s3023, sequentially performing expansion operation and contour detection on the image obtained in the step S3022 to obtain a contour;
s3024, adding all areas in the outline according to colors to obtain the area of each color area; the method specifically comprises the following steps:
the contour is obtained by the same contour detection method in step S3014, and the areas of all the contours are added to obtain the area of each color region.
S3025, determining whether the area is ruddy, and outputting a centerline-crossing feature prediction parameter of the life line according to the determination result, in this embodiment, if the color with the largest area is one of "red", "red 2", and "purple", the color is considered to be ruddy, and when the color is ruddy, the Weight corresponding to the ruddy feature prediction parameter is Weight2, otherwise, the Weight is 0;
the specific process of step S303 is as follows:
s3031, carrying out gray level processing on the images in the hypertrophy ROI region and the bulge ROI region, and converting the images into gray level images;
s3032, performing singular value decomposition on the gray-scale image to obtain the singular-value features in the hypertrophy ROI region and the raised ROI region; singular value decomposition of the image means finding, for the matrix A:

    A = U Σ V^T

where U is an m×m orthogonal matrix, Σ is an m×n diagonal matrix, and V is an n×n orthogonal matrix;
s3033, carrying out LBP decomposition on the gray level image to obtain LBP characteristics in a hypertrophy ROI area and a bulge ROI area; performing LBP decomposition on the image, which specifically comprises the following steps:
a Local Binary Pattern (LBP) is an algorithm that re-assigns the surrounding pixels with reference to the central pixel. Taking a 3×3 square region as an example, a surrounding point is assigned 1 if its pixel value is not less than that of the center, and 0 otherwise; this yields an 8-bit binary number representing the texture information of the region:

    LBP(xc, yc) = Σ_{p=0..7} 2^p · s(i_p - i_c)

where (xc, yc) is the central element of the square region with pixel value i_c, i_p is the pixel value of the p-th surrounding point, and s(x) is defined as:

    s(x) = 1 if x ≥ 0, otherwise s(x) = 0
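A vectorized numpy sketch of this 3×3 LBP (illustrative; border pixels are simply skipped):

```python
# Sketch of S3033: encode each interior pixel as an 8-bit LBP number from
# threshold comparisons with its eight neighbours, per the formula above.
import numpy as np

def lbp(gray):
    """gray: 2-D uint8 array; returns the LBP map of the interior pixels."""
    c = gray[1:-1, 1:-1].astype(np.int16)
    neighbours = [gray[0:-2, 0:-2], gray[0:-2, 1:-1], gray[0:-2, 2:],
                  gray[1:-1, 2:], gray[2:, 2:], gray[2:, 1:-1],
                  gray[2:, 0:-2], gray[1:-1, 0:-2]]
    out = np.zeros(c.shape, dtype=np.uint8)
    for p, n in enumerate(neighbours):
        bit = (n.astype(np.int16) >= c).astype(np.uint8)
        out += bit * np.uint8(1 << p)         # add s(i_p - i_c) * 2^p
    return out
```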
s3034, fusing the singular value characteristic and the LBP characteristic vector, and reducing the dimension to obtain a simplified effective model; the method specifically comprises the following steps:
s30341, sorting the features by using a recursive feature elimination method; the recursive feature elimination method is a greedy algorithm, a model is built in each step, a plurality of least important features are removed, and then the step is repeated by using the remaining features until all the features are exhausted; recursive feature elimination roughly orders the importance of features and then can focus on just how many features were selected.
S30342, drawing a learning curve and determining proper characteristic quantity; in order to reduce the number of features as much as possible and make the model compact and effective, a learning curve is drawn next to obtain the performance of the model under different feature numbers;
selecting the first 1, the first 10001, the first 20001 features and so on to test model performance yields the learning curve shown in fig. 15; as can be seen from the figure, when 10001 features are used the model achieves high accuracy with a relatively small number of features, so the first 10001 features are selected for building the SVM model.
S3035, randomly dividing the 10001 features after dimensionality reduction into a training set and a prediction set, training SVM models with the training set, establishing the model parameters through an automatic parameter-tuning function, and training the raised and hypertrophy features separately.
S3036, judging the raised and hypertrophic states with the two trained models respectively, and outputting the raised feature prediction parameter and the hypertrophy feature prediction parameter according to the judgment results; in this embodiment, if the palm is hypertrophic the corresponding weight is Weight3, and if it is raised the weight is Weight4; otherwise the weight is 0.
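An illustrative scikit-learn sketch of S3034-S3036 (the feature counts and the automatic parameter search are placeholders; the patent's own tooling is not specified):

```python
# Sketch of S3034-S3036: rank the fused SVD+LBP feature vector with recursive
# feature elimination, keep the top n features, and train one SVM per
# attribute (raised / hypertrophy).
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC, LinearSVC

def train_attribute_svm(X, y, n_keep=10001):
    """X: (n_samples, n_features) fused features; y: 0/1 labels."""
    ranker = RFE(LinearSVC(dual=False, max_iter=5000),
                 n_features_to_select=n_keep, step=0.1)
    Xr = ranker.fit_transform(X, y)
    X_tr, X_te, y_tr, y_te = train_test_split(Xr, y, test_size=0.2,
                                              random_state=0)
    search = GridSearchCV(SVC(), {"C": [0.1, 1, 10],
                                  "gamma": ["scale", "auto"]})  # auto-tuning
    search.fit(X_tr, y_tr)
    print("held-out accuracy:", search.score(X_te, y_te))
    return ranker, search.best_estimator_
```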
Specifically, in step S40, the extracting of the wrinkle feature prediction parameters of the face according to the eyebrow center ROI segmentation includes:
s401, performing gray level conversion on an ROI (region of interest) of the eyebrow center, and then enhancing the image by utilizing histogram equalization;
in this embodiment, gray-level conversion of the eyebrow-center ROI region makes the subsequent histogram equalization and wrinkle recognition convenient; the gray conversion formula may adopt Gray = (R*30 + G*59 + B*11 + 50)/100;
meanwhile, because the histogram of the face image is a discretization representation image, a certain display rule is not provided, and the images are difficult to process and need to be equalized; the equalization processing of the face image needs to ensure the following two conditions:
s4011, the attribute of the pixel cannot be changed, and the physical structure of the image needs to be kept unchanged;
s4012, the acquired face ROI image is an 8-bit image, and the value range of the pixel mapping function is kept between 0 and 255; the histogram mapping method is:
    s_k = ((L - 1)/N) · Σ_{g=0..k} n_g

where N is the total number of pixels, L is the total number of gray levels, and n_g is the number of pixels with gray level g;
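As a small sketch of S401/S4011-S4012 (illustrative; cv2.equalizeHist implements the cumulative mapping above for 8-bit images):

```python
# Sketch: gray conversion with the stated integer formula, then 8-bit
# histogram equalization of the eyebrow-center ROI.
import cv2
import numpy as np

def preprocess_brow_roi(roi_bgr):
    b, g, r = cv2.split(roi_bgr.astype(np.uint16))
    gray = ((r * 30 + g * 59 + b * 11 + 50) // 100).astype(np.uint8)
    return cv2.equalizeHist(gray)
```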
s402, constructing a convolutional neural network based on banded pooling;
s403, inputting the image processed in the step S401 into a feature extractor of the convolutional neural network in the step S402, realizing pooling by using a long strip-shaped pooling kernel, classifying by using a softmax layer, and outputting a prediction score;
because the convolutional neural network mainly comprises seven layers, the processed face ROI gray-scale image is input into the feature extractor, the long-range dependence of the image is then learned using the strip pooling operation, and finally a softmax layer is used for classification, as follows:
s4031, the input layer directly selects a human face ROI image from the preprocessed image library at random;
s4032, the first network layer is a convolutional layer; each channel output from the previous layer is convolved with 64 filters of size 7×7, so the number of feature-map channels at convolutional layer 1 is 192. The expression of the convolutional layer is:

    h = W^T x

where h is the output of this layer, W^T is the weight, and x is the input;
s4033, the second layer is a belt-shaped pooling layer; since the conventional square pooling layer inevitably merges many irrelevant areas when processing irregular-shaped objects, the invention uses the striped pooling window to help the backbone network capture the remote context.
Suppose x ∈ R^(C×H×W) is the input tensor of the pooling layer, where C denotes the number of channels and H×W the spatial size of the input tensor, each value being a real number.
First, x is fed into two parallel paths, each containing a horizontal or vertical strip pooling layer:

    y^h_{c,i} = (1/W) Σ_{0≤j<W} x_{c,i,j}
    y^v_{c,j} = (1/H) Σ_{0≤i<H} x_{c,i,j}

Then a one-dimensional convolutional layer with kernel size 3 adjusts each position together with its neighboring features, yielding y^h ∈ R^(C×H) and y^v ∈ R^(C×W), the outputs after horizontal and vertical strip pooling respectively.
Finally, y^h and y^v are combined to obtain an output containing more global information:

    y_{c,i,j} = y^h_{c,i} + y^v_{c,j}

where i, j denote the position of a coordinate point.
The pooling-layer output z is then calculated as z = Scale(x, σ(f(y))), where Scale(·,·) denotes element-wise multiplication, σ denotes the sigmoid function, and f denotes a 1×1 convolution;
s4034, the output layer converts the output into a probability distribution using Softmax regression, where the output after the Softmax regression processing is:
    y_i = e^{y_i} / Σ_j e^{y_j}

where y_i on the left denotes the probability value at the i-th position of the output vector, and y_i, y_j on the right denote the values of the i-th and j-th dimensions of the feature vector y before softmax;
s404, outputting wrinkle characteristic prediction parameters according to the prediction scores; in this embodiment, the category score not less than 0.5 represents a wrinkle image, and the category score less than 0.5 represents a non-wrinkle image;
wherein, if the wrinkle feature prediction parameter indicates a wrinkle, the wrinkle weight is Weight5, otherwise it is 0.
Further, in step S50, extracting a twill feature prediction parameter of the earlobe according to the earlobe ROI segmentation, specifically including:
s501, segmenting the earlobe ROI region and extracting the image of the segmented region where the twill is located;
s502, performing contrast enhancement and gray level processing on the segmented ear lobe image to obtain a preprocessed ear lobe image;
s5021, first, high contrast enhancement is applied to the earlobe ROI region using an image contrast enhancement algorithm based on log transformation; the log transform stretches the low-gray-value part of an image to show more of its details while compressing the high-gray-value part and reducing its details, thereby emphasizing the low-gray-value part of the image. The transform is:

    s = c · log_{v+1}(1 + v·r),  r ∈ [0, 1]

where c denotes the original earlobe ROI image, v+1 is the base of the logarithm, and r is a hyper-parameter;
s5022, converting the image with the enhanced contrast into a gray-scale image;
s503, scoring the ear-lobe twill index through a multi-scale edge detection algorithm, and finally outputting a scoring result;
s5031, multi-scale Gaussian filtering;
the Gaussian filtering denoising performs a weighted average over the image: the value of each pixel is obtained by weighted averaging of its own value and the other pixel values in its neighborhood. The two-dimensional Gaussian function is:

    G(x, y) = (1 / (2πσ²)) · e^{-(x² + y²) / (2σ²)}

where (x, y) are the point coordinates and σ is the standard deviation;
s50311, two Gaussian filter templates of different sizes are first obtained, the selected template sizes being 3×3 and 5×5 respectively;
s5032, the standard deviation σ of the 3×3 template is chosen as 0.8 and that of the 5×5 template as 1.4, and the two filter templates are calculated accordingly (in the usual integer approximation):

    (1/16) [ 1 2 1 ; 2 4 2 ; 1 2 1 ]        (1/159) [ 2 4 5 4 2 ; 4 9 12 9 4 ; 5 12 15 12 5 ; 4 9 12 9 4 ; 2 4 5 4 2 ]
S5033, performing convolution operations on the preprocessed image with the two filters to obtain the Gaussian-filtered images, namely a large-scale and a small-scale Gaussian-filtered image;
s5034, detecting the multi-scale filtered image by using a Prewitt operator;
s50341, the templates of the Prewitt operator in the x-axis and y-axis directions are:

    Gx = [ -1 0 1 ; -1 0 1 ; -1 0 1 ]        Gy = [ -1 -1 -1 ; 0 0 0 ; 1 1 1 ]
s50342, performing neighborhood convolution on the image with the two direction templates in image space, one template detecting horizontal edges and the other vertical edges; the absolute values of the two directional edge responses are taken and compressed to the [0, 255] interval, i.e. G(x, y) = |Gx| + |Gy| is the image after Prewitt edge detection, where Gx and Gy denote the Prewitt filtering results in the x-axis and y-axis directions respectively;
s50343, performing Prewitt operator edge detection on the Gaussian filtered images of the two scales respectively to obtain a double-scale edge image;
s50344, combining the dual-scale edge images obtained in the previous step and calculating the twill index;
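An OpenCV/numpy sketch of S503 (the final normalization of the twill index is a placeholder; the patent gives only the 0.5 decision threshold):

```python
# Sketch of S503: blur the preprocessed earlobe gray image at two scales
# (3x3 sigma=0.8, 5x5 sigma=1.4), apply Prewitt filtering in x and y,
# combine G = |Gx| + |Gy|, and merge the two edge maps into a score.
import cv2
import numpy as np

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float32)
PREWITT_Y = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], np.float32)

def prewitt_edges(gray):
    g = gray.astype(np.float32)
    gx = cv2.filter2D(g, -1, PREWITT_X)
    gy = cv2.filter2D(g, -1, PREWITT_Y)
    return np.clip(np.abs(gx) + np.abs(gy), 0, 255)   # compress to [0, 255]

def twill_index(gray):
    small = cv2.GaussianBlur(gray, (3, 3), 0.8)
    large = cv2.GaussianBlur(gray, (5, 5), 1.4)
    edges = 0.5 * (prewitt_edges(small) + prewitt_edges(large))
    return float(edges.mean() / 255.0)    # illustrative score in [0, 1]
```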
s504, outputting the earlobe twill feature prediction parameter according to the scoring result, specifically: a twill index greater than 0.5 is defined as the earlobe having a twill, and less than 0.5 as having no twill;
in this embodiment, if the earlobe has a twill, the Weight corresponding to the earlobe twill characteristic prediction parameter is Weight6, otherwise, it is 0.
According to the stroke feature extraction method based on traditional Chinese medicine inspection provided by this embodiment, six features are extracted according to the relevant theory of stroke inspection: whether the palm lifeline crosses the palm midline, whether the palm is hypertrophic, raised and ruddy, eyebrow-center wrinkles and earlobe twill; a feasible segmentation and analysis scheme is provided for the ROI of each feature, so the method is simple and convenient to implement.
Example two
As shown in fig. 16, on the basis of the first embodiment, the present invention provides a stroke intelligent risk prediction method based on traditional Chinese medicine inspection, which includes:
a1, extracting the characteristics of the impact factors corresponding to the risk of cerebral apoplexy;
a2, performing weighting according to the feature extraction result;
a3, carrying out weight calculation on the influence factors to obtain a stroke risk evaluation result of the target to be predicted;
in step a1, the process of extracting features of the impact factors corresponding to the risk of stroke is the stroke feature extraction method described in the first embodiment.
Specifically, the step a2 is to perform weighting according to the feature extraction result; the method specifically comprises the following steps:
a21, establishing a weight factor corresponding to each characteristic prediction parameter;
a22, setting a corresponding weight coefficient for each feature extraction result;
wherein, the Weight factors respectively corresponding to the lifeline midline crossing feature prediction parameter, the ruddy feature prediction parameter, the bump feature prediction parameter, the hypertrophic feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter are midline crossing Weight1, ruddy Weight2, hypertrophic Weight3, bump Weight4, wrinkle Weight5 and twill Weight 6;
the Weight coefficients corresponding to the cross-midline Weight1, the ruddy Weight2, the fat Weight3, the ridge Weight4, the wrinkle Weight5 and the twill Weight6 are as follows: 0.45-0.55, 0.15, 0.35, 0.75.
In this embodiment, the through-center Weight1, the ruddy Weight2, the fat Weight3, the ridge Weight4, the wrinkle Weight5, and the twill Weight6 are expressed as follows:
Figure BDA0003232411710000161
the Weight value corresponding to the through-center line Weight1 is controlled to be between 0.45 and 0.55, if no intersection exists, the Weight value is 0:
Figure BDA0003232411710000162
Figure BDA0003232411710000163
Figure BDA0003232411710000164
Figure BDA0003232411710000171
Figure BDA0003232411710000172
further, in the step a3, performing weight calculation on the influence factors to obtain a stroke risk evaluation result of the target to be predicted; the method specifically comprises the following steps:
a311, receiving weights corresponding to a lifeline crossing centerline feature prediction parameter, a ruddy feature prediction parameter, a bump feature prediction parameter, a hypertrophy feature prediction parameter, a wrinkle feature prediction parameter and a twill feature prediction parameter respectively;
a312, calculating a combined probability P1; the method specifically comprises the following steps:
when the palm is hypertrophic, raised and ruddy at the same time, the risk probability of stroke is 0.9, otherwise it is 0, as expressed by formula (1):
P1 = 0.9 if the palm is hypertrophic, raised and ruddy; otherwise P1 = 0, formula (1);
a313, calculating a combined probability P2, specifically:
P2 = Weight1 + Weight2 + Weight3 + Weight4 + Weight5 + Weight6; if P2 ≥ 1, then P2 = 1, formula (2);
a314, returning the final probability; wherein the final probability P is the maximum value of P1 and P2:
P = max(P1, P2), formula (3).
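Reading formulas (1) to (3) directly as code gives the following minimal sketch, under the same placeholder key names and assumptions as the weighting sketch above:

```python
def stroke_risk(weights: dict) -> float:
    """Combine the six feature weights into a final risk probability P."""
    hypertrophic = weights["weight3_hypertrophic"] > 0
    raised = weights["weight4_bump"] > 0
    ruddy = weights["weight2_ruddy"] > 0

    p1 = 0.9 if (hypertrophic and raised and ruddy) else 0.0  # formula (1)
    p2 = min(sum(weights.values()), 1.0)                      # formula (2)
    return max(p1, p2)                                        # formula (3)
```

Note that formula (2) clips the weighted sum at 1 so that P remains a valid probability, while formula (3) lets the strong three-feature combination of formula (1) dominate weaker accumulated evidence.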
According to the stroke intelligent risk prediction method based on traditional Chinese medicine inspection provided by this embodiment, six features of the palm, the face and the ear are selected and assigned weights, so that a user can discover stroke risk early, improve living habits, and go to a hospital regularly for examination; this is beneficial to meeting the health-preserving needs of the public and improving the health level of residents.
Example three
As shown in fig. 17, the present invention further provides a stroke intelligent risk prediction system based on traditional Chinese medicine inspection, including:
the feature extraction unit 10 is configured to perform feature extraction on an influence factor corresponding to a stroke risk;
a weighting unit 20, configured to perform weighting according to the feature extraction result;
and the prediction unit 30 is used for performing weight calculation on the influence factors to obtain a stroke risk evaluation result of the target to be predicted.
Specifically, the weighting unit 20 includes:
an establishing unit 201 for establishing a weighting factor corresponding to each feature prediction parameter;
a setting unit 202, configured to set a corresponding weight coefficient for each feature extraction result;
wherein the weight factors respectively corresponding to the lifeline cross-midline feature prediction parameter, the ruddy feature prediction parameter, the hypertrophy feature prediction parameter, the bump feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter are the cross-midline Weight1, the ruddy Weight2, the hypertrophic Weight3, the bump Weight4, the wrinkle Weight5 and the twill Weight6;
the weight coefficients corresponding to the cross-midline Weight1, the ruddy Weight2, the hypertrophic Weight3, the bump Weight4, the wrinkle Weight5 and the twill Weight6 are as follows: 0.45-0.55, 0.15, 0.35, 0.75.
Further, the prediction unit 30 includes:
a receiving unit 301, configured to receive weights respectively corresponding to the lifeline cross-midline feature prediction parameter, the ruddy feature prediction parameter, the bump feature prediction parameter, the hypertrophy feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter;
the calculating unit 302 is configured to calculate the combined probability P1 and the combined probability P2, and the specific calculation process is:
P1 = 0.9 if the palm is hypertrophic, raised and ruddy; otherwise P1 = 0, formula (1);
P2 = Weight1 + Weight2 + Weight3 + Weight4 + Weight5 + Weight6; if P2 ≥ 1, then P2 = 1, formula (2);
a prediction result output unit 303, configured to return the final probability P;
wherein, the final probability P is the maximum value of P1 and P2:
P = max(P1, P2), formula (3).
Fig. 18 is a schematic flow chart of an intelligent stroke risk prediction method based on traditional Chinese medicine inspection in an embodiment of the present invention; as shown in fig. 18, the stroke probability can be obtained through the corresponding feature extraction and weight calculation on the acquired palm image, face image and ear image to be detected, which helps a user discover stroke risk early, improve living habits, meet the health-preserving needs of the public and improve the health level of residents.
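Putting the three units of the system together, a hedged end-to-end sketch could look as follows; detect_features stands in for the feature extraction unit 10 of embodiment one and is assumed to return the boolean feature dictionary used in the earlier sketches:

```python
def predict_stroke_risk(palm_img, face_img, ear_img, detect_features) -> float:
    """End-to-end pipeline of fig. 18: images -> features -> weights -> P."""
    features = detect_features(palm_img, face_img, ear_img)  # unit 10
    weights = assign_weights(features)                       # unit 20
    return stroke_risk(weights)                              # unit 30
```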
In addition, the invention explores the analysis of features such as lines, color and bulges that are common to various parts of the human body, and provides a practical, high-accuracy scheme with strong transferability, which can easily be migrated to other parts of the body, thereby promoting the application and development of traditional Chinese medicine inspection diagnosis on all parts of the body.
In the description of the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral part; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or they may be connected internally or in any other suitable relationship, unless expressly stated otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It can be understood that the relevant features of the method, apparatus and system described above may refer to one another. In addition, "first", "second", and the like in the above embodiments are used to distinguish the embodiments and do not represent the merits of the embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The algorithms and displays presented herein are not inherently related to any particular computer, virtual machine, or other apparatus. Various general purpose systems may also be used with the teachings herein. The required structure for constructing such a device will be apparent from the description above. In addition, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the embodiments provided in the present application, it should be understood that the disclosed system and method may be implemented in other ways. The above-described system embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and other divisions may be realized in practice, and for example, a plurality of modules or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A stroke feature extraction method based on traditional Chinese medicine inspection, characterized by comprising the following steps:
s10, acquiring image information of the palm, the face and the ear to be detected;
s20, performing hand ROI segmentation according to the hand key point model, performing eyebrow ROI segmentation according to the face key point model, and performing earlobe ROI segmentation according to the ear key point model;
s30, extracting a lifeline centerline-crossing feature prediction parameter, a rosy feature prediction parameter, a bump feature prediction parameter and a hypertrophy feature prediction parameter of the hand according to the ROI segmentation;
s40, extracting facial wrinkle feature prediction parameters according to the eyebrow center ROI segmentation;
s50, extracting a twill feature prediction parameter of the earlobe according to the ear lobe ROI segmentation;
and S60, summarizing and outputting the lifeline crossing centerline feature prediction parameter, the ruddy feature prediction parameter, the uplift feature prediction parameter, the thickening feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter.
2. The stroke feature extraction method based on traditional Chinese medicine inspection diagnosis according to claim 1, characterized in that: in step S40, extracting wrinkle feature prediction parameters of the face according to the eyebrow center ROI segmentation, which specifically includes:
s401, performing gray level conversion on an ROI (region of interest) of the eyebrow center, and then enhancing the image by utilizing histogram equalization;
s402, constructing a convolutional neural network based on banded pooling;
s403, inputting the image processed in the step S401 into a feature extractor of the convolutional neural network in the step S402, realizing pooling by using a long strip-shaped pooling kernel, classifying by using a softmax layer, and outputting a prediction score;
and S404, outputting wrinkle characteristic prediction parameters according to the prediction scores.
3. The stroke feature extraction method based on traditional Chinese medicine inspection diagnosis according to claim 1, characterized in that: in step S50, extracting a twill feature prediction parameter of the earlobe according to the earlobe ROI segmentation, which specifically includes:
s501, segmenting an ear lobe ROI (region of interest), and extracting an image of a segmented region where twills are located;
s502, performing contrast enhancement and gray level processing on the segmented ear lobe image to obtain a preprocessed ear lobe image;
s503, scoring the ear-lobe twill index through a multi-scale edge detection algorithm, and finally outputting a scoring result;
and S504, outputting prediction parameters of the earlobe twill characteristics according to the grading result.
4. The stroke feature extraction method based on traditional Chinese medicine inspection diagnosis according to claim 1, characterized in that: in step S30, a centerline-crossing feature prediction parameter, a ruddy feature prediction parameter, a bump feature prediction parameter and a hypertrophy feature prediction parameter of the hand are extracted from the hand ROI segmentation; the method specifically comprises the following steps:
s301, detecting a lifeline centerline ROI area corresponding to the lifeline centerline passing feature, judging whether the lifeline crosses the centerline or not by using an edge detection algorithm and a curve fitting algorithm, and outputting a lifeline centerline passing feature prediction parameter according to a judgment result;
s302, carrying out ruddiness detection on a ruddiness ROI corresponding to the ruddiness characteristics, judging whether the regions are ruddiness or not by an HSV-based color gamut space, and outputting a ruddiness characteristic prediction parameter according to a judgment result;
and S303, performing hypertrophy and bump detection on the hypertrophy ROI corresponding to the hypertrophy feature and the bump ROI corresponding to the bump feature, respectively judging whether the palm is hypertrophic and whether it is raised through an SVM (support vector machine) model, and outputting a bump feature prediction parameter and a hypertrophy feature prediction parameter according to the judgment results.
5. An intelligent stroke risk prediction method based on traditional Chinese medicine inspection, characterized by comprising the following steps:
a1, extracting the features of the influence factors corresponding to the risk of cerebral apoplexy;
a2, performing weighting according to the feature extraction result;
a3, carrying out weight calculation on the influence factors to obtain a stroke risk evaluation result of the target to be predicted;
wherein, in the step a1, the process of extracting features of the influence factors corresponding to the stroke risk is the stroke feature extraction method according to any one of claims 1 to 4.
6. The intelligent stroke risk prediction method based on traditional Chinese medicine inspection diagnosis as claimed in claim 5, wherein: the step A2, performing weighting according to the feature extraction result; the method specifically comprises the following steps:
a21, establishing a weight factor corresponding to each characteristic prediction parameter;
a22, setting a corresponding weight coefficient for each feature extraction result;
wherein the weight factors respectively corresponding to the lifeline cross-midline feature prediction parameter, the ruddy feature prediction parameter, the hypertrophy feature prediction parameter, the bump feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter are the cross-midline Weight1, the ruddy Weight2, the hypertrophic Weight3, the bump Weight4, the wrinkle Weight5 and the twill Weight6;
the weight coefficients corresponding to the cross-midline Weight1, the ruddy Weight2, the hypertrophic Weight3, the bump Weight4, the wrinkle Weight5 and the twill Weight6 are as follows: 0.45-0.55, 0.15, 0.35, 0.75.
7. The intelligent stroke risk prediction method based on traditional Chinese medicine inspection diagnosis as claimed in claim 6, wherein: the step A3, carrying out weight calculation on the influence factors to obtain a stroke risk evaluation result of the target to be predicted; the method specifically comprises the following steps:
a311, receiving weights corresponding to a lifeline crossing centerline feature prediction parameter, a ruddy feature prediction parameter, a bump feature prediction parameter, a hypertrophy feature prediction parameter, a wrinkle feature prediction parameter and a twill feature prediction parameter respectively;
a312, calculating a combined probability P1; the method specifically comprises the following steps:
when the palm is hypertrophic, raised and ruddy, the risk probability of stroke is 0.9, otherwise it is 0, as expressed by formula (1):
P1 = 0.9 if the palm is hypertrophic, raised and ruddy; otherwise P1 = 0, formula (1);
a313, calculating a combined probability P2, specifically:
P2 = Weight1 + Weight2 + Weight3 + Weight4 + Weight5 + Weight6; if P2 ≥ 1, then P2 = 1, formula (2);
a314, returning the final probability; wherein the final probability P is the maximum value of P1 and P2:
P = max(P1, P2), formula (3).
8. An intelligent stroke risk prediction system based on traditional Chinese medicine inspection, characterized by comprising:
the characteristic extraction unit (10) is used for carrying out characteristic extraction on the influence factors corresponding to the stroke risk;
a weighting unit (20) for performing weighting according to the feature extraction result;
and the prediction unit (30) is used for carrying out weight calculation on the influence factors to obtain a stroke risk evaluation result of the target to be predicted.
9. The intelligent stroke risk prediction system based on traditional Chinese medicine inspection diagnosis according to claim 8, wherein: the weighting unit (20) comprises:
an establishing unit (201) for establishing a weighting factor corresponding to each feature prediction parameter;
a setting unit (202) for setting a corresponding weight coefficient for each feature extraction result;
wherein the weight factors respectively corresponding to the lifeline cross-midline feature prediction parameter, the ruddy feature prediction parameter, the hypertrophy feature prediction parameter, the bump feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter are the cross-midline Weight1, the ruddy Weight2, the hypertrophic Weight3, the bump Weight4, the wrinkle Weight5 and the twill Weight6;
the weight coefficients corresponding to the cross-midline Weight1, the ruddy Weight2, the hypertrophic Weight3, the bump Weight4, the wrinkle Weight5 and the twill Weight6 are as follows: 0.45-0.55, 0.15, 0.35, 0.75.
10. The stroke intelligent risk prediction system based on traditional Chinese medicine inspection diagnosis of claim 9, which is characterized in that: the prediction unit (30) comprises:
a receiving unit (301) for receiving weights respectively corresponding to the lifeline cross-midline feature prediction parameter, the ruddy feature prediction parameter, the bump feature prediction parameter, the hypertrophy feature prediction parameter, the wrinkle feature prediction parameter and the twill feature prediction parameter;
the calculating unit (302) is used for calculating the combined probability P1 and the combined probability P2, and the specific calculation process is as follows:
P1 = 0.9 if the palm is hypertrophic, raised and ruddy; otherwise P1 = 0, formula (1);
P2 = Weight1 + Weight2 + Weight3 + Weight4 + Weight5 + Weight6; if P2 ≥ 1, then P2 = 1, formula (2);
a prediction result output unit (303) for returning a final probability P;
wherein, the final probability P is the maximum value of P1 and P2:
P = max(P1, P2), formula (3).
CN202110991019.7A 2021-08-26 2021-08-26 Cerebral apoplexy feature extraction and intelligent risk prediction method and system based on traditional Chinese medicine inspection Active CN113658702B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110991019.7A CN113658702B (en) 2021-08-26 2021-08-26 Cerebral apoplexy feature extraction and intelligent risk prediction method and system based on traditional Chinese medicine inspection

Publications (2)

Publication Number Publication Date
CN113658702A true CN113658702A (en) 2021-11-16
CN113658702B CN113658702B (en) 2023-09-15

Family

ID=78482198

Country Status (1)

Country Link
CN (1) CN113658702B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107273652A (en) * 2017-03-10 2017-10-20 马立伟 Intelligent risk of stroke monitoring system
EP3758026A1 (en) * 2019-06-28 2020-12-30 Hill-Rom Services, Inc. Patient risk assessment based on data from multiple sources in a healthcare facility
CN110349140A (en) * 2019-07-04 2019-10-18 五邑大学 A kind of traditional Chinese ear examines image processing method and device
CN111430029A (en) * 2020-03-24 2020-07-17 浙江达美生物技术有限公司 Multi-dimensional stroke prevention screening method based on artificial intelligence
CN111950492A (en) * 2020-08-19 2020-11-17 山西慧虎健康科技有限公司 Hypertension risk prediction method based on traditional Chinese medicine theory and palm multi-feature extraction
AU2020103901A4 (en) * 2020-12-04 2021-02-11 Chongqing Normal University Image Semantic Segmentation Method Based on Deep Full Convolutional Network and Conditional Random Field
CN112489793A (en) * 2020-12-16 2021-03-12 郑州航空工业管理学院 Early warning system for stroke risk patient
CN112750531A (en) * 2021-01-21 2021-05-04 广东工业大学 Automatic inspection system, method, equipment and medium for traditional Chinese medicine
CN113140309A (en) * 2021-04-14 2021-07-20 五邑大学 Traditional Chinese medicine complexion diagnosis method and device

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Liu Limei, et al.: "Clinical investigation on the diagnostic patterns of ear acupoints in stroke", Clinical Journal of Chinese Medicine, vol. 25, no. 08, pages 679-680 *
Gu Nan: "A brief analysis of Professor Gao Li's distinctive inspection-diagnosis experience in the acute stage of cerebral infarction", Inner Mongolia Journal of Traditional Chinese Medicine, vol. 33, no. 34, pages 44-45 *
Wu Juhua, et al.: "Research on stroke risk prediction models based on neural networks", Data Analysis and Knowledge Discovery, vol. 03, no. 12, pages 70-75 *
Xu Xiangdong, et al.: "Application of tongue diagnosis in the physical examination of stroke patients", Beijing Journal of Traditional Chinese Medicine, vol. 38, no. 12, pages 1208-1210 *
Mao Hongchao, et al.: "Face segmentation for traditional Chinese medicine inspection diagnosis and its algorithmic implementation", Application Research of Computers, vol. 24, no. 09, pages 295-297 *
Wang Qida, et al.: "Auxiliary diagnosis of stroke in traditional Chinese medicine via multi-branch deep feature fusion", Journal of Image and Graphics, vol. 27, no. 03, pages 935-947 *
Zhao Chenqi, et al.: "Stroke detection algorithm combining vision Transformer and multi-feature fusion", Journal of Image and Graphics, vol. 27, no. 03, pages 923-924 *

Similar Documents

Publication Publication Date Title
CN111524137B (en) Cell identification counting method and device based on image identification and computer equipment
CN112132166B (en) Intelligent analysis method, system and device for digital cell pathology image
CN108734108B (en) Crack tongue identification method based on SSD network
CN109948566A (en) A kind of anti-fraud detection method of double-current face based on weight fusion and feature selecting
CN112215807A (en) Cell image automatic classification method and system based on deep learning
CN111915572A (en) Self-adaptive gear pitting quantitative detection system and method based on deep learning
CN112348059A (en) Deep learning-based method and system for classifying multiple dyeing pathological images
JP4383352B2 (en) Histological evaluation of nuclear polymorphism
CN114998651A (en) Skin lesion image classification and identification method, system and medium based on transfer learning
CN113129390B (en) Color blindness image re-coloring method and system based on joint significance
CN111062936B (en) Quantitative index evaluation method for facial deformation diagnosis and treatment effect
CN117274278B (en) Retina image focus part segmentation method and system based on simulated receptive field
WO2022037029A1 (en) Hypertension risk prediction method based on traditional chinese medicine theory and palm multi-feature extraction
CN114119551A (en) Quantitative analysis method for human face image quality
CN116563647B (en) Age-related maculopathy image classification method and device
KR102430946B1 (en) System and method for diagnosing small bowel preparation scale
CN117036288A (en) Tumor subtype diagnosis method for full-slice pathological image
Ding et al. Classification of chromosome karyotype based on faster-rcnn with the segmatation and enhancement preprocessing model
CN113658702B (en) Cerebral apoplexy feature extraction and intelligent risk prediction method and system based on traditional Chinese medicine inspection
Jardeleza et al. Detection of Common Types of Eczema Using Gray Level Co-occurrence Matrix and Support Vector Machine
CN113989588A (en) Self-learning-based intelligent evaluation system and method for pentagonal drawing test
CN113706515A (en) Tongue image abnormality determination method, tongue image abnormality determination device, computer device, and storage medium
CN113139936B (en) Image segmentation processing method and device
Suganthi et al. A novel feature extraction method for identifying quality seed selection
Hao et al. Automatic detection of breast nodule in the ultrasound images using CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant