CN113362924A - Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system - Google Patents

Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system Download PDF

Info

Publication number
CN113362924A
CN113362924A CN202110627459.4A CN202110627459A CN113362924A CN 113362924 A CN113362924 A CN 113362924A CN 202110627459 A CN202110627459 A CN 202110627459A CN 113362924 A CN113362924 A CN 113362924A
Authority
CN
China
Prior art keywords
facial
facial paralysis
degree
paralysis patient
patient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110627459.4A
Other languages
Chinese (zh)
Inventor
刘雯
侯晨辉
胡倩
曹婧
姜锐
郭烘宇
陈莹莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Railway Vocational and Technical College
Original Assignee
Zhengzhou Railway Vocational and Technical College
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Railway Vocational and Technical College filed Critical Zhengzhou Railway Vocational and Technical College
Priority to CN202110627459.4A priority Critical patent/CN113362924A/en
Publication of CN113362924A publication Critical patent/CN113362924A/en
Withdrawn legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a medical big data-based facial paralysis rehabilitation task auxiliary generation method and system, and belongs to the technical field of facial paralysis rehabilitation training. The method comprises the following steps: acquiring a plurality of frames of facial images of a facial paralysis patient during a set action, and identifying coordinates of each feature point in each frame of facial image; judging the abnormal degree of each region of the face corresponding to each frame of facial image of the facial paralysis patient and the difference degree of the symmetrical region; calculating the abnormal degree of each expression muscle of the face of the facial paralysis patient; and generating a corresponding rehabilitation training task for the facial paralysis patient according to the abnormal degree of each expression muscle of the face of the facial paralysis patient. The invention can automatically and timely make a proper rehabilitation training task for the facial paralysis patient, improve the rehabilitation training effect of the facial paralysis patient and solve the problem that the proper rehabilitation training task cannot be made for the facial paralysis patient accurately and timely in the prior art.

Description

Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system
Technical Field
The invention relates to the technical field of facial paralysis rehabilitation training, in particular to a method and a system for assisting in generating facial paralysis rehabilitation tasks based on medical big data.
Background
Facial paralysis is a common and frequent disease, and the main symptoms are that facial expression muscles can not perform normal functional movement, and patients often have difficulty in normally completing basic facial movements such as eye closing, eyebrow lifting, gill bulging, nose creasing or mouth opening.
When the facial paralysis patient carries out rehabilitation training at present, the action and the task volume of training are often formulated by the doctor according to the experience, and not only efficiency is lower, has great subjective factor error, and along with the recovered effect of each expression muscle of patient is different, corresponding change can not in time be made to the training task moreover, leads to the rehabilitation training effect not good.
Disclosure of Invention
The invention aims to provide a medical big data-based facial paralysis rehabilitation task auxiliary generation method and system, which are used for solving the problem of poor rehabilitation training effect caused by the fact that a proper rehabilitation training task cannot be automatically and timely formulated for facial paralysis patients.
In order to solve the problems, the technical scheme of the medical big data-based facial paralysis rehabilitation task auxiliary generation method comprises the following steps of:
acquiring a plurality of frames of facial images of a facial paralysis patient during a set action, and identifying coordinates of each feature point in each frame of facial image;
judging the abnormal degree of each region of the face corresponding to each frame of facial image of the facial paralysis patient and the difference degree of the symmetrical region according to the coordinates of each feature point in each frame of facial image;
calculating the abnormal degree of each expression muscle of the facial paralysis patient according to the abnormal degree of each region of the facial corresponding to each frame of facial image of the facial paralysis patient and the difference degree of the symmetrical regions;
and generating a corresponding rehabilitation training task for the facial paralysis patient according to the abnormal degree of each expression muscle of the face of the facial paralysis patient.
The invention also provides a technical scheme of the medical big data-based facial paralysis rehabilitation task auxiliary generation system, which comprises a memory and a processor, wherein the processor executes a computer program stored in the memory so as to realize the medical big data-based facial paralysis rehabilitation task auxiliary generation method.
The beneficial effects of the generation method and the generation system are as follows: obtaining the abnormal degree of each region of the face corresponding to each frame of facial image of the facial paralysis patient and the difference degree of the symmetrical region by obtaining a plurality of frames of facial images of the facial paralysis patient when the facial paralysis patient does the set action, calculating the abnormal degree of each expression muscle of the facial paralysis patient on the basis, and generating a corresponding rehabilitation training task for the facial paralysis patient by taking the abnormal degree of each expression muscle as reference; the invention can automatically and timely make a proper rehabilitation training task for the facial paralysis patient, improve the rehabilitation training effect of the facial paralysis patient and solve the problem that the proper rehabilitation training task cannot be made for the facial paralysis patient accurately and timely by a doctor in the prior art.
Further, the method for calculating the abnormal degree of each expression muscle of the facial paralysis patient comprises the following steps:
calculating the pathological change degree of each region of the face of the facial paralysis patient according to the abnormal degree of each region of the face of the facial paralysis patient corresponding to each frame of facial image and the difference degree of the symmetrical regions;
and calculating the abnormal degree of each expression muscle of the facial paralysis patient according to the pathological change degree of each region of the facial paralysis patient.
Further, the lesion degree of each region of the face of the facial paralysis patient is calculated by the following formula:
Figure BDA0003102346210000021
whereinMU (e) the degree of lesion in the e-th area of the face of the facial paralysis patient, S the total number of frames of the facial image of the facial paralysis patient, uesThe degree of abnormality of the e-th region in the facial image of the patient of the s-th frame facial paralysis, vesThe difference degree between the e-th area in the facial image of the frame S facial paralysis patient and the symmetric area is max (ve), the maximum value of the difference degree between the e-th area in the facial image of the frame S facial paralysis patient and the symmetric area is min (ve), and the minimum value of the difference degree between the e-th area in the facial image of the frame S facial paralysis patient and the symmetric area is min (ve).
Further, the abnormal degree of each expression muscle of the facial paralysis patient is calculated by the following formula:
Figure BDA0003102346210000031
wherein M isjIndicating the degree of abnormality of the jth expression muscle, FiIndicates the degree of lesion of the ith region, gi,jAnd N is the total number of the divided human face areas.
Further, the method for generating the corresponding rehabilitation training task for the facial paralysis patient comprises the following steps:
acquiring the training degree of each set training action on each facial expression muscle;
obtaining the required degree of each set training action in the illness rehabilitation training corresponding to the facial paralysis patient according to the abnormal degree of each expression muscle of the facial paralysis patient and the training degree of each set training action on each facial expression muscle;
and obtaining the rehabilitation training task amount corresponding to each set training action according to the basic task amount.
Further, the required degree of each set training action in the disease rehabilitation training corresponding to the facial paralysis patient is calculated by adopting the following formula:
Figure BDA0003102346210000032
wherein, TkSetting the required degree of training action for the kth class, L being the total number of expression muscles, MlIndicates the pathological change degree of the first expression muscle of facial paralysis patients, Pk,lAnd setting the training effect of the training action on the ith expression muscle for the kth class.
Further, the rehabilitation training task amount corresponding to each set training action is calculated by adopting the following formula:
Figure BDA0003102346210000033
wherein, JkSetting rehabilitation training task quantity, T, corresponding to training action for kth classkIndicates the degree of required training action, U, for the kth class settingkAnd setting a basic task amount corresponding to the training action for the kth class, wherein alpha is a proportional adjustment coefficient.
Further, the method for judging the abnormal degree of each region of the face of the facial paralysis patient and the difference degree of the symmetrical regions comprises the following steps:
constructing a TCN network;
and taking the coordinate sequence of the characteristic points of each frame of facial image of the facial paralysis patient as the input of the TCN network, and taking the output result of the TCN as the abnormal degree of each region of the facial paralysis patient and the difference degree of each region with the symmetrical region.
Further, the method for identifying the coordinates of each feature point in each frame face image includes:
constructing a characteristic point regression neural network;
and taking the multi-frame facial images of the facial paralysis patient as the input of the feature point regression neural network when the facial paralysis patient performs the set action, and taking the output result of the feature point regression neural network as the coordinates of each feature point in the frame facial images.
Drawings
FIG. 1 is a flow chart of the medical big data-based facial paralysis rehabilitation task auxiliary generation method of the invention;
FIG. 2 is a schematic diagram of a corresponding structural principle of the medical big data-based auxiliary method for generating the facial paralysis rehabilitation task according to the present invention;
FIG. 3 is a schematic diagram of facial feature points of the present invention;
FIG. 4 is a schematic diagram of the distribution of various expression muscles of the face according to the present invention;
fig. 5 is a schematic diagram showing the association of each region of the face with each expression muscle according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention.
Embodiment of medical big data-based facial paralysis rehabilitation task auxiliary generation method
As shown in fig. 1 and fig. 2, the method for assisting in generating a facial paralysis rehabilitation task based on medical big data of the present embodiment mainly includes three parts, the first part is facial feature point recognition, and can be implemented by a facial feature point recognition module; the second part is the judgment of the abnormality of the expression muscles, and can be realized by an expression muscle abnormality judgment module; the third part is rehabilitation task generation which can be realized by a rehabilitation task generation module; specifically, the medical big data-based facial paralysis rehabilitation task auxiliary generation method comprises the following steps:
1) acquiring a plurality of frames of facial images of a facial paralysis patient during a set action, and identifying coordinates of each feature point in each frame of facial image
In this embodiment, the basis for generating the corresponding rehabilitation training task for the facial paralysis patient is the facial image of the facial paralysis patient during a series of setting actions, specifically, the facial images of consecutive frames of the facial paralysis patient during the series of setting actions may be collected by the camera, and as a matter of course, as another embodiment, the facial images may also be discontinuous.
After acquiring a plurality of frames of facial images of a patient during a setting operation, inputting the facial images into the constructed feature point regression DNN network to obtain the coordinates of each feature point corresponding to each facial image.
The feature point regression DNN network of this embodiment performs regression localization on human face feature points in an image in an Encoder-Decoder-heatmap structure. The input of the DNN network is an RGB image, the network firstly performs coding operation, namely, in the process of downsampling the image by using convolution and pooling operation, spatial domain features in the image are extracted, namely, the output of an Encoder is an extracted feature vector, then the feature vector is used as the input of a Decoder, and the Encoder recovers the feature vector into a corresponding heatmap image by using operation such as deconvolution.
The data set used in the DNN network training process is a human face image acquired before a camera, wherein the human face image comprises a normal face image and a facial paralysis patient face image, the proportion is 3:7, a label corresponding to the image is a heatmap image, the types of feature points are 68 in total, and the label making process is to use Gaussian to check the image to perform fuzzy operation after marking the feature points in the corresponding label image to obtain the heatmap label image; in the present embodiment, the change of the face surface is mainly detected, so 68 types of feature points on the eyebrows, the nose, and the mouth are required to be detected, and the style and the number of the feature points are shown in fig. 3. The Loss function (Loss function) adopted by the DNN network in this embodiment is a mean square error Loss function.
After the trained feature point regression DNN is obtained through the process, the obtained multiframe images of the facial paralysis patient during the set action are inferred through the trained DNN, so that a corresponding heatmap result image can be obtained, and after the acquired multiframe images are processed in a softargmax mode and the like, specific coordinate values of the feature points in each frame of facial image can be obtained.
In this embodiment, the identification of the coordinates of each feature point in each frame of facial image of the facial paralysis patient is realized through the constructed feature point regression DNN network, and as another implementation, other existing facial paralysis feature point detection models, such as the facial paralysis key point detection model disclosed in the patent document with the publication number CN111553250B, may be used to identify each feature point in each frame of facial image of the facial paralysis patient.
2) Judging the abnormal degree of each region of the face corresponding to each frame of facial image of the facial paralysis patient and the difference degree of the symmetrical region according to the coordinates of each feature point in each frame of facial image;
in this embodiment, a coordinate sequence of feature points in each frame image is inferred by a TCN (time convolution network), so as to obtain an abnormal degree of each region of the face and a difference degree between the abnormal degree and a symmetric region. In the training process of the TCN network, the data used in the training data set of the TCN is the coordinate sequence of the feature point in each frame of image obtained by reasoning the feature point regression network on the multiple frames of images, so the training of the TCN in this embodiment can be performed together with the training of the feature point regression network, and the two are specifically performed in a series relationship.
The input shape of the TCN network is:
[M,N,2]
where M is the number of image frames for determining the state of one region, which is set to 10 in this embodiment; n is the number of categories of feature points input per frame image, and 2 indicates that the dimension of each feature point sequence is 2 dimensions.
The TCN network input data is in the form of:
Figure BDA0003102346210000061
wherein the content of the first and second substances,
Figure BDA0003102346210000062
coordinate information representing N feature points in the image of frame 1, wherein each element represents a coordinate value of the corresponding feature point, i.e., coordinate value
Figure BDA0003102346210000063
Coordinates representing a first feature point in the first frame image,
Figure BDA0003102346210000064
the label of TCN network input data is the abnormal degree of each area in the corresponding image and the difference degree of the symmetrical area, and the form is as follows:
[(u1,v1),(u2,v2),……(u10,v10)]
considering that the abnormal characteristics of the facial paralysis patient in the lower jaw region are not obvious, the present embodiment detects 10 regions of the face, namely, the left forehead, the right forehead, the left eye, the right eye, the left nose, the right nose, the left cheek, the right cheek, the left mouth and the right mouth; where u represents the degree of abnormality of the face region, and v represents the degree of difference from the symmetric region.
In the process of training the TCN, the abnormal degree of each region of the face of a patient corresponding to a plurality of frames of images and the abnormal degree value of a symmetrical region need to be artificially marked, and the implementation adopts a grade mode to represent the judgment of human body subjective blurring, namely the numerical values of U and V are 10 grades in total, namely [0,9 ]. In this embodiment, the Loss function corresponding to the TCN network uses a cross entropy Loss function.
In this embodiment, the coordinate sequence of the feature points in each frame image of the facial paralysis patient is input into the TCN network after training, and the output result is the abnormal degree U of each region of the face corresponding to each frame image and the difference degree V from the symmetric region. The output number and the input number of the TCN network are equal, namely S images are input, and S groups of face region judgment information is output; each frame of image corresponds to a set of face region expression sequences:
LM=[(u1,v1),(u2,v2),......(u10,v10)]
where u1 is the degree of abnormality of the 1 st region of the face, v1 is the degree of difference between the 1 st region of the face and the symmetric region, u2 is the degree of abnormality of the 2 nd region of the face, v2 is the degree of difference between the 2 nd region of the face and the symmetric region, and so on, …, u10 is the degree of abnormality of the 10 th region of the face, and v10 is the degree of difference between the 10 th region of the face and the symmetric region.
Thus, the degree of abnormality of each region of the face corresponding to each frame of the face image of the facial paralysis patient and the degree of difference from the symmetric region can be obtained.
3) Calculating the abnormal degree of each expression muscle of the facial paralysis patient according to the abnormal degree of each region of the facial corresponding to each frame of facial image of the facial paralysis patient and the difference degree of the symmetrical regions;
on the basis of obtaining the abnormal degree of each region of the face corresponding to each frame of facial image of the facial paralysis patient and the difference degree with the symmetric region, the embodiment calculates the lesion degree of each region of the face of the facial paralysis patient; and further calculating the abnormal degree of each expression muscle of the facial paralysis patient on the basis of obtaining the pathological change degree of each region of the facial paralysis patient.
The abnormal degree of each region of the face corresponding to each frame of facial image of the facial paralysis patient and the difference degree between the abnormal degree and the symmetric region fluctuate along with the time sequence, in order to obtain the more accurate lesion degree of each region of the face of the facial paralysis patient, and considering that the abnormal degree of the corresponding region is the most reliable when the difference value between the left region and the right region is the largest, the embodiment adopts the following formula to calculate the lesion degree of each region of the face of the facial paralysis patient:
Figure BDA0003102346210000071
wherein MU (e) is the lesion degree of the e-th area of the face of the patient, S is the total frame number of the obtained facial images of the facial paralysis patient, uesThe degree of abnormality of the e-th region in the facial image of the patient of the s-th frame facial paralysis, vesThe difference degree between the e-th area in the facial image of the S-frame facial paralysis patient and the symmetric area is max (ve), the maximum value of the difference degree between the e-th area in the facial image of the S-frame facial paralysis patient and the symmetric area is min (ve), and the minimum value of the difference degree between the e-th area in the facial image of all the S-frame facial paralysis patients and the symmetric area is min (ve).
The expression muscles corresponding to each area of the human face are different, the expression motions of the human face are controlled by the contraction and the relaxation of the corresponding facial expression muscles, and if the expression motions of the facial areas are abnormal, the expression muscles related to the facial areas are abnormal. The facial nerve disease is mainly characterized in that a patient cannot well control corresponding muscle states through facial nerves, and the main purpose of making rehabilitation training for facial paralysis patients is to train the control ability of the patient on expression muscles.
The total number of facial expression muscles of a human body is 42, but the expression muscles mainly involved in the facial nerve paralysis disease are occipital frontalis, orbicularis oculi, levator labialis superior, zygomatic muscles, levator oris canthus, orbicularis oris and inferior labial muscles.
The process of calculating the abnormal degree of each expression muscle of the facial paralysis patient based on the lesion degree of each region of the facial paralysis patient in the embodiment is as follows:
acquiring the association degree g of each expression muscle of the face and each area of the face;
the process of acquiring the association degree between each expression muscle of the face and each area of the face in the embodiment is as follows:
A. constructing an existing facial muscle model, such as a ZBursh model;
B. controlling each facial muscle in the model to make the epidermis of the facial model change correspondingly;
C. calculating the average change value of the three-dimensional coordinates of the mesh grid points of the model epidermis corresponding to each region when each facial muscle is controlled, and taking the average change value as the change value of the corresponding region;
D. and normalizing the change value corresponding to each region to obtain a normalized change value corresponding to each region under the control of each facial muscle, and taking the normalized change value as the association degree of the corresponding facial muscle and the facial region.
In the present embodiment, the association degree between each expression muscle of the face and each region of the face is obtained through the above process, as another embodiment, the association degree between each expression muscle of the face and each region of the face may also be set empirically, for example, as shown in fig. 4, the association degree between the left eye region and the orbicularis oculi muscle is large, the association degree is set to 0.9, the association degree with the left frontal abdomen is small, the association degree is set to 0.2, and the association degree with the orbicularis oris muscle is 0; the degree of association of the orbicularis levator sinistral muscle to the forehead is 0.07, the degree of association to the left eye is 0.9, the degree of association to the left cheek is 0.03, and the degree of association to other regions is 0.
And secondly, calculating the abnormal degree of each expression muscle of the face.
The abnormal degree of each expression muscle of the facial paralysis patient is calculated by the following formula in the embodiment:
Figure BDA0003102346210000091
wherein M isjIndicating the degree of abnormality of the jth expression muscle, FiIndicates the degree of lesion of the ith region, gi,Representing the ith area and the jth tableAnd N is the total number of the divided human face areas.
In this embodiment, the range of the abnormal degree F of each region is [0,9], the range of the association degree g between each expressive muscle of the face and each region of the face is [0, 1], correspondingly, the range of the abnormal degree M of each expressive muscle of the face of the facial paralysis patient obtained based on the above calculation formula is [0,9], and the abnormal degree M reflects the degree of the rehabilitation training required by the corresponding expressive muscle.
The abnormal degree of each expression muscle of the facial paralysis patient is calculated on the basis of obtaining the pathological change degree of each region of the facial paralysis patient, and as other embodiments, the abnormal degree of each expression muscle of the facial paralysis patient can be directly calculated according to the abnormal degree of each region of the facial paralysis patient and is recorded as the abnormal degree 1 of each expression muscle of the facial paralysis patient; and calculating the abnormal degree of each expression muscle of the facial paralysis patient according to the difference degree with the symmetrical region, recording the abnormal degree as the abnormal degree 2 of each expression muscle of the facial paralysis patient, and calculating the final abnormal degree of each expression muscle of the facial paralysis patient according to the two calculation results (namely the abnormal degree 1 of each expression muscle of the facial paralysis patient and the abnormal degree 2 of each expression muscle of the facial paralysis patient) by a weighted summation formula.
4) And generating a corresponding rehabilitation training task for the facial paralysis patient according to the abnormal degree of each expression muscle of the face of the facial paralysis patient.
After the abnormal degree condition of each facial expression muscle of the facial paralysis patient is obtained, the training degree of each facial expression muscle is set in combination with the set training action, and a corresponding rehabilitation training task is generated for the facial paralysis patient, so that the rehabilitation effect is better, and the illness recovery of the facial paralysis patient is accelerated.
The process of generating the corresponding rehabilitation training task for the facial paralysis patient in the embodiment is as follows:
acquiring the training degree of each set training action on each facial expression muscle;
as shown in fig. 5, the corresponding setting training actions of facial paralysis patients mainly include raising eyebrows, closing eyes, shrugging nose, showing teeth, putting nipples, and drumsticking; in the embodiment, the training degree P of each set training action on each facial expression muscle is obtained based on statistics through a large amount of rehabilitation training data;
as another embodiment, the training degree P of each set training action for each facial expression muscle may be obtained based on the thermodynamic diagram, and the main steps are as follows: collecting an initial face thermodynamic diagram; performing m corresponding set training actions; and acquiring the face thermodynamic diagrams again, and comparing the changes of the face thermodynamic diagrams to obtain the training degree of the corresponding actions on muscles at corresponding positions.
Setting the training degree of each training action on each facial expression muscle according to the abnormal degree of each expression muscle of the facial paralysis patient to obtain the required degree of each set training action in the illness rehabilitation training corresponding to the facial paralysis patient;
the training effect of the expression training action on the expression muscles is mainly embodied in the control of pulling and relaxing the expression muscles in the action process, if the abnormal degree of a certain expression muscle is higher, the more uncontrolled the expression muscle nerve is, in order to accelerate the rehabilitation of a patient, the more the training of the facial paralysis patient on the expression muscles needs to be strengthened, and the more the training action which can train the expression muscles to a greater degree is needed in the rehabilitation training task generated for the facial paralysis patient.
In this embodiment, the required degree of each set training action in the rehabilitation training corresponding to the facial paralysis patient is calculated by using the following formula:
Figure BDA0003102346210000101
wherein, TkSetting the required degree of training action for the kth class, L being the total number of expression muscles, MlExpressing the degree of pathological changes of the first expression muscle of facial paralysis patients, MlHas a value range of [0,9]],Pk,lSetting the training Effect of the training action on the l expression muscle for the k-th class, Pk,lHas a value range of [0, 1]]. In this embodiment, the training effect of all the set training actions on the expression muscles is defined as 1, that is:
Figure BDA0003102346210000102
wherein K is the total number of the set training actions.
Thus, the required degree of each set training action in the rehabilitation training of the illness state of the facial paralysis patient can be obtained:
XT={T1,T2,T3,…Tk}。
III combining the basic task quantity UkAnd obtaining the rehabilitation training task amount corresponding to each set expression action:
Figure BDA0003102346210000111
wherein, JkRepresenting the task quantity, T, of the kth action in the training taskkIndicates the required degree of the k-th action, UkThe k-th action is represented by the basic task quantity, and alpha is a scaling coefficient and represents the ratio of the maximum quantity of the action to the basic quantity. In this embodiment, U is setkWhen T is 10, α is 3k(ii) when J is 5, J is obtainedk=17。
Therefore, the rehabilitation training task quantities corresponding to the set training actions can be obtained, and the rehabilitation training task quantities corresponding to the set training actions are combined to be the rehabilitation task suitable for the facial paralysis patient, so that the purpose of automatically generating the suitable rehabilitation training task for the facial paralysis patient is achieved.
In the embodiment, the abnormal degree of each region of the face corresponding to each frame of facial image of the facial paralysis patient and the difference degree between the abnormal degree and the symmetrical region are obtained by obtaining the plurality of frames of facial images of the facial paralysis patient when the facial paralysis patient performs the set action, the abnormal degree of each expression muscle of the facial paralysis patient is calculated on the basis, and the corresponding rehabilitation training task is generated for the facial paralysis patient by taking the abnormal degree of each expression muscle as a reference. The embodiment can automatically and timely make a proper rehabilitation training task for the facial paralysis patient, improves the rehabilitation training effect of the facial paralysis patient, and solves the problem that the proper rehabilitation training task cannot be made for the facial paralysis patient accurately and timely by a doctor when the rehabilitation training task is made for the facial paralysis patient.
Embodiment of medical big data-based facial paralysis rehabilitation task auxiliary generation system
The medical big data-based facial paralysis rehabilitation task auxiliary generation system of the embodiment comprises a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the medical big data-based facial paralysis rehabilitation task auxiliary generation method described in the embodiment of the medical big data-based facial paralysis rehabilitation task auxiliary generation method.
Since the embodiment of the medical big data-based facial paralysis rehabilitation task auxiliary generation method has already described the medical big data-based facial paralysis rehabilitation task auxiliary generation method, details are not repeated here.
It should be noted that while the preferred embodiments of the present invention have been described, additional variations and modifications to these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.

Claims (10)

1. A medical big data-based facial paralysis rehabilitation task auxiliary generation method is characterized by comprising the following steps:
acquiring a plurality of frames of facial images of a facial paralysis patient during a set action, and identifying coordinates of each feature point in each frame of facial image;
judging the abnormal degree of each region of the face corresponding to each frame of facial image of the facial paralysis patient and the difference degree of the symmetrical region according to the coordinates of each feature point in each frame of facial image;
calculating the abnormal degree of each expression muscle of the facial paralysis patient according to the abnormal degree of each region of the facial corresponding to each frame of facial image of the facial paralysis patient and the difference degree of the symmetrical regions;
and generating a corresponding rehabilitation training task for the facial paralysis patient according to the abnormal degree of each expression muscle of the face of the facial paralysis patient.
2. The method for assisting in generating the rehabilitation task for facial paralysis based on medical big data as claimed in claim 1, wherein the method for calculating the degree of abnormality of each expression muscle in the face of the facial paralysis patient comprises:
calculating the pathological change degree of each region of the face of the facial paralysis patient according to the abnormal degree of each region of the face of the facial paralysis patient corresponding to each frame of facial image and the difference degree of the symmetrical regions;
and calculating the abnormal degree of each expression muscle of the facial paralysis patient according to the pathological change degree of each region of the facial paralysis patient.
3. The method for assisting in generating facial paralysis rehabilitation task based on medical big data as claimed in claim 2, wherein the degree of lesion of each region of the face of facial paralysis patient is calculated by using the following formula:
Figure FDA0003102346200000011
wherein MU (e) is the lesion degree of the e-th area of the face of the facial paralysis patient, S is the total frame number of the obtained facial images of the facial paralysis patient, uesThe degree of abnormality of the e-th region in the facial image of the patient of the s-th frame facial paralysis, vesThe difference degree between the e-th area in the facial image of the frame S facial paralysis patient and the symmetric area is max (ve), the maximum value of the difference degree between the e-th area in the facial image of the frame S facial paralysis patient and the symmetric area is min (ve), and the minimum value of the difference degree between the e-th area in the facial image of the frame S facial paralysis patient and the symmetric area is min (ve).
4. The method for assisting in generating the facial paralysis rehabilitation task based on the medical big data as claimed in claim 2, wherein the degree of abnormality of each facial expression muscle of the facial paralysis patient is calculated by using the following formula:
Figure FDA0003102346200000021
wherein M isjIndicating the degree of abnormality of the jth expression muscle, FiIndicates the degree of lesion of the ith region, gi,jAnd N is the total number of the divided human face areas.
5. The assisted medical big data-based facial paralysis rehabilitation task generation method according to claim 1, wherein the method for generating the corresponding rehabilitation training task for facial paralysis patients comprises:
acquiring the training degree of each set training action on each facial expression muscle;
obtaining the required degree of each set training action in the illness rehabilitation training corresponding to the facial paralysis patient according to the abnormal degree of each expression muscle of the facial paralysis patient and the training degree of each set training action on each facial expression muscle;
and obtaining the rehabilitation training task amount corresponding to each set training action according to the basic task amount.
6. The method for assisting in generating facial paralysis rehabilitation task based on medical big data as claimed in claim 5, wherein the degree of each set training action required in the rehabilitation training of the condition corresponding to the facial paralysis patient is calculated by using the following formula:
Figure FDA0003102346200000022
wherein, TkSetting the required degree of training action for the kth class, L being the total number of expression muscles, MlIndicates the pathological change degree of the first expression muscle of facial paralysis patients, Pk,lAnd setting the training effect of the training action on the ith expression muscle for the kth class.
7. The method for assisting in generating facial paralysis rehabilitation task based on medical big data as claimed in claim 5, wherein the rehabilitation training task amount corresponding to each set training action is calculated by using the following formula:
Figure FDA0003102346200000023
wherein, JkSetting rehabilitation training task quantity, T, corresponding to training action for kth classkIndicates the degree of required training action, U, for the kth class settingkAnd setting a basic task amount corresponding to the training action for the kth class, wherein alpha is a proportional adjustment coefficient.
8. The method for assisting in generating the facial paralysis rehabilitation task based on the medical big data as claimed in claim 1, wherein the method for determining the degree of abnormality and the degree of difference between each region of the face of the facial paralysis patient and the symmetric region comprises:
constructing a TCN network;
and taking the coordinate sequence of the characteristic points of each frame of facial image of the facial paralysis patient as the input of the TCN network, and taking the output result of the TCN as the abnormal degree of each region of the facial paralysis patient and the difference degree of each region with the symmetrical region.
9. The method for assisting in generating the rehabilitation task for facial paralysis based on medical big data as claimed in claim 1, wherein the method for identifying the coordinates of each feature point in each frame of facial image comprises:
constructing a characteristic point regression neural network;
and taking the multi-frame facial images of the facial paralysis patient as the input of the feature point regression neural network when the facial paralysis patient performs the set action, and taking the output result of the feature point regression neural network as the coordinates of each feature point in the frame facial images.
10. A medical big data-based facial paralysis rehabilitation task auxiliary generation system, comprising a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the medical big data-based facial paralysis rehabilitation task auxiliary generation method according to any one of claims 1-9.
CN202110627459.4A 2021-06-05 2021-06-05 Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system Withdrawn CN113362924A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110627459.4A CN113362924A (en) 2021-06-05 2021-06-05 Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110627459.4A CN113362924A (en) 2021-06-05 2021-06-05 Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system

Publications (1)

Publication Number Publication Date
CN113362924A true CN113362924A (en) 2021-09-07

Family

ID=77532543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110627459.4A Withdrawn CN113362924A (en) 2021-06-05 2021-06-05 Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system

Country Status (1)

Country Link
CN (1) CN113362924A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114917544A (en) * 2022-05-13 2022-08-19 上海交通大学医学院附属第九人民医院 Visual auxiliary training method and equipment for function of orbicularis oris
CN116486999A (en) * 2023-05-19 2023-07-25 黑龙江中医药大学 Self-adaptive auxiliary monitoring method and system for facial paralysis acupuncture treatment process
CN117372437A (en) * 2023-12-08 2024-01-09 安徽农业大学 Intelligent detection and quantification method and system for facial paralysis

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114917544A (en) * 2022-05-13 2022-08-19 上海交通大学医学院附属第九人民医院 Visual auxiliary training method and equipment for function of orbicularis oris
CN114917544B (en) * 2022-05-13 2023-09-22 上海交通大学医学院附属第九人民医院 Visual method and device for assisting orbicularis stomatitis function training
CN116486999A (en) * 2023-05-19 2023-07-25 黑龙江中医药大学 Self-adaptive auxiliary monitoring method and system for facial paralysis acupuncture treatment process
CN116486999B (en) * 2023-05-19 2023-10-20 黑龙江中医药大学 Self-adaptive auxiliary monitoring method and system for facial paralysis acupuncture treatment process
CN117372437A (en) * 2023-12-08 2024-01-09 安徽农业大学 Intelligent detection and quantification method and system for facial paralysis
CN117372437B (en) * 2023-12-08 2024-02-23 安徽农业大学 Intelligent detection and quantification method and system for facial paralysis

Similar Documents

Publication Publication Date Title
CN113362924A (en) Medical big data-based facial paralysis rehabilitation task auxiliary generation method and system
CN109902558B (en) CNN-LSTM-based human health deep learning prediction method
CN107798318A (en) The method and its device of a kind of happy micro- expression of robot identification face
CN104298753B (en) Personal assessment methods based on face image processing
CN110046675A (en) A kind of the exercise ability of lower limbs appraisal procedure based on improved convolutional neural networks
CN110477907B (en) Modeling method for intelligently assisting in recognizing epileptic seizures
CN113627256B (en) False video inspection method and system based on blink synchronization and binocular movement detection
CN111403026A (en) Facial paralysis grade assessment method
CN112465773A (en) Facial nerve paralysis disease detection method based on human face muscle movement characteristics
CN113782184A (en) Cerebral apoplexy auxiliary evaluation system based on facial key point and feature pre-learning
CN110321827A (en) A kind of pain level appraisal procedure based on face pain expression video
CN112949560A (en) Method for identifying continuous expression change of long video expression interval under two-channel feature fusion
CN114565957A (en) Consciousness assessment method and system based on micro expression recognition
CN115147636A (en) Lung disease identification and classification method based on chest X-ray image
Verma et al. Quantification of facial expressions using high-dimensional shape transformations
CN112102234B (en) Ear sclerosis focus detection and diagnosis system based on target detection neural network
Yuan et al. Pain intensity recognition from masked facial expressions using swin-transformer
CN114943924B (en) Pain assessment method, system, equipment and medium based on facial expression video
CN115050067B (en) Facial expression construction method and device, electronic equipment, storage medium and product
KR102476888B1 (en) Artificial diagnostic data processing apparatus and its method in digital pathology images
CN115154828A (en) Brain function remodeling method, system and equipment based on brain-computer interface technology
CN114400086A (en) Articular disc forward movement auxiliary diagnosis system and method based on deep learning
CN115359522A (en) Elderly health monitoring method and system based on expression emotion calculation
CN112329640A (en) Facial nerve palsy disease rehabilitation detection system based on eye muscle movement analysis
Wang et al. Chaos in Motion: Unveiling Robustness in Remote Heart Rate Measurement through Brain-Inspired Skin Tracking

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
WW01 Invention patent application withdrawn after publication

Application publication date: 20210907