CN111012307A - Method and device for evaluating training input degree of patient based on multi-mode information - Google Patents


Info

Publication number
CN111012307A
Authority
CN
China
Prior art keywords
patient
input
emotion
signal
degree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911175432.5A
Other languages
Chinese (zh)
Inventor
季林红
李翀
钱超
贾天宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201911175432.5A priority Critical patent/CN111012307A/en
Publication of CN111012307A publication Critical patent/CN111012307A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/48 Other medical applications
    • A61B 5/4833 Assessment of subject's compliance to treatment

Abstract

The invention discloses a method and a device for evaluating the training input degree of a patient based on multi-modal information, wherein the method comprises the following steps: calculating the exercise input degree of the patient according to the electromyographic signals and the exercise speed of the patient; calculating a perception input degree of the patient according to a distance between a focus position of the eyes of the patient and a moving object on a screen used for training; calculating the cognitive input degree of the patient according to the electroencephalogram signals of the frontal lobe area of the patient; calculating the emotion input degree of the patient according to the duration of the positive emotion and the duration of the negative emotion of the patient; and comprehensively evaluating according to the exercise input degree, the perception input degree, the cognition input degree and the emotion input degree to obtain the training input degree of the patient. The method evaluates the input degree of the patient during rehabilitation training based on the multi-mode information of movement, perception, cognition and emotion, makes up the subjectivity of evaluating the input state by a scale method, and enables the evaluation result to be more objective and accurate.

Description

Method and device for evaluating training input degree of patient based on multi-mode information
Technical Field
The invention relates to the technical field of rehabilitation assessment, in particular to a method and a device for assessing the training input degree of a patient based on multi-mode information.
Background
Stroke is a common disease caused by disordered cerebrovascular blood circulation. With the continued aging of the global population and the growing problems of high living pressure and irregular lifestyles among young people, the number of stroke patients is rising. Research shows that more than 80% of patients can recover most of their motor function through rehabilitation training and return to daily life and work.
However, the repetitive nature of rehabilitation training easily causes patients to lose interest and become bored. Patients whose mood is already depressed by their illness are also very likely to lose confidence when the training prescription or training difficulty is inappropriate, and the resulting boredom has an extremely adverse effect on the rehabilitation outcome. Therefore, evaluating the patient's degree of input in rehabilitation training is very important.
The rehabilitation input degree is a variable representing several elements, including the patient's attitude toward rehabilitation training, the patient's understanding of the training task's requirements, the need for verbal or physical training prompts, the degree of active participation in training, and attendance over the whole course of rehabilitation. Input degree in rehabilitation is defined as a state of being driven by enthusiasm and actively striving to participate in rehabilitation training; it differs from mere participation in that it includes the patient's strong interest and the effort the patient makes to follow the training process. Many existing methods for evaluating patient input degree rely mainly on scales or other indirect evaluation methods, which are inaccurate.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art.
Therefore, one objective of the invention is to provide a method for evaluating the input degree of patient training based on multi-modal information, which evaluates the input degree of the patient in rehabilitation training based on multi-modal information of motion, perception, cognition and emotion, makes up the subjectivity of evaluating the input state by a scale method, and enables the evaluation result to be more objective and accurate.
It is another object of the invention to propose a device for assessing the level of exercise input of a patient based on multimodal information.
In order to achieve the above object, an embodiment of an aspect of the present invention provides a method for evaluating a training input level of a patient based on multi-modal information, including:
acquiring an electromyographic signal and a movement speed of a patient, and calculating the movement input degree of the patient according to the electromyographic signal and the movement speed;
acquiring the focus position of the eyes of a patient in the training process, and calculating the perception input degree of the patient according to the distance between the focus position of the eyes of the patient and a moving object on a screen used for training;
acquiring an electroencephalogram signal of a frontal lobe area of a patient, and calculating the cognitive input degree of the patient according to the electroencephalogram signal;
acquiring a facial expression image of a patient in a training process, extracting and identifying emotions in the facial expression image through image analysis software to obtain duration of positive emotion and duration of negative emotion of the patient in the training process, and calculating the emotion input degree of the patient according to the duration of the positive emotion and the duration of the negative emotion of the patient;
and comprehensively evaluating according to the exercise input degree, the perception input degree, the cognition input degree and the emotion input degree to obtain the training input degree of the patient.
The method for evaluating the input degree of the patient training based on the multi-mode information, disclosed by the embodiment of the invention, evaluates the input degree of the patient in the rehabilitation training based on the multi-mode information of movement, perception, cognition and emotion, makes up the subjectivity of evaluating the input state by a scale method, and enables the evaluation result to be more objective and accurate. The rehabilitation training device is beneficial for rehabilitation doctors to change the training mode in the rehabilitation training process, so that the patients can keep higher input degree in the rehabilitation training process.
In addition, the method for evaluating the training input degree of the patient based on the multi-modal information according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the calculating of the exercise input level of the patient based on the electromyographic signal and the exercise velocity includes: calculating the motion input degree of the patient according to the ratio of the root mean square value of the electromyographic signal to the motion speed, wherein the formula is as follows:
E_m = EMG_RMS / v
where E_m is the exercise input level, EMG_RMS is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the patient's average movement speed over one movement cycle.
Further, in an embodiment of the present invention, the perceptual investment level calculation formula is:
E_p = d(gaze, screen changes)
where E_p is the perception input level, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
Further, in an embodiment of the present invention, the calculating the cognitive input degree of the patient according to the electroencephalogram signal includes:
decomposing the electroencephalogram signal to obtain a reduced alpha signal, an increased beta signal and an increased theta signal of the frontal lobe area of the patient, and calculating the cognitive input degree according to the ratio of the reduced alpha signal, the increased beta signal and the increased theta signal, wherein the specific formula is as follows:
E_c = β / (α + θ)
where E_c is the cognitive input level, and α, β and θ are the alpha, beta and theta signals of the frontal lobe area.
Further, in an embodiment of the present invention, the emotion investment degree calculation formula is:
E_e = T_positive / T_negative
where E_e is the emotion input level, T_positive is the duration for which positive emotion is the patient's dominant emotion, and T_negative is the duration for which negative emotion is the dominant emotion.
In order to achieve the above object, another embodiment of the present invention provides an apparatus for evaluating a training input level of a patient based on multi-modal information, comprising:
the first calculation module is used for acquiring the electromyographic signals and the movement speed of the patient and calculating the movement input degree of the patient according to the electromyographic signals and the movement speed;
the second calculation module is used for acquiring the focus position of the eyes of the patient in the training process and calculating the perception input degree of the patient according to the distance between the focus position of the eyes of the patient and a moving object on a screen used for training;
the third calculation module is used for collecting electroencephalogram signals of frontal lobe areas of the patients and calculating cognitive input degrees of the patients according to the electroencephalogram signals;
the fourth calculation module is used for acquiring facial expression images of a patient in a training process, extracting and identifying emotions in the facial expression images through image analysis software to obtain the duration time of positive emotions and the duration time of negative emotions of the patient in the training process, and calculating the emotion input degree of the patient according to the duration time of the positive emotions and the duration time of the negative emotions of the patient;
and the evaluation module is used for carrying out comprehensive evaluation according to the exercise input degree, the perception input degree, the cognition input degree and the emotion input degree to obtain the training input degree of the patient.
The device for evaluating the input degree of the patient training based on the multi-mode information evaluates the input degree of the patient in the rehabilitation training based on the motion, perception, cognition and emotion multi-mode information, makes up the subjectivity of evaluating the input state by a scale method, and enables the evaluation result to be more objective and accurate. The rehabilitation training device is beneficial for rehabilitation doctors to change the training mode in the rehabilitation training process, so that the patients can keep higher input degree in the rehabilitation training process.
In addition, the device for evaluating the training input degree of the patient based on the multi-modal information according to the above embodiment of the present invention may further have the following additional technical features:
further, in an embodiment of the present invention, the first calculating module is specifically configured to,
calculating the motion input degree of the patient according to the ratio of the root mean square value of the electromyographic signal to the motion speed, wherein the formula is as follows:
E_m = EMG_RMS / v
where E_m is the exercise input level, EMG_RMS is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the patient's average movement speed over one movement cycle.
Further, in an embodiment of the present invention, the perceptual investment level calculation formula is:
E_p = d(gaze, screen changes)
where E_p is the perception input level, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
Further, in an embodiment of the present invention, the calculating the cognitive input degree of the patient according to the electroencephalogram signal includes:
decomposing the electroencephalogram signal to obtain a reduced alpha signal, an increased beta signal and an increased theta signal of the frontal lobe area of the patient, and calculating the cognitive input degree according to the ratio of the reduced alpha signal, the increased beta signal and the increased theta signal, wherein the specific formula is as follows:
E_c = β / (α + θ)
where E_c is the cognitive input level, and α, β and θ are the alpha, beta and theta signals of the frontal lobe area.
Further, in an embodiment of the present invention, the emotion investment degree calculation formula is:
E_e = T_positive / T_negative
where E_e is the emotion input level, T_positive is the duration for which positive emotion is the patient's dominant emotion, and T_negative is the duration for which negative emotion is the dominant emotion.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The foregoing and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a flow diagram of a method for assessing a patient's training input based on multimodal information, in accordance with one embodiment of the present invention;
FIG. 2 is a block flow diagram of a method for assessing a patient's training input based on multimodal information, in accordance with one embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus for assessing a patient's training input based on multimodal information, according to one embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are illustrative and intended to be illustrative of the invention and are not to be construed as limiting the invention.
A method and apparatus for assessing a training input level of a patient based on multi-modal information according to an embodiment of the present invention will be described with reference to the accompanying drawings.
A method for evaluating a training input level of a patient based on multi-modal information according to an embodiment of the present invention will be described first with reference to the accompanying drawings.
FIG. 1 is a flow diagram of a method for assessing a patient's training input based on multimodal information, in accordance with one embodiment of the present invention.
As shown in FIG. 1, the method for assessing the exercise input level of a patient based on multi-modal information comprises the following steps:
and step S1, acquiring the electromyographic signals and the movement speed of the patient, and calculating the movement input degree of the patient according to the electromyographic signals and the movement speed.
The degree of investment in exercise (motor engagement, E_m) is defined as the state in which the patient actively and effortfully exercises. In rehabilitation training, the exercise state is generally monitored and characterized by the electromyographic (EMG) signal, which can directly express how much effort the patient puts into the movement. The root mean square (RMS) value of the EMG signal is used in rehabilitation training to evaluate the patient's exercise input state; because it characterizes the energy of the signal, the RMS value is considered the most meaningful measure of EMG amplitude. Since movement speed is an important factor influencing EMG amplitude, the patient's movement-level engagement is represented by the ratio of the RMS value of the electromyographic signal to the movement speed.
During rehabilitation training of a patient, the patient wears electroencephalogram equipment and electromyogram equipment to measure electroencephalogram signals and electromyogram signals in the training process, the motion input degree of the patient is calculated according to the ratio of the root mean square value of the measured electromyogram signals of the training limb to the motion speed, and the formula is as follows:
E_m = EMG_RMS / v
where E_m is the exercise input level, EMG_RMS is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the patient's average movement speed over one movement cycle.
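The calculation above can be sketched in a few lines of Python. This is a minimal illustration only; the function and parameter names, and the assumption that the EMG and velocity arrays cover exactly one movement cycle, are not from the patent:

```python
import numpy as np

def motor_engagement(emg: np.ndarray, velocity: np.ndarray) -> float:
    """Motor engagement E_m = RMS(EMG) / mean velocity over one movement cycle.

    `emg` holds the raw electromyographic samples of the moving limb for one
    movement cycle; `velocity` holds the limb-speed samples for the same cycle.
    Both names are illustrative, not taken from the patent.
    """
    emg_rms = np.sqrt(np.mean(np.square(emg)))  # root mean square amplitude
    v_mean = np.mean(velocity)                  # average movement speed
    return float(emg_rms / v_mean)
```

With this convention, a larger E_m indicates more muscular effort expended per unit of achieved speed.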
And step S2, acquiring the focus position of the eyes of the patient in the training process, and calculating the perception input degree of the patient according to the distance between the focus position of the eyes of the patient and the moving object on the screen used for training.
Perceptual engagement is defined as the state of focused attention on the training task. For visual interaction, eye-trajectory tracking is widely used to assess a subject's attention, with evaluation indexes such as the number of eye-focus fixations or the number of times the eye focus falls outside the screen. However, these indexes cannot capture a reduction of perceptual engagement during training: a subject may look at the screen without being engaged in the task. Eye-focus moving speed, total eye-focus displacement and similar indexes can quantify the patient's visual attention in rehabilitation training, but they cannot be evaluated in real time during training. In an embodiment of the invention, the distance between the focus of the subject's eyes and the moving object on the screen is therefore used to characterize the perceptual engagement level. The perception input degree is:
E_p = d(gaze, screen changes)
where E_p is the perception input level, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
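A minimal sketch of this distance measure, assuming both the gaze point and the moving object's position are given as (x, y) screen coordinates (the function and parameter names are illustrative):

```python
import math

def perceptual_engagement(gaze: tuple, target: tuple) -> float:
    """Perceptual engagement E_p = d(gaze, screen change): the Euclidean
    distance between the eye-tracker gaze point and the moving on-screen
    object. A smaller distance indicates closer visual attention to the
    training task."""
    return math.hypot(gaze[0] - target[0], gaze[1] - target[1])
```

Note that, unlike the other three measures, a smaller E_p corresponds to higher engagement.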
And step S3, acquiring the electroencephalogram signals of the frontal lobe area of the patient, and calculating the cognitive input degree of the patient according to the electroencephalogram signals.
Further, calculating the cognitive input degree of the patient according to the electroencephalogram signals, comprising:
decomposing the electroencephalogram signal to obtain a reduced alpha signal, an increased beta signal and an increased theta signal of the frontal lobe area of the patient, and calculating the cognitive input degree according to the ratio of the reduced alpha signal, the increased beta signal and the increased theta signal, wherein the specific formula is as follows:
E_c = β / (α + θ)
where E_c is the cognitive input level, α is the alpha signal, β is the beta signal, and θ is the theta signal.
Cognitive engagement is defined as the degree of concentration at the completion of a cognitive task. The cognitive concentration and cognitive load of a subject is generally assessed by monitoring brain waves (EEG).
The electroencephalographic variables used to monitor cognitive concentration include a decreased alpha signal, an increased beta signal, an increased theta signal, and the ratios between them. At present, the most widely used measure of cognitive input is computed from the energies of the alpha, beta and theta frequency bands: the ratio of the beta-band energy to the sum of the alpha-band and theta-band energies. According to the current understanding of electroencephalogram signals, the beta band dominates when a person is attentive or alert, while the alpha band, theta band, or even lower bands dominate at rest or during sleep, so this ratio can represent a person's attentional input. Because the frontal lobe of the cerebral cortex is responsible for attention, mental state, motion planning and the like, the electroencephalogram signals in the embodiment of the invention are taken from the patient's frontal lobe, detected through electroencephalogram equipment worn by the patient.
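The band-energy ratio described above can be sketched as follows. The periodogram-based band-power estimate and the conventional band limits (theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz) are assumptions for illustration; the patent does not specify how the band energies are to be computed:

```python
import numpy as np

def band_power(eeg: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Energy of `eeg` in the [lo, hi) Hz band via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(psd[mask]))

def cognitive_engagement(eeg: np.ndarray, fs: float) -> float:
    """Cognitive engagement E_c = beta / (alpha + theta) for a frontal-lobe
    EEG segment sampled at `fs` Hz. Band limits are conventional values
    assumed here, not stated in the patent."""
    theta = band_power(eeg, fs, 4.0, 8.0)
    alpha = band_power(eeg, fs, 8.0, 13.0)
    beta = band_power(eeg, fs, 13.0, 30.0)
    return beta / (alpha + theta)
```

In practice the raw EEG would first be filtered and artifact-rejected; those preprocessing steps are omitted from this sketch.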
Step S4, facial expression images of the patient in the training process are collected, emotions in the facial expression images are extracted and identified through image analysis software, the duration time of the positive emotion and the duration time of the negative emotion of the patient in the training process are obtained, and the emotion input degree of the patient is calculated according to the duration time of the positive emotion and the duration time of the negative emotion of the patient.
It can be understood that the emotion input state is monitored based on the subject's facial expression and is expressed as the ratio of the duration for which a positive emotion is the dominant emotion to the duration for which a negative emotion is the dominant emotion.
The increase in the motor function state of the patient is associated with a positive mood of the patient. Therefore, one of the goals of rehabilitation training is to mobilize the positive emotions of the patient. The emotional engagement level is defined as the emotional engagement level in rehabilitation training. If the rehabilitation training can affect the emotion of the patient, the patient is indicated to be emotionally put into the training. If the patient is involved in the rehabilitation training with emotion, different events in the rehabilitation training, such as different game elements or the completion or non-completion of a task, may have an impact on the patient's emotion.
Therefore, during training, the patient's facial expression is monitored and facial expression images are collected. The patient's emotions can be identified and extracted from the collected images using instight software, recording the duration for which a positive emotion is the dominant emotion and the duration for which a negative emotion is the dominant emotion. The patient's emotion input degree is then calculated from these two durations using the following formula:
E_e = T_positive / T_negative
where E_e is the emotion input level, T_positive is the duration for which positive emotion is the patient's dominant emotion, and T_negative is the duration for which negative emotion is the dominant emotion.
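As a trivial but concrete illustration (parameter names are assumptions; the durations would come from the expression-analysis software):

```python
def emotional_engagement(t_positive: float, t_negative: float) -> float:
    """Emotional engagement E_e = T_positive / T_negative: the ratio of the
    time (e.g. in seconds) during which a positive emotion was the dominant
    facial expression to the time during which a negative emotion was
    dominant. A value above 1 means positive emotion prevailed."""
    return t_positive / t_negative
```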
And step S5, carrying out comprehensive evaluation according to the exercise input degree, the perception input degree, the cognition input degree and the emotion input degree to obtain the training input degree of the patient.
As shown in fig. 2, the exercise input degree, the perception input degree, the cognition input degree and the emotion input degree of the patient obtained through the calculation are comprehensively evaluated, so that the input degree of the patient in the training process is more accurately and objectively evaluated, a rehabilitation doctor can change the training mode in the rehabilitation training process, and the patient can keep a higher input degree in the rehabilitation training process.
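The patent does not give a formula for the comprehensive evaluation of step S5. One simple possibility is a weighted combination of the four measures; the sketch below is purely an assumption for illustration. The equal weights, the parameter names, and the premise that all four inputs have been normalized to a common scale (with E_p inverted, since a smaller gaze distance means closer attention) are not from the source:

```python
def training_engagement(e_m: float, e_p: float, e_c: float, e_e: float,
                        weights=(0.25, 0.25, 0.25, 0.25)) -> float:
    """Hypothetical comprehensive training-engagement score: a weighted sum
    of the motor, perceptual, cognitive and emotional engagement measures.
    All inputs are assumed pre-normalized; the equal default weights are an
    illustrative assumption, not specified by the patent."""
    w_m, w_p, w_c, w_e = weights
    return w_m * e_m + w_p * e_p + w_c * e_c + w_e * e_e
```

A clinician could instead weight the four channels according to the patient's deficit profile; any such weighting scheme is outside what the patent specifies.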
According to the method for evaluating the input degree of the patient training based on the multi-modal information, which is provided by the embodiment of the invention, the input degree of the patient in the rehabilitation training is evaluated based on the multi-modal information of motion, perception, cognition and emotion, the subjectivity of evaluating the input state by a scale method is compensated, and the evaluation result is more objective and accurate. The rehabilitation training device is beneficial for rehabilitation doctors to change the training mode in the rehabilitation training process, so that the patients can keep higher input degree in the rehabilitation training process.
Next, an apparatus for evaluating a training input level of a patient based on multi-modal information according to an embodiment of the present invention will be described with reference to the accompanying drawings.
FIG. 3 is a schematic diagram of an apparatus for assessing a patient's training input based on multimodal information, according to one embodiment of the present invention.
As shown in fig. 3, the apparatus for evaluating the exercise input level of a patient based on multi-modal information includes: a first calculation module 100, a second calculation module 200, a third calculation module 300, a fourth calculation module 400 and an evaluation module 500.
The first calculating module 100 is configured to collect an electromyographic signal and a movement velocity of a patient, and calculate a movement input degree of the patient according to the electromyographic signal and the movement velocity.
And the second calculation module 200 is configured to acquire the focal position of the patient's eye during the training process, and calculate the perception input degree of the patient according to the distance between the focal position of the patient's eye and the moving object on the screen used for the training.
And the third calculating module 300 is used for acquiring the electroencephalogram signals of the frontal lobe area of the patient and calculating the cognitive input degree of the patient according to the electroencephalogram signals.
The fourth calculating module 400 is configured to collect facial expression images of a patient in a training process, extract and identify emotions in the facial expression images through image analysis software, obtain duration of positive emotion and duration of negative emotion of the patient in the training process, and calculate an emotion input degree of the patient according to the duration of the positive emotion and the duration of the negative emotion of the patient.
And the evaluation module 500 is used for carrying out comprehensive evaluation according to the exercise input degree, the perception input degree, the cognition input degree and the emotion input degree to obtain the training input degree of the patient.
The device makes up the subjectivity of the input state evaluation by a scale method, so that the evaluation result is more objective and accurate.
Further, in one embodiment of the present invention, the first calculation module is specifically configured to calculate the motion input degree of the patient as the ratio of the root mean square value of the electromyographic signal to the movement speed, using the formula:
E_m = EMG_RMS / v
where E_m is the exercise input level, EMG_RMS is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the patient's average movement speed over one movement cycle.
Further, in one embodiment of the present invention, the perceptual investment level calculation formula is:
E_p = d(gaze, screen changes)
where E_p is the perception input level, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
Further, in one embodiment of the present invention, calculating the cognitive input level of the patient according to the electroencephalogram signal comprises:
decomposing the electroencephalogram signal to obtain a reduced alpha signal, an increased beta signal and an increased theta signal of the frontal lobe area of the patient, and calculating the cognitive input degree according to the ratio of the reduced alpha signal, the increased beta signal and the increased theta signal, wherein the specific formula is as follows:
E_c = β / (α + θ)
where E_c is the cognitive input level, α is the alpha signal, β is the beta signal, and θ is the theta signal.
Further, in an embodiment of the present invention, the emotion investment degree calculation formula is:
E_e = T_positive / T_negative
where E_e is the emotion input level, T_positive is the duration for which positive emotion is the patient's dominant emotion, and T_negative is the duration for which negative emotion is the dominant emotion.
It should be noted that the foregoing explanation of the embodiment of the method for evaluating the training input level of the patient based on the multi-modal information is also applicable to the apparatus of the embodiment, and is not repeated herein.
According to the device for evaluating the input degree of the patient training based on the multi-modal information, which is provided by the embodiment of the invention, the input degree of the patient in the rehabilitation training is evaluated based on the motion, perception, cognition and emotion multi-modal information, so that the subjectivity of evaluating the input state by a scale method is compensated, and the evaluation result is more objective and accurate. The rehabilitation training device is beneficial for rehabilitation doctors to change the training mode in the rehabilitation training process, so that the patients can keep higher input degree in the rehabilitation training process.
Furthermore, the terms "first", "second" and "first" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention.

Claims (10)

1. A method for assessing a patient's training input based on multimodal information, comprising the steps of:
acquiring an electromyographic signal and a movement speed of a patient, and calculating the movement input degree of the patient according to the electromyographic signal and the movement speed;
acquiring the focus position of the eyes of a patient in the training process, and calculating the perception input degree of the patient according to the distance between the focus position of the eyes of the patient and a moving object on a screen used for training;
acquiring an electroencephalogram signal of a frontal lobe area of a patient, and calculating the cognitive input degree of the patient according to the electroencephalogram signal;
acquiring a facial expression image of a patient in a training process, extracting and identifying emotions in the facial expression image through image analysis software to obtain duration of positive emotion and duration of negative emotion of the patient in the training process, and calculating the emotion input degree of the patient according to the duration of the positive emotion and the duration of the negative emotion of the patient;
and comprehensively evaluating according to the exercise input degree, the perception input degree, the cognition input degree and the emotion input degree to obtain the training input degree of the patient.
2. The method of claim 1, wherein calculating the exercise input level of the patient based on the electromyographic signals and the exercise velocity comprises: calculating the motion input degree of the patient according to the ratio of the root mean square value of the electromyographic signal to the motion speed, wherein the formula is as follows:
Em=EMGRms/v
wherein Em is the exercise input degree, EMGRms is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the average speed of the patient's movement over one movement cycle.
3. The method of claim 1, wherein the perceptual input computational formula is:
Ep=d(gaze,screen changes)
wherein Ep is the perception input degree, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
4. The method of claim 1, wherein said calculating a cognitive input level of the patient from said brain electrical signals comprises:
decomposing the electroencephalogram signal to obtain the decreased alpha signal and the increased beta and theta signals of the frontal lobe area of the patient, and calculating the cognitive input degree from the ratio of these signals, wherein the specific formula is as follows:
Ec=(β+θ)/α
wherein Ec is the cognitive input degree, α is the alpha signal, β is the beta signal, and θ is the theta signal of the frontal lobe area.
5. The method of claim 1, wherein the emotion input degree calculation formula is:
Ee=Tpositive/Tnegative
wherein Ee is the emotion input degree, Tpositive is the duration for which positive emotion is the patient's dominant emotion, and Tnegative is the duration for which negative emotion is the patient's dominant emotion.
6. An apparatus for assessing a patient's training input based on multimodal information, comprising:
the first calculation module is used for acquiring the electromyographic signals and the movement speed of the patient and calculating the movement input degree of the patient according to the electromyographic signals and the movement speed;
the second calculation module is used for acquiring the focus position of the eyes of the patient in the training process and calculating the perception input degree of the patient according to the distance between the focus position of the eyes of the patient and a moving object on a screen used for training;
the third calculation module is used for collecting electroencephalogram signals of frontal lobe areas of the patients and calculating cognitive input degrees of the patients according to the electroencephalogram signals;
the fourth calculation module is used for acquiring facial expression images of a patient in a training process, extracting and identifying emotions in the facial expression images through image analysis software to obtain the duration time of positive emotions and the duration time of negative emotions of the patient in the training process, and calculating the emotion input degree of the patient according to the duration time of the positive emotions and the duration time of the negative emotions of the patient;
and the evaluation module is used for carrying out comprehensive evaluation according to the exercise input degree, the perception input degree, the cognition input degree and the emotion input degree to obtain the training input degree of the patient.
7. The apparatus for assessing a patient's training input based on multimodal information according to claim 6, wherein the first calculation module is specifically configured to:
calculating the motion input degree of the patient according to the ratio of the root mean square value of the electromyographic signal to the motion speed, wherein the formula is as follows:
Em=EMGRms/v
wherein Em is the exercise input degree, EMGRms is the root mean square value of the electromyographic signal of the patient's moving limb over one movement cycle, and v is the average speed of the patient's movement over one movement cycle.
8. The apparatus according to claim 6, wherein the perceptual input computational formula is:
Ep=d(gaze,screen changes)
wherein Ep is the perception input degree, gaze is the focal position of the patient's eyes, and screen changes is the position of the moving object on the screen.
9. The apparatus of claim 6, wherein said calculating a cognitive input level of the patient from said brain electrical signals comprises:
decomposing the electroencephalogram signal to obtain the decreased alpha signal and the increased beta and theta signals of the frontal lobe area of the patient, and calculating the cognitive input degree from the ratio of these signals, wherein the specific formula is as follows:
Ec=(β+θ)/α
wherein Ec is the cognitive input degree, α is the alpha signal, β is the beta signal, and θ is the theta signal of the frontal lobe area.
10. The apparatus of claim 6, wherein the emotion input degree calculation formula is:
Ee=Tpositive/Tnegative
wherein Ee is the emotion input degree, Tpositive is the duration for which positive emotion is the patient's dominant emotion, and Tnegative is the duration for which negative emotion is the patient's dominant emotion.
CN201911175432.5A 2019-11-26 2019-11-26 Method and device for evaluating training input degree of patient based on multi-mode information Pending CN111012307A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911175432.5A CN111012307A (en) 2019-11-26 2019-11-26 Method and device for evaluating training input degree of patient based on multi-mode information


Publications (1)

Publication Number Publication Date
CN111012307A true CN111012307A (en) 2020-04-17

Family

ID=70202410

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911175432.5A Pending CN111012307A (en) 2019-11-26 2019-11-26 Method and device for evaluating training input degree of patient based on multi-mode information

Country Status (1)

Country Link
CN (1) CN111012307A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5724987A (en) * 1991-09-26 1998-03-10 Sam Technology, Inc. Neurocognitive adaptive computer-aided training method and system
CN106128201A (en) * 2016-06-14 2016-11-16 北京航空航天大学 The attention training system that a kind of immersion vision and discrete force control task combine
CN107564585A (en) * 2017-07-06 2018-01-09 四川护理职业学院 Brain palsy recovery management system and method based on cloud platform
CN109620265A (en) * 2018-12-26 2019-04-16 中国科学院深圳先进技术研究院 Recognition methods and relevant apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LI CHONG: "Research on patient training engagement and concentration based on robot-assisted neurorehabilitation", China Doctoral Dissertations Full-text Database, Medicine and Health Sciences *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112057040A (en) * 2020-06-12 2020-12-11 国家康复辅具研究中心 Upper limb motor function rehabilitation evaluation method
CN112057040B (en) * 2020-06-12 2024-04-12 国家康复辅具研究中心 Upper limb movement function rehabilitation evaluation method
CN112541541A (en) * 2020-12-10 2021-03-23 杭州电子科技大学 Lightweight multi-modal emotion analysis method based on multi-element hierarchical depth fusion
CN112541541B (en) * 2020-12-10 2024-03-22 杭州电子科技大学 Lightweight multi-modal emotion analysis method based on multi-element layering depth fusion
CN113349780A (en) * 2021-06-07 2021-09-07 浙江科技学院 Method for evaluating influence of emotional design on online learning cognitive load
CN116370788A (en) * 2023-06-05 2023-07-04 浙江强脑科技有限公司 Training effect real-time feedback method and device for concentration training and terminal equipment
CN116370788B (en) * 2023-06-05 2023-10-17 浙江强脑科技有限公司 Training effect real-time feedback method and device for concentration training and terminal equipment

Similar Documents

Publication Publication Date Title
Ahn et al. Wearable sensing technology applications in construction safety and health
CN111012307A (en) Method and device for evaluating training input degree of patient based on multi-mode information
Zheng et al. Unobtrusive and multimodal wearable sensing to quantify anxiety
US10925533B2 (en) Systems and methods for determining human performance capacity and utility of a person-utilized device
US9173582B2 (en) Adaptive performance trainer
KR101739058B1 (en) Apparatus and method for Psycho-physiological Detection of Deception (Lie Detection) by video
WO2014072461A1 (en) Method and device for determining vital parameters
Sehle et al. Objective assessment of motor fatigue in multiple sclerosis: the Fatigue index Kliniken Schmieder (FKS)
Zhang et al. Sleep stage classification using bidirectional lstm in wearable multi-sensor systems
US20160029965A1 (en) Artifact as a feature in neuro diagnostics
JP2013505811A (en) System and method for obtaining applied kinesiology feedback
WO2019111259A1 (en) Methods and systems for determining mental load
KR20140041382A (en) Method for obtaining information about the psychophysiological state of a living being
Przybyło et al. Eyetracking-based assessment of affect-related decay of human performance in visual tasks
Barrios et al. Recognizing digital biomarkers for fatigue assessment in patients with multiple sclerosis
Jiao et al. A quick identification model for assessing human anxiety and thermal comfort based on physiological signals in a hot and humid working environment
Tiwari et al. Movement artifact-robust mental workload assessment during physical activity using multi-sensor fusion
KR101753834B1 (en) A Method for Emotional classification using vibraimage technology
Apicella et al. Preliminary validation of a measurement system for emotion recognition
Ngamsomphornpong et al. Development of Hybrid EEG-fEMG-based Stress Levels Classification and Biofeedback Training System
Jebelli Wearable Biosensors to Understand Construction Workers' Mental and Physical Stress
Baran Thermal imaging of stress: a review
KR102198294B1 (en) Method and System of Brain-Fatigue Evaluation by using Noncontact Vision System
Nagasawa et al. Multimodal stress estimation using multibiological information: Towards more accurate and detailed stress estimation
Lu et al. Measurements of Mental Stress and Safety Awareness during Human Robot Collaboration-Review

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200417