CN118236041B - Cognitive function evaluation method, system, equipment and readable storage medium - Google Patents


Info

Publication number
CN118236041B
CN118236041B (granted patent; application CN202410661827.0A)
Authority
CN
China
Prior art keywords: task, motion, double, data, task motion
Prior art date
Legal status
Active
Application number
CN202410661827.0A
Other languages
Chinese (zh)
Other versions
CN118236041A (en)
Inventor
万巧琴
安然
Current Assignee
Peking University
Original Assignee
Peking University
Priority date
Filing date
Publication date
Application filed by Peking University
Priority to CN202410661827.0A
Publication of CN118236041A
Application granted
Publication of CN118236041B
Legal status: Active
Anticipated expiration

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 — Measuring for diagnostic purposes; Identification of persons
    • A61B 5/4088 — Diagnosing or monitoring cognitive diseases, e.g. Alzheimer's disease, prion diseases or dementia
    • A61B 5/1128 — Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb, using image analysis
    • A61B 5/4803 — Speech analysis specially adapted for diagnostic purposes
    • A61B 5/72 — Signal processing specially adapted for physiological signals or for diagnostic purposes
    • G16H 50/30 — ICT specially adapted for medical diagnosis, medical simulation or medical data mining; for calculating health indices; for individual health risk assessment


Abstract

The application discloses a cognitive function assessment method, system, equipment and readable storage medium, relating to the field of healthcare informatics. The method comprises the following steps: acquiring motion data containing multi-frame images of a target user and memory data containing the target user's voice; extracting parameters from the motion data to obtain motion information, and extracting parameters from the memory data to obtain memory information, wherein the motion information comprises parameters characterizing the target user's motion behavior and the memory information comprises parameters characterizing the target user's memory capacity; and inputting the motion information and the memory information into a pre-trained cognitive function evaluation model, which outputs a cognitive evaluation result. The application improves the robustness of cognitive assessment results.

Description

Cognitive function evaluation method, system, equipment and readable storage medium
Technical Field
The application relates to the field of healthcare informatics, in particular to the technical field of intelligent healthcare, and especially to a cognitive function assessment method, system, equipment and readable storage medium.
Background
Dementia is a syndrome centered on acquired cognitive impairment that leads to a marked decline in patients' daily living and working abilities; the current focus of dementia prevention and treatment is early cognitive function assessment and management. The most prevalent dementia categories at present are Alzheimer's disease and vascular dementia, whose core manifestations in the cognitive domain are impaired memory, executive function and attention. In cognitive assessment, the evaluation of memory, executive function and attention is considered a vital link, since these domains reflect an individual's ability, and degree of impairment, in dealing with complex tasks and external stimuli in daily life. Neuropsychological testing is the primary method of clinical cognitive function assessment, quantifying cognitive performance through tests of memory, executive function and other domains. However, conventional paper-and-pencil tests have various limitations: they are influenced by external factors such as the test environment and the tester's professional level, which makes the evaluation results unstable, and they are time-consuming and labor-intensive. There is therefore a strong need for cost-effective, objective and readily available assessment methods and tools to address these problems.
In populations with cognitive impairment, besides reduced cognitive function across multiple domains, abnormal motor performance is a clinical behavioral marker of cognitive impairment. Movement is a process that requires high-level cognitive control; the two share common anatomical substrates in the brain, particularly areas such as the frontal lobe that govern executive function and attention, which makes motor data relatively sensitive and accurate for evaluating executive function, attention and related domains. The main approach of current intelligent cognitive function assessment is therefore evaluation based on motor behavior. However, using only motion data, which reflects the user's executive function and attention, has a major limitation when it comes to evaluating the memory domain, the most central domain of cognitive impairment, so the resulting cognitive evaluation suffers from low robustness.
Therefore, how to improve the robustness of the cognitive assessment result is a technical problem to be solved in the art.
Disclosure of Invention
The application mainly aims to provide a cognitive function evaluation method, a system, equipment and a readable storage medium, which aim to solve the technical problem of how to improve the robustness of a cognitive evaluation result.
In order to achieve the above object, the present application provides a cognitive function assessment method comprising the steps of:
Acquiring motion data containing multi-frame images of a target user and memory data containing voice of the target user;
Extracting parameters from the motion data to obtain motion information, and extracting parameters from the memory data to obtain memory information, wherein the motion information is a parameter representing the motion behavior of a target user, and the memory information is a parameter representing the memory capacity of the target user;
and inputting the movement information and the memory information into a pre-trained cognitive function evaluation model, and outputting to obtain a cognitive evaluation result.
In one embodiment, the step of extracting parameters from the memory data to obtain memory information includes:
performing voice recognition on the memory data to obtain a voice recognition result;
matching the voice recognition result with a preset standard result to obtain the matching degree between the voice recognition result and the preset standard result;
determining a memory capacity value corresponding to the matching degree based on a preset mapping relation, and taking the memory capacity value corresponding to the matching degree as the extracted memory information, wherein the preset mapping relation comprises the correspondence between different matching degrees and memory capacity values.
In an embodiment, the motion data includes single-task motion data and double-task motion data. The single-task motion data is motion data collected while the target user walks freely for a first preset time period; the double-task motion data is motion data collected while, for a second preset time period, the target user simultaneously performs a movement task and a cognitive calculation task. The step of extracting parameters from the motion data to obtain motion information includes:
Respectively extracting characteristics of the single-task motion data and the double-task motion data to obtain single-task motion parameters and double-task motion parameters, wherein the single-task motion parameters and the double-task motion parameters comprise one or more of the step length, the stride length, the pace speed, the stride frequency, the step length time, the stride time, the single support time and the double support time of bilateral limbs;
Calculating a double-task motion cost based on the single-task motion parameter and the double-task motion parameter;
And taking one or more of the single-task motion parameter, the double-task motion parameter and the double-task motion cost as extracted motion information.
In an embodiment, the step of calculating a bi-task motion cost based on the single-task motion parameter and the bi-task motion parameter comprises:
for each double-task motion parameter, matching the double-task motion parameter against the single-task motion parameters;
if a matched single-task motion parameter exists among the single-task motion parameters, determining the parameter difference obtained by subtracting the double-task motion parameter from the matched single-task motion parameter, and taking the ratio of this parameter difference to the matched single-task motion parameter as the double-task motion cost corresponding to that double-task motion parameter; that is, cost = (single-task parameter − double-task parameter) / single-task parameter.
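The double-task motion cost described above can be sketched as follows; the parameter names and the plain-dictionary layout are illustrative assumptions, not the patent's data format.

```python
# Hypothetical sketch: double-task motion cost, assuming motion parameters
# are keyed by name (e.g. "pace_speed") in plain dictionaries.
def dual_task_cost(single: dict, dual: dict) -> dict:
    """cost = (single - dual) / single, for every parameter present in both."""
    costs = {}
    for name, dual_value in dual.items():
        matched = single.get(name)           # match dual-task param to single-task param
        if matched is None or matched == 0:  # no match (or degenerate baseline): skip
            continue
        costs[name] = (matched - dual_value) / matched
    return costs

single_task = {"pace_speed": 1.20, "stride_length": 1.30}  # free walking
dual_task = {"pace_speed": 0.96, "stride_length": 1.17}    # walking + arithmetic
print(dual_task_cost(single_task, dual_task))
```

A positive cost indicates the parameter degraded under the added cognitive load; a cost near zero indicates little dual-task interference.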
In an embodiment, the step of extracting features of the single-task motion data and the double-task motion data to obtain a single-task motion parameter and a double-task motion parameter includes:
Inputting the single-task motion data and the double-task motion data respectively into a preset pose estimation model, and outputting a single-task motion feature map and a double-task motion feature map annotated with human-body key points;
respectively carrying out image preprocessing on the single-task motion feature image and the double-task motion feature image, wherein the image preprocessing comprises one or more of key point correction, missing value interpolation and image denoising;
And respectively carrying out feature extraction based on the single-task motion feature map and the double-task motion feature map after image preprocessing to obtain single-task motion parameters and double-task motion parameters.
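As a hedged illustration of how a gait parameter might be derived from pose-estimation key points, the sketch below estimates step lengths from horizontal ankle separation across frames. The frame layout, keypoint names and pixel-to-metre scale are all invented for the example; a real pipeline would be considerably more involved.

```python
# Illustrative sketch only: deriving one gait parameter (step length) from
# per-frame keypoints. Real pose models emit many more keypoints, and
# calibration would replace the fixed metres_per_pixel assumption.
def step_lengths(frames, metres_per_pixel=0.002):
    """Approximate step lengths at moments of peak ankle separation.

    frames: list of dicts with 'l_ankle' and 'r_ankle' (x, y) pixel coords.
    Returns one length per local maximum of horizontal ankle separation,
    which roughly coincides with heel strike during walking.
    """
    sep = [abs(f["l_ankle"][0] - f["r_ankle"][0]) for f in frames]
    lengths = []
    for i in range(1, len(sep) - 1):
        if sep[i] >= sep[i - 1] and sep[i] > sep[i + 1]:  # local peak
            lengths.append(sep[i] * metres_per_pixel)
    return lengths
```

From per-step lengths, aggregate parameters such as mean step length and cadence follow directly; applying the same extraction to single-task and double-task footage yields the paired parameters used for the cost calculation.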
In an embodiment, before the step of acquiring multi-frame image data containing images of the target user and aggregating all the image data to obtain motion data, the method further comprises:
receiving a cognitive function evaluation request, and acquiring an operation task based on the cognitive function evaluation request;
collecting multi-frame image data containing images of the target user for a preset time length, and taking the collected multi-frame image data as single-task motion data;
outputting the operation task, collecting, again for the preset time length, multi-frame image data containing images of the target user as the user responds to the operation task, and taking the collected multi-frame image data as double-task motion data;
and taking the single-task motion data and the double-task motion data as motion data.
In one embodiment, the step of collecting multi-frame speech data including the target user speech includes:
Acquiring a memory task based on the cognitive function assessment request;
Outputting the memorizing task, collecting multi-frame voice data containing target user voice based on feedback of the memorizing task, and taking the collected multi-frame voice data as memorizing data.
In addition, in order to achieve the above object, the present application also provides a cognitive function evaluation system including:
the acquisition module is used for acquiring motion data containing multi-frame images of the target user and memory data containing voice of the target user;
The extraction module is used for carrying out parameter extraction on the motion data to obtain motion information, and carrying out parameter extraction on the memory data to obtain memory information, wherein the motion information is a parameter representing the motion behavior of a target user, and the memory information is a parameter representing the memory capacity of the target user;
The detection module is used for inputting the movement information and the memory information into a pre-trained cognitive function evaluation model and outputting a cognitive evaluation result.
In addition, to achieve the above object, the present application also provides a cognitive function assessment apparatus including: the cognitive function evaluation system comprises a memory, a processor and a cognitive function evaluation program which is stored in the memory and can run on the processor, wherein the cognitive function evaluation program realizes the steps of the cognitive function evaluation method when being executed by the processor.
In addition, in order to achieve the above object, the present application also provides a readable storage medium, specifically a computer-readable storage medium, having stored thereon a cognitive function evaluation program which, when executed by a processor, implements the steps of the cognitive function evaluation method described above.
The method comprises: obtaining motion data containing multi-frame images of a target user and memory data containing the target user's voice; extracting parameters from the motion data to obtain motion information and from the memory data to obtain memory information, wherein the motion information comprises parameters characterizing the target user's motion behavior and the memory information comprises parameters characterizing the target user's memory capacity; and inputting the motion information and the memory information into a pre-trained cognitive function evaluation model, which outputs a cognitive evaluation result. Thus, compared with a cognitive function evaluation approach based only on motion data, the embodiments of the application extract both motion information, which reflects the user's executive function and attention, and memory information, which reflects the user's memory capacity, and feed both into the pre-trained cognitive function evaluation model to obtain the cognitive evaluation result. The model fully fuses the motion information and the memory information, exploiting their complementary strengths to evaluate the user's cognitive function comprehensively across memory, executive function and attention, thereby improving the robustness of the cognitive evaluation result.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the application or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below; a person skilled in the art can obviously obtain other drawings from these drawings without inventive effort.
FIG. 1 is a flowchart of a cognitive function assessment method according to a first embodiment of the present application;
Fig. 2 is a schematic flow chart of a cognitive function evaluation method according to a first embodiment of the present application;
FIG. 3 is a schematic flow chart of a cognitive function assessment method according to the present application;
FIG. 4 is a schematic diagram of a training process of a cognitive function assessment model according to the present application;
FIG. 5 is a flowchart of a cognitive function assessment method according to a second embodiment of the present application;
FIG. 6 is a schematic flow chart of a cognitive function assessment method according to a second embodiment of the present application;
FIG. 7 is a flowchart of a cognitive function assessment method according to a fourth embodiment of the present application;
FIG. 8 is a schematic block diagram of a cognitive function assessment system according to an embodiment of the present application;
Fig. 9 is a schematic structural diagram of a cognitive function assessment device according to an embodiment of the present application.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
For a better understanding of the technical solution of the present application, the following detailed description will be given with reference to the drawings and the specific embodiments.
Dementia is a syndrome with acquired cognitive impairment at its core, characterized by a significant decline in the patient's daily living, working and social abilities. The current focus of dementia prevention and treatment is to move the line of defense forward, i.e., to assess and manage the cognitive functions of the elderly early. At its core, dementia manifests in the cognitive domain as impaired memory, executive function and attention; cognitive function testing therefore centers its assessment on the cognitive domains of memory, executive function, attention and the like.
Memory refers to an individual's ability to acquire, store and recall information, including immediate recall and delayed recall. Executive function refers to a set of high-level cognitive processes used to plan, organize, control and monitor behavior in pursuit of goals, comprising three core components: inhibitory control, working memory and cognitive flexibility. Attention underlies an individual's ability to selectively focus and concentrate on external information. In cognitive assessment, the evaluation of memory helps identify whether, and to what extent, an individual's cognitive function is impaired; in particular, the evaluation of immediate and delayed recall provides important information about short-term and long-term memory function. The evaluation of executive function reveals the individual's cognitive control and adaptability when facing complex tasks, giving insight into coping capacity in daily life. The evaluation of attention reveals the individual's concentration, distractibility, attention span and so on in the face of external stimuli. Thus, in cognitive function testing, the assessment of memory, executive function and attention is considered a vital link.
Neuropsychological testing is the primary means of clinical cognitive function detection, quantifying cognitive performance through tests of memory, executive function and other domains. However, conventional paper-and-pencil tests have various limitations: they are influenced by external factors such as the test environment and the tester's professional level, which makes evaluation results unstable, and they are time-consuming and labor-intensive. There is therefore a strong need for cost-effective, objective and readily available detection methods and tools to address these problems.
Recent studies have revealed that, in populations with cognitive disorders, abnormal motor performance is a clinical behavioral marker of cognitive impairment in addition to reduced cognitive function across multiple domains. Movement is a process that requires high-level cognitive control; the two share common anatomical substrates in the brain, particularly areas such as the frontal lobe that govern executive function and attention. Current cognitive function assessment therefore mostly quantifies cognitive performance from motor information, which is relatively sensitive and accurate for assessing executive function, attention and related domains. However, it has certain limitations in assessing memory.
Based on this, the main solutions of the application are: acquiring motion data containing multi-frame images of a target user and memory data containing voice of the target user; extracting parameters from the motion data to obtain motion information, and extracting parameters from the memory data to obtain memory information, wherein the motion information is a parameter representing the motion behavior of a target user, and the memory information is a parameter representing the memory capacity of the target user; and inputting the movement information and the memory information into a pre-trained cognitive function evaluation model, and outputting to obtain a cognitive evaluation result.
According to the application, motion information characterizing the user's gait and memory information characterizing the user's memory capacity are extracted; that is, motion information reflecting the user's executive function and attention and memory information reflecting the user's memory capacity are extracted. Both are input into a pre-trained cognitive function evaluation model, which outputs a cognitive evaluation result. The model fully fuses the motion information and the memory information and, by exploiting their complementary strengths, evaluates the user's cognitive function comprehensively across memory, executive function and attention, thereby improving the robustness of the cognitive evaluation result.
It should be noted that the execution body of this embodiment may be any computing device with data processing, network communication and program execution capabilities, such as a tablet computer, personal computer or mobile phone, or a dedicated cognitive function evaluation device implementing these functions; this embodiment places no specific limitation on it.
Based on this, the present application proposes a cognitive function evaluation method according to a first embodiment, referring to fig. 1, the cognitive function evaluation method includes steps S10 to S30:
Step S10, obtaining motion data containing multi-frame images of a target user and memory data containing voice of the target user;
The target user is the user whose cognitive function is to be assessed. Before obtaining motion data containing multi-frame images of the target user and memory data containing the target user's voice, the motion data and memory data of the target user may first be collected; the collected data are then retrieved for processing.
Specifically, the motion data may be collected through a camera built into, or pre-installed on, the execution body. With a camera, the user does not need to wear an additional data acquisition device, such as one integrating inertial and/or pressure sensors; the user's motion data can be collected in real time simply by filming in a natural environment, which offers advantages such as zero wearables, real-time operation, safety, universality and efficiency. The memory data may likewise be collected by a camera, or by a microphone built into or pre-installed on the execution body; this embodiment places no specific limitation on the means of collection.
After the motion data are collected, invalid data, such as image frames in which the target user does not appear or adjacent image frames with identical content, can be deleted to improve the quality of the motion data used for cognitive function evaluation.
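A minimal sketch of this clean-up step, under two stated assumptions that are not part of the original: a person-detector callback is available, and frames can be compared for equality (e.g. raw byte strings or arrays converted to bytes).

```python
# Sketch: drop frames without the target user and adjacent duplicate frames.
# has_person is an assumed callback (e.g. wrapping a person detector).
def clean_frames(frames, has_person):
    kept, prev = [], None
    for frame in frames:
        if not has_person(frame):  # frame does not contain the target user
            continue
        if frame == prev:          # identical to the previously kept frame
            continue
        kept.append(frame)
        prev = frame
    return kept
```

For example, `clean_frames(raw_frames, detector)` returns the filtered sequence that the later parameter-extraction steps would consume.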
Step S20, performing parameter extraction on the motion data to obtain motion information, and performing parameter extraction on the memory data to obtain memory information, wherein the motion information is a parameter representing the motion behavior of a target user, and the memory information is a parameter representing the memory capacity of the target user;
The motion information includes, but is not limited to, motion behavior parameters such as step length, stride length, pace speed, stride frequency, step time, stride time, single-support time and double-support time; this embodiment places no specific limitation on them. The motion information may be extracted from the motion data in a known manner, and the extraction process is not described in detail in this embodiment.
In one possible implementation manner, the step of extracting parameters from the memory data to obtain the memory information is shown in fig. 2, and includes steps S21 to S23:
Step S21, performing voice recognition on the memory data to obtain a voice recognition result;
step S22, matching the voice recognition result with a preset standard result to obtain the matching degree between the voice recognition result and the preset standard result;
The way the matching degree is calculated can be preset in advance. As one implementation, the matching degree between the speech recognition result and the preset standard result may be the proportion of words of the preset standard result that also appear in the speech recognition result; for example, if the preset standard result contains five words and the speech recognition result correctly reproduces two of them, the matching degree is 2/5. As another implementation, the matching degree may be the ratio of the number of words in the speech recognition result to the number of words in the preset standard result; for example, if the preset standard result contains ten words and the speech recognition result contains five, the matching degree is 5/10. This embodiment places no specific limitation on how the matching degree is calculated.
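The two matching-degree variants can be sketched as follows; the function names are hypothetical, and the inputs are assumed to be pre-tokenized word lists.

```python
# Sketch of the two matching-degree variants (names are illustrative).
def match_shared_words(recognized: list, standard: list) -> float:
    """Variant 1: share of standard-result words also present in the
    recognition result."""
    return len(set(recognized) & set(standard)) / len(standard)

def match_length_ratio(recognized: list, standard: list) -> float:
    """Variant 2: ratio of recognized word count to standard word count."""
    return len(recognized) / len(standard)
```

Variant 1 rewards recall accuracy regardless of extra words; variant 2 only measures how much of the expected material was produced at all.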
Step S23, determining a memory capacity value corresponding to the matching degree based on a preset mapping relation, and taking the memory capacity value corresponding to the matching degree as the extracted memory information, wherein the preset mapping relation comprises the corresponding relation between different matching degrees and memory capacity values.
It is easy to understand that the preset mapping relationship is positively correlated: the higher the matching degree, the larger the corresponding memory capacity value.
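A minimal sketch of such a preset mapping; the thresholds and memory capacity values below are invented for illustration, and only the monotone (positively correlated) relationship is taken from the text.

```python
import bisect

# Assumed band boundaries and scores: matching degrees fall into five bands,
# each mapped to a non-decreasing memory capacity value.
THRESHOLDS = [0.2, 0.4, 0.6, 0.8]   # matching-degree cut points (illustrative)
MEMORY_VALUES = [1, 2, 3, 4, 5]     # memory capacity value per band (illustrative)

def memory_value(matching_degree: float) -> int:
    """Map a matching degree in [0, 1] to its preset memory capacity value."""
    return MEMORY_VALUES[bisect.bisect_right(THRESHOLDS, matching_degree)]
```

Because the lookup is monotone, a higher matching degree can never map to a smaller memory capacity value, as the mapping relation requires.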
And step S30, inputting the movement information and the memory information into a pre-trained cognitive function evaluation model, and outputting to obtain a cognitive evaluation result.
The cognitive evaluation result includes at least the categories cognitively normal and cognitively impaired; it may be further refined into cognitively normal, mild cognitive impairment and dementia, and this embodiment places no specific limitation on the categories. It should be noted that the cognitive evaluation result output by this embodiment is intended for reference by the relevant personnel, who can carry out subsequent processing based on it.
In addition, when the cognitive evaluation result is not cognitively normal, risk early-warning information can be output.
The pre-trained cognitive function evaluation model may specifically be a logistic regression model or an ensemble learner model. Considering that the motion data is composed of a plurality of motion cycles, there is high collinearity among the motion information, and complex nonlinear relations may exist between the motion information and the classification result, with mutual influence among them; a logistic-regression-based model therefore suffers from problems such as inaccurate parameter estimation and poor generalization capability, so that the accuracy and reliability of the cognitive evaluation result are low. In this embodiment, an ensemble learner is adopted to construct the cognitive function evaluation model; the ensemble learner better captures the complex nonlinear relations among the motion information, and on this basis a cognitive function evaluation model based on motion and memory information is constructed. Such a model has stronger robustness and stability, and can improve the accuracy and reliability of the cognitive evaluation result.
Specifically, the cognitive function evaluation model may be a random forest, gradient boosted trees (Gradient Boosting Trees) or an XGBoost model, which is not particularly limited in this embodiment.
In this embodiment, motion data containing multi-frame images of a target user and memory data containing the target user's voice are obtained; parameters are extracted from the motion data to obtain motion information, and from the memory data to obtain memory information, wherein the motion information characterizes the motion behavior of the target user and the memory information characterizes the memory capacity of the target user; the motion information and the memory information are then input into a pre-trained cognitive function evaluation model, which outputs a cognitive evaluation result. Thus, compared with a cognitive function evaluation mode based only on motion data, this embodiment extracts motion information representing the user's gait, which reflects executive function and attention, together with memory information representing the user's memory capacity, and inputs both into the pre-trained cognitive function evaluation model to output the cognitive evaluation result. The model fully fuses the motion information and the memory information and, by exploiting their complementary advantages, comprehensively evaluates the user's cognitive function in terms of memory, executive function and attention, thereby improving the robustness of the cognitive evaluation result.
For example, with XGBoost selected as the cognitive function evaluation model, cognitive function evaluation was performed on more than 550 elderly people aged 60 and above using different input data. Index results that reflect the robustness of the detection result, such as the area under the curve, accuracy, precision, sensitivity, specificity and degree of interpretation, are shown in Table 1 below, where the demographic information in Table 1 includes age, sex, height, weight, and the like. As Table 1 shows, for the cognitive function evaluation mode based on motion information and memory information of this embodiment, the model's area under the curve, accuracy, precision, sensitivity, specificity and degree of interpretation are comprehensively higher (especially the degree of interpretation), so the detection result of cognitive function evaluation based on this model has higher robustness.
Table 1
In the second embodiment of the present application, the same or similar content as in the first embodiment of the present application may be referred to the description above, and will not be repeated. On this basis, referring to fig. 5, before the step of acquiring the motion data including the multi-frame image of the target user and the memory data including the voice of the target user, the method further includes:
Step A10, receiving a cognitive function evaluation request, and acquiring an operation task based on the cognitive function evaluation request;
The user may preset the triggering condition of the cognitive function evaluation request in advance, for example, initiating the request through a mobile phone APP (Application), shooting a preset start-detection gesture, collecting a preset start-detection voice, etc., which is not specifically limited in this embodiment.
The operation task is an image file and/or a video file and/or an audio file preset in advance, for example, an image presenting the serial-sevens task (repeatedly subtracting 7 starting from 100), a prompt such as "please repeatedly subtract 7 from 100 while walking", etc., which is not particularly limited in this embodiment.
Step A20, collecting multi-frame image data containing target user images in preset time length, and taking the collected multi-frame image data as single-task motion data;
The preset duration may be any duration set in advance; in particular, it is generally longer than one complete walking cycle.
Step A30, outputting the operation task, again collecting, for a preset duration, multi-frame image data containing images of the target user responding to the operation task, and taking the collected multi-frame image data as double-task motion data;
The operation task can be output through the display screen and/or the speaker, which may use an output form friendly to elderly users, characterized by large font size, high resolution, high volume, and the like. Displaying the operation task prompt guides the user to complete the operation task efficiently and accurately. Further, after the display screen and/or the speaker has completely output the operation task (for example, after the speaker has repeatedly played the operation task for a preset time), output-completion confirmation information can be fed back, and after this confirmation information is received, the collection of image data of the preset duration is started, thereby ensuring that the collected image data is of the target user walking during the task, that is, the double-task motion data of the target user.
It should be noted that the single-task motion data and the double-task motion data are not specifically limited in this embodiment, and the durations of the two acquisitions may be the same or different.
And step A40, taking the single-task motion data and the double-task motion data as motion data.
Considering that different types of randomly collected motion data of the user differ greatly in their sensitivity and accuracy for cognitive function evaluation, this embodiment collects the two most sensitive types of motion data: single-task motion data and double-task motion data. The single-task motion data refers to motion data collected while the user walks freely at a self-selected pace; the double-task motion data refers to motion data collected while the user walks at a self-selected pace and simultaneously executes the operation task according to the prompt, that is, walking while calculating. Using both single-task and double-task motion data maximizes the ability to estimate the cognitive level based on motion information.
In one possible implementation, referring to fig. 6, after the step of receiving the cognitive function assessment request, the method further includes:
step B10, acquiring a memory task based on the cognitive function evaluation request;
The memory task is a memory test task preset in advance, such as repeating five simple two-character words. The preset memory test task collects the user's memory data in a single session or in multiple sessions at regular intervals, to obtain voice data reflecting the user's memory performance.
And step B20, outputting the memorizing task, collecting multi-frame voice data containing target user voice based on feedback of the memorizing task, and taking the collected multi-frame voice data as memorizing data.
Specifically, the memory task can be output through the display screen and/or the speaker; similar to the collection of the double-task motion data, the voice data can be collected after the confirmation information that the memory task has been completely output is received.
Considering that different types of memory data differ greatly in their sensitivity and accuracy for memory function evaluation, this embodiment collects the voice data corresponding to the delayed recall test task, which is the most sensitive in reflecting memory function, thereby maximizing the ability to evaluate the cognitive level based on memory information.
In the third embodiment of the present application, the same or similar contents as those of the first and second embodiments of the present application can be referred to the description above, and the description thereof will not be repeated. On the basis, the motion data comprises single-task motion data and double-task motion data, the single-task motion data is motion data when the target user freely walks in a first preset time period, the double-task motion data is motion data when the target user simultaneously performs motion and cognition paradigm calculation in a second preset time period, and the step of extracting parameters from the motion data to obtain motion information comprises the following steps:
step C10, respectively extracting characteristics of the single-task motion data and the double-task motion data to obtain single-task motion parameters and double-task motion parameters, wherein the single-task motion parameters and the double-task motion parameters comprise one or more of step length, stride length, pace speed, stride frequency, step length time, stride time, single support time and double support time of bilateral limbs;
Step length: the longitudinal linear distance from the heel strike of one foot to the heel strike of the other foot while walking;
stride length: the longitudinal straight line distance between the heel of the same side from the first landing to the second landing during walking, namely the sum of the step sizes of two sides in one gait cycle;
Step frequency: the number of steps taken in a time unit, commonly used units: step/min;
Gait speed: including step speed and stride speed, obtained by dividing the step length by the step time, or the stride length by the stride time.
Gait time: the method comprises a step length time and a stride length time, wherein the step length time refers to the time from one heel to the other heel during walking; the stride time refers to the time from heel first landing to heel second landing of the same foot during walking, namely the total time of a single complete gait cycle, and is also the sum of the step times at two sides of one gait cycle;
Single support time: the time from the heel strike of one lower limb to the toe-off of the same foot, which equals the step time of the contralateral lower limb;
double support time: the time for the feet to simultaneously support the body weight.
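Under the definitions above, the temporal gait parameters can be derived from heel-strike timestamps; a minimal sketch, with hypothetical function and field names:

```python
def gait_parameters(left_hs, right_hs, stride_length_m):
    """Temporal gait parameters from heel-strike (HS) timestamps in seconds.

    left_hs: two consecutive left heel strikes bounding one gait cycle;
    right_hs: the right heel strike between them;
    stride_length_m: the measured stride length (m) for that cycle.
    """
    stride_time = left_hs[1] - left_hs[0]         # one complete gait cycle (s)
    step_time_r = right_hs[0] - left_hs[0]        # left HS -> right HS
    step_time_l = left_hs[1] - right_hs[0]        # right HS -> next left HS
    cadence = 60.0 / (stride_time / 2.0)          # steps per minute
    stride_speed = stride_length_m / stride_time  # stride length / stride time
    return {"stride_time": stride_time, "step_time_r": step_time_r,
            "step_time_l": step_time_l, "cadence": cadence,
            "stride_speed": stride_speed}
```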
A motion feature extraction model may be constructed by means of computer vision, and the motion parameters extracted based on it. Specifically, the motion feature extraction model may be a convolutional neural network, a deep learning network, a Kalman filter, Speeded-Up Robust Features (SURF), Scale-Invariant Feature Transform (SIFT), or a similar model, which is not particularly limited in this embodiment.
Step C20, calculating a double-task motion cost based on the single-task motion parameter and the double-task motion parameter;
The double-task motion cost is used to quantify the degree to which the operation task interferes with walking performance: double-task motion cost = [(single-task gait performance - double-task gait performance) / single-task gait performance] × 100%. The double-task motion cost specifically includes one or more of the double-task stride speed cost, double-task stride cost, double-task single support cost and double-task double support cost of the bilateral limbs.
And step C30, taking one or more of the single-task motion parameter, the double-task motion parameter and the double-task motion cost as extracted motion information.
The motion information includes, but is not limited to, the single-task motion parameters, the double-task motion parameters and the double-task motion cost, and may further include the degree of variation of the single-task motion parameters and of the double-task motion parameters. Specifically, the degree of variation refers to the variation of the various spatiotemporal gait parameters over a plurality of gait cycles, expressed by the coefficient of variation, where coefficient of variation = standard deviation / mean × 100%, and includes one or more of the stride variability and the step-length variability.
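The coefficient of variation defined above can be computed directly; a minimal sketch (here the population standard deviation is used, but whether sample or population SD applies is an implementation choice not fixed by the text):

```python
import statistics

def coefficient_of_variation(values):
    # CV = standard deviation / mean * 100%, per the definition above
    return statistics.pstdev(values) / statistics.mean(values) * 100.0
```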
In a possible implementation manner, the step of calculating a dual-task motion cost based on the single-task motion parameter and the dual-task motion parameter includes:
Step D10, matching the double-task motion parameters with the single-task motion parameters according to each double-task motion parameter;
When both double-task motion parameters and single-task motion parameters exist, each double-task motion parameter is matched against each single-task motion parameter in turn, until either the matching succeeds or matching against all single-task motion parameters has failed; for a double-task motion parameter that fails to match any single-task motion parameter, there is no corresponding double-task motion cost.
And D20, if the matched single-task motion parameters matched with the double-task motion parameters exist in the single-task motion parameters, determining a parameter difference value of the matched single-task motion parameters subtracted by the double-task motion parameters, and taking the ratio between the parameter difference value and the matched single-task motion parameters as the double-task motion cost corresponding to the double-task motion parameters.
For each double-task motion parameter, it is detected whether a matching single-task motion parameter exists among the single-task motion parameters. Specifically, a double-task motion parameter and a single-task motion parameter match when they are two motion parameters of the same parameter type; for example, the single-task motion parameter matching the double-task stride parameter is the single-task stride parameter. Thus, for a double-task motion parameter of the stride-speed type, the corresponding double-task motion cost, i.e. the double-task stride speed cost, is [(single-task stride speed - double-task stride speed) / single-task stride speed] × 100%. Similarly, based on this calculation mode, the double-task motion cost corresponding to each double-task motion parameter can be calculated.
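Steps D10 and D20 amount to matching by parameter type and applying the cost formula; a minimal sketch, assuming parameters are keyed by hypothetical type names:

```python
def dual_task_costs(single, dual):
    """single, dual: dicts mapping a parameter type (e.g. 'stride_speed')
    to its value. Matching is by identical parameter type; a double-task
    parameter with no matching single-task parameter produces no cost."""
    costs = {}
    for name, dual_value in dual.items():
        if name in single:                    # matched single-task parameter
            st = single[name]
            # cost = (single - dual) / single * 100%
            costs[name] = (st - dual_value) / st * 100.0
    return costs
```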
In practical use, it was found that the double-task motion cost corresponding to the pace of the right limb contributes significantly to the robustness of the detection result; in view of this, in this embodiment the double-task motion cost at least includes the double-task pace cost of the right limb. The index results in Table 1 show that when the double-task pace cost of the right limb is deleted from the motion information, the area under the model curve, accuracy, precision, sensitivity, specificity and model interpretation degree are 0.980, 0.880, 0.895, 0.925, 0.943 and 0.777 respectively; the integrated value of each index is still higher than with the other input data, but compared with motion information that includes the double-task pace cost of the right limb, the model interpretation degree is reduced. The double-task pace cost of the right limb can therefore obviously improve the interpretation degree of the model, and including it in the motion information further improves the robustness of the detection result.
In the fourth embodiment of the present application, the same or similar contents as those of the first, second and third embodiments may be referred to the above description, and the description thereof will be omitted. On this basis, referring to fig. 7, the step of extracting features of the single-task motion data and the double-task motion data to obtain a single-task motion parameter and a double-task motion parameter respectively includes:
Step E10, respectively inputting the single-task motion data and the double-task motion data into a preset pose estimation model, and outputting a single-task motion feature map and a double-task motion feature map marked with human body keypoints;
The preset pose estimation model may specifically be the deep-learning-based OpenPose model, which can identify markerless human body keypoint information, including the connection relations among joints, so as to reconstruct the posture of the human body. Specifically, an analysis of the user's motion data is developed on the basis of the OpenPose deep neural network: the markerless human body keypoints of the user in the video are identified through the OpenPose model, and the related data are output, including skeletal joint pixel coordinates, confidence levels, the connection relations among keypoints, and the like. Pixel coordinates refer to positions in the image, with the upper-left corner of the image as the origin, the horizontal direction as the x-axis and the vertical direction as the y-axis; the coordinates of each keypoint are expressed as (x, y) to describe its position in the image, and a motion feature map is output.
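OpenPose's JSON output stores each person's 2D keypoints as a flat [x, y, confidence, ...] list; a small parser sketch (the function name is hypothetical):

```python
import json

def parse_openpose_frame(json_str):
    """Parse one frame of OpenPose JSON output into per-person keypoints.

    Each person's "pose_keypoints_2d" entry is a flat list
    [x0, y0, c0, x1, y1, c1, ...] of pixel coordinates and confidences.
    """
    frame = json.loads(json_str)
    people = []
    for person in frame.get("people", []):
        flat = person["pose_keypoints_2d"]
        # regroup into (x, y, confidence) triples, one per body keypoint
        people.append([(flat[i], flat[i + 1], flat[i + 2])
                       for i in range(0, len(flat), 3)])
    return people
```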
E20, respectively carrying out image preprocessing on the single-task motion feature map and the double-task motion feature map, wherein the image preprocessing comprises one or more of key point correction, missing value interpolation and image denoising;
The keypoint correction may specifically use a generative adversarial network (GAN) to identify and correct erroneously identified keypoints (abnormal frames). Abnormal frames include inconsistent poses caused by pose estimation errors, such as bilateral limb mismatches and gait cycle recognition disorders, as well as abnormal conditions due to other causes (e.g., movement irregularities, environmental changes, etc.). The invention trains the generative adversarial network on a large amount of motion video so that it learns the full characteristics of motion data and generates keypoints consistent with the actual gait, and through parameter adjustment ensures the quality and authenticity of the generated keypoints as well as the consistency and naturalness of the motion.
In the data acquisition process, due to factors such as video quality, light conditions or body movement speed, missing values or noise may exist in the motion feature map output by OpenPose. To ensure data integrity and accuracy, interpolation functions may be used to estimate the value of the missing data point from the information of the known data point to fill in the missing value.
When processing noise, a filter function may be used to smooth the data, making it smoother and more reliable. In addition, the movement cycle refers to the time interval required for the user to return to the same position or state from a specific starting point during a specific movement. The movement cycle is one of the important indicators in movement analysis, reflecting the rhythm and stability of human movement.
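The interpolation and smoothing steps can be sketched as follows (a linear interpolator assuming known values at both ends of the series, plus a simple moving-average filter; both are illustrative stand-ins for the interpolation and filter functions mentioned above):

```python
def fill_missing(series):
    """Linearly interpolate None entries (missing keypoint coordinates)
    from the nearest known neighbours; assumes the first and last
    entries are known."""
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for i, v in enumerate(out):
        if v is None:
            left = max(k for k in known if k < i)
            right = min(k for k in known if k > i)
            t = (i - left) / (right - left)
            out[i] = out[left] + t * (out[right] - out[left])
    return out

def moving_average(series, window=3):
    """Simple smoothing filter to suppress jitter in the trajectory."""
    half = window // 2
    return [sum(series[max(0, i - half):i + half + 1]) /
            len(series[max(0, i - half):i + half + 1])
            for i in range(len(series))]
```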
And E30, respectively carrying out feature extraction on the single-task motion feature map and the double-task motion feature map which are subjected to image preprocessing to obtain a single-task motion parameter and a double-task motion parameter.
Coordinate information of the key points, that is, x and y coordinates of each key point, is extracted from the data output from OpenPose. By analyzing the change of the coordinates of the key points, the motion cycle characteristics are identified, including the motion pattern analysis such as differential operation of the coordinates of the key points. And identifying a motion event according to the key point coordinate change analysis result, taking the most typical gait event as an example, wherein the gait event is based on the specific position and motion state change of specific key points such as heels, toes and the like in the motion process, and marks the beginning and ending of a walking cycle and specific occurrence time. The identified motion event time is marked in the data for subsequent motion parameter calculation and analysis.
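Taking heel strike as the example gait event, it can be marked at local maxima of the heel keypoint's image y-coordinate (image y grows downward, so ground contact is a maximum); a hypothetical sketch:

```python
def detect_heel_strikes(heel_y, fps):
    """Mark heel-strike events as local maxima of the heel keypoint's
    image y-coordinate; returns event times in seconds."""
    events = []
    for i in range(1, len(heel_y) - 1):
        if heel_y[i] >= heel_y[i - 1] and heel_y[i] > heel_y[i + 1]:
            events.append(i / fps)   # frame index -> time in seconds
    return events
```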
Recovering the real-world scale and calculating real-world three-dimensional gait parameters. The real-world scale refers to a scale that relates or matches the OpenPose model output to the real world; ensuring that the model scale matches the real-world scale guarantees the accuracy and reliability of the results. Specifically, first, the scale information in the image output by the OpenPose model is extracted, including reference data about the user, the environment, and so on. By analyzing the position changes of keypoints in the image, the ranges of keypoint movement in the horizontal and vertical directions are counted, and the scale of the human body posture is deduced. The scale involves parameters such as the user's length in the image, the focal length and field of view of the camera, and scale information such as the distance between the camera and the photographed human body. The scale information provides the true scale of the human posture in the image, helps convert pixel coordinates into physical coordinates in the real world, and provides basic data for the subsequent calculation and analysis of motion parameters.
After the rectified and processed keypoint data is acquired, these data are used to calculate a series of real world motion parameters. By analyzing the relative position and angular changes between the keypoints, the motion parameters can be extracted.
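One simple way to recover a real-world scale is to use the subject's known body height as the reference (this particular calibration choice is an assumption for illustration, not prescribed by the text):

```python
def pixels_per_metre(keypoints, subject_height_m):
    """Estimate the image scale from the subject's known height: the
    vertical span of the detected keypoints is taken to correspond
    to the real body height."""
    ys = [y for _, y in keypoints]
    return (max(ys) - min(ys)) / subject_height_m

def to_metres(pixel_distance, scale):
    # convert a pixel distance to a real-world distance
    return pixel_distance / scale
```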
For example, in order to facilitate understanding of the technical concept or principle of the cognitive function assessment method according to the present embodiment combined with the first, second and third embodiments, referring to fig. 3, fig. 3 provides a schematic flow chart of a cognitive function assessment method, which is specifically as follows:
Step S100, obtaining motion data of a user;
As an example, the embodiment may collect the motion data of the user in a single time or multiple times periodically, so as to obtain the video information of the motion data of the user.
As another example, since different types of randomly collected motion data of the user differ greatly in their sensitivity and accuracy for cognitive function evaluation, the invention proposes the operation tasks that are most sensitive in reflecting cognitive function, maximizing the ability to evaluate the cognitive level based on motion information. The two operation tasks comprise single-task motion and double-task motion. Single-task motion means that the user walks naturally at a self-selected pace; double-task motion means that the user walks at a self-selected pace while executing the operation task according to the prompt, that is, walking while calculating. During walking, the user does not need to wear any equipment and only needs to walk normally on flat, obstacle-free ground.
Step S200, extracting motion information by utilizing the motion data of the user recorded in the step S100;
In this embodiment, the motion information may include one or more motion characteristics such as step length, stride length, pace, stride frequency, step time, stride time, single support time, double support time, the double-task motion cost of the motion parameters, and the variability of the motion parameters.
As an example, this embodiment may obtain the motion information by inputting the motion data into a motion information extraction model constructed by means of computer vision. The motion feature extraction model may employ algorithms such as a convolutional neural network, a deep learning network, a Kalman filter, Speeded-Up Robust Features (SURF) and Scale-Invariant Feature Transform (SIFT).
Step S300, memory data of a user is obtained;
as an example, the embodiment may collect the memory data of the user in a single time or multiple times periodically, to obtain the memory data representation of the user.
As another example, since different types of randomly collected cognitive behaviors of the user differ greatly in their sensitivity and accuracy for memory function evaluation, the invention proposes the delayed recall test task, which is the most sensitive in reflecting memory function, maximizing the ability to evaluate the cognitive level based on memory information. The memory data is obtained as follows: according to the prompt, the user memorizes the simple two-character words given before the operation task, and recalls those words after the operation task is completed according to the prompt; the data collected in this process constitutes the memory data.
Step S400, extracting memory information by using the memory data of the user recorded in the step S300;
step S500, outputting a cognitive evaluation result of the user by inputting the movement information and the memory information by using a pre-trained target classification model (namely a cognitive function evaluation model);
the target classification model is a classification model which is trained in advance and used for identifying cognitive impairment based on movement and memory information, and specifically the training flow of the model is shown in fig. 4:
step S510, obtaining sample personnel movement and memory information, wherein the movement and memory information refers to a plurality of groups of movement samples, memory samples and corresponding sample labels thereof, and the sample labels comprise cognitive normal and cognitive dysfunction;
It should be noted that the motion and memory data include a plurality of sets of motion samples, memory samples and their corresponding sample labels. A motion sample refers to the motion information extracted from a sample person's motion performance, including parameters such as spatiotemporal motion characteristics, motion stability, motion variability and double-task motion cost (the degree to which the cognitive task performed while walking affects walking performance) during walking. A memory sample is obtained by extracting memory information from the memory task: before the walking task, the sample person memorizes the simple two-character words according to the prompt, and after the walking task ends, the number of words correctly repeated is counted and a memory score is calculated. The sample label refers to the cognitive performance of the sample person, that is, after neuropsychological evaluation of the sample person, the cognitive performance is confirmed according to a clinical threshold: when the sample person's cognitive function score is at a normal level, the sample label may be recorded as 0; when it is at a cognitive impairment level, the sample label may be recorded as 1.
Step S520, preprocessing the movement of the sample personnel and the memory sample data;
In this embodiment, after obtaining the movement and memory information of the sample person, the data preprocessing mainly includes data cleaning. Data cleansing refers to the need to cleanse the original motion and memory data, detect and process outliers, error values, duplicate values and missing values in the data. And finally, consistency checking is carried out on the data, so that the consistency and the accuracy of the data are ensured. The outliers may be due to errors or equipment failures in the acquisition of the motion data and the memory data, which may be removed, replaced or filled in by interpolation methods after the outliers are identified. Outlier detection methods include, but are not limited to, statistical-based methods (e.g., Z-score method, box plot method), distance-based methods (e.g., K-nearest neighbor method, isolated forest method), and cluster-based methods (e.g., DBSCAN algorithm). The error value may be due to errors in the input or transmission of motion information and memory information, and may need to be corrected or eliminated. The error value may include values outside of a reasonable range, invalid formats, or inconsistent data. Methods of handling error values include, but are not limited to, culling abnormal records or correcting using techniques such as interpolation. Repeated values can negatively impact the modeling process, requiring repeated value detection and processing of the data. The duplicate values may be identified by simple comparison or based on similarity of features. When duplicate records are found, they are deleted or merged into a single record. 
The missing value refers to that the value of the attribute of the motion and memory information is missing or not recorded, and may be caused by errors, equipment faults and the like in the information extraction process, and the method for processing the missing value includes, but is not limited to, deleting the sample where the missing value is located, filling the missing value by using a mean value or a median value, filling by using an interpolation method, predicting by using a regression model and the like. Consistency checks include, but are not limited to, verifying logical relationships in the data, such as sample personnel and their motion information, memory information, homology of sample labels, and uniformity of numerical attribute units, etc., to find that the data is inconsistent, and correction or adjustment is needed to ensure the quality and reliability of the data.
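Z-score-based outlier detection, one of the statistical methods named above, can be sketched as:

```python
import statistics

def zscore_outliers(values, threshold=3.0):
    """Flag values whose z-score magnitude exceeds the threshold."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []   # no spread, nothing to flag
    return [v for v in values if abs(v - mean) / sd > threshold]
```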
Step S530, dividing the movement and memory sample data of the sample person into a training data set and a test data set.
In this embodiment, by acquiring a sufficient number of data samples and movement and memory information thereof, the movement and memory information (data) is divided into a training data set and a test data set according to a preset proportion, and the training data set is used to construct a cognitive assessment model based on the movement and memory information, and to ensure generalization capability and stability of the model.
Dividing the training data set and the test data set is an important step in the construction process of the cognitive assessment model, and aims to keep the independence of data in the modeling process so as to assess the performance of the model on unseen data. The training data set is used to train parameters of the model, while the test data set is used to evaluate the predictive ability of the model on unknown data. In this embodiment, randomness is maintained in dividing the training data set and the test data set to ensure that samples in both data sets can represent the overall data distribution. The samples in the data set are divided into a training data set and a test data set according to a certain proportion by adopting a random sampling method, wherein the common proportion is that 70% of samples are used for training and 30% of samples are used for testing. When the data set has the condition of unbalanced categories, the data set is divided by adopting a hierarchical sampling method, so that the similarity of category distribution in the training data set and the test data set is ensured, and the problem of inaccurate performance evaluation of the model on certain categories is avoided. To ensure repeatability of the modeling results, the same partitioning method and random seeds are used in each training to ensure consistency and comparability of the results.
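The stratified split with a fixed random seed described above can be sketched as follows (the function name and ratio are illustrative):

```python
import random

def stratified_split(labels, test_ratio=0.3, seed=42):
    """Stratified random split: sample test_ratio of the indices within
    each label class so class proportions match in both subsets.
    A fixed seed keeps the partition reproducible across runs."""
    rng = random.Random(seed)
    by_label = {}
    for i, y in enumerate(labels):
        by_label.setdefault(y, []).append(i)
    test_idx = set()
    for idx in by_label.values():
        rng.shuffle(idx)
        test_idx.update(idx[:round(len(idx) * test_ratio)])
    train = [i for i in range(len(labels)) if i not in test_idx]
    return train, sorted(test_idx)
```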
Step S540, based on the training data set, constructing a classification model by means of an ensemble learning algorithm, and obtaining a trained classification model through hyper-parameter optimization.
When constructing the cognitive assessment model based on the movement and memory information, the movement parameters and memory score data in the training data set are input into a machine learning model. During training, an ensemble learning algorithm such as random forest, gradient boosted trees (Gradient Boosting Trees), or XGBoost is used to construct the classification model; these algorithms combine the classification results of multiple base learners, improving the robustness and accuracy of the model.
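A minimal sketch of ensemble training with hyper-parameter optimization, assuming scikit-learn; the synthetic data and the parameter grid are illustrative stand-ins for the motion-parameter and memory-score features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the movement-parameter / memory-score feature matrix.
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),   # ensemble of decision trees
    param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
    cv=3,                                     # grid search with 3-fold CV
    scoring="roc_auc",
)
search.fit(X, y)
model = search.best_estimator_                # the trained classification model
```

`GridSearchCV` is only one of several hyper-parameter search strategies; random or Bayesian search would fit the same step.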
Step S550, performing model evaluation on the trained classification model based on the training data set.
Comprehensive model performance evaluation is carried out on the trained classification model based on the training data set. The evaluation indicators include sensitivity, specificity, positive predictive value, negative predictive value, Brier score, HL test, receiver operating characteristic (ROC) curve, AUC, Youden index, optimal threshold, F1 value, precision-recall (PR) curve, Kolmogorov-Smirnov (KS) statistic, Bayesian Information Criterion (BIC), average precision (Average Precision), and Bayesian area under the ROC curve (Bayesian Area Under the ROC Curve, BAUC). These indicators cover the performance of the model in different respects, such as the ability to identify true positives and true negatives, the accuracy of predicting positives and negatives, the degree of calibration of the model, and the overall classification performance. Through these indicators, the strengths and weaknesses of the trained classification model can be comprehensively understood, providing important references for further improvement and optimization. Here, sensitivity (Sensitivity), also called the true positive rate (True Positive Rate, TPR), is the proportion of positive samples correctly identified as positive. Specificity (Specificity), also known as the true negative rate (True Negative Rate, TNR), is the proportion of negative samples correctly identified as negative. The positive predictive value (Positive Predictive Value, PPV) is the proportion of samples predicted positive by the model that are actually positive. The negative predictive value (Negative Predictive Value, NPV) is the proportion of samples predicted negative that are actually negative. The Brier score evaluates the prediction accuracy of the model by calculating the mean squared error between the model's predicted probabilities and the actual observations.
The HL test (Hosmer-Lemeshow Test) tests the agreement between observed and expected values at different predicted probability levels, assessing the goodness of fit of the model. The ROC curve is plotted to show the trade-off between the sensitivity and specificity of the model at different thresholds. AUC (Area Under the ROC Curve) is the area under the ROC curve, representing the probability that the model classifies correctly across all possible thresholds. The Youden index (Youden's Index), calculated as sensitivity + specificity − 1, is used to select the best classification threshold. The optimal threshold is the classification threshold selected on the ROC curve at which the overall performance of the model is best. The F1 value, the harmonic mean of precision and recall, comprehensively considers both and is used to evaluate the overall performance of the classification model. The precision-recall (PR) curve is drawn to evaluate the variation between precision and recall at different thresholds. The Kolmogorov-Smirnov (KS) statistic measures the difference between the score distributions of the model on positive and negative samples and can help determine the optimal classification threshold. The Bayesian Information Criterion (BIC) evaluates the complexity and goodness of fit of the model; the best model is selected by minimizing the BIC value. Average precision (Average Precision) evaluates the integrated area under the PR curve, reflecting the average precision of the model at different recall rates.
The Bayesian area under the ROC curve (BAUC) takes into account the prior probability distribution and the loss function of the classifier and is used to evaluate classifier performance.
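Several of the indicators named above can be computed as in the following sketch (assuming scikit-learn; the eight probability scores are toy values for illustration):

```python
import numpy as np
from sklearn.metrics import (
    brier_score_loss, confusion_matrix, roc_auc_score, roc_curve,
)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3])
y_pred = (y_prob >= 0.5).astype(int)          # default classification threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                  # true positive rate (TPR)
specificity = tn / (tn + fp)                  # true negative rate (TNR)
auc = roc_auc_score(y_true, y_prob)           # area under the ROC curve
brier = brier_score_loss(y_true, y_prob)      # mean squared error of probabilities

# Youden index J = TPR - FPR = sensitivity + specificity - 1 at each threshold;
# the threshold maximising J is taken as the optimal classification threshold.
fpr, tpr, thresholds = roc_curve(y_true, y_prob)
best_threshold = thresholds[np.argmax(tpr - fpr)]
```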
Step S560, testing the trained classification model based on the test data set, and taking the trained classification model passing the test as a target classification model.
When the trained classification model is tested on the test data set, the performance of the model on new samples is evaluated to verify its generalization ability and stability; the model performance evaluation refers to the evaluation contents of step S550. Cross-validation is performed to verify the consistency and stability of the model across different subsets and to ensure its robustness. Error analysis is also carried out: the prediction results of the model on the test data set are analyzed, and misclassified samples and model bias are identified so that the performance of the model can be further improved. The trained classification model that passes the test is compared with other classification models to evaluate its relative performance and determine the optimal model for cognitive assessment. Finally, the prediction results of the model on the test data set are interpreted, and the features and decision process considered by the model are analyzed to provide interpretability and credibility for cognitive assessment based on movement and memory information.
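Cross-validation for checking consistency and stability across subsets might look like the following sketch (assuming scikit-learn; the data is synthetic):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=150, n_features=10, random_state=1)
scores = cross_val_score(
    RandomForestClassifier(random_state=1), X, y,
    cv=5, scoring="roc_auc",       # one AUC score per held-out fold
)
mean_auc, std_auc = scores.mean(), scores.std()   # low std suggests stability
```

A large spread between fold scores would signal that the model's performance depends on the particular subset, i.e. poor robustness.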
And step S600, carrying out cognitive impairment risk early warning on the user based on the cognitive assessment result.
It should be noted that the above examples are only intended to aid understanding of the present application and do not limit the cognitive function evaluation method of the application; further simple transformations based on this technical concept all fall within the scope of protection of the application.
In addition, an embodiment of the present application further provides a cognitive function evaluation system, referring to fig. 8, where the cognitive function evaluation system includes:
An acquisition module 10, configured to acquire motion data including a multi-frame image of a target user and memory data including voice of the target user;
The extraction module 20 is configured to perform parameter extraction on the motion data to obtain motion information, and perform parameter extraction on the memory data to obtain memory information, where the motion information is a parameter representing a motion behavior of a target user, and the memory information is a parameter representing a memory capacity of the target user;
the detection module 30 is configured to input the movement information and the memory information into a pre-trained cognitive function evaluation model, and output a cognitive evaluation result.
In an embodiment, the extracting module 20 is further configured to:
performing voice recognition on the memory data to obtain a voice recognition result;
Matching the voice recognition result with a preset standard result to obtain the matching degree between the voice recognition result and the preset standard result;
And determining a memory capacity value corresponding to the matching degree based on a preset mapping relation, and taking the memory capacity value corresponding to the matching degree as the extracted memory information, wherein the preset mapping relation comprises the corresponding relation between different matching degrees and the memory capacity value.
In an embodiment, the motion data includes single-task motion data and double-task motion data, the single-task motion data is motion data when the target user walks freely in a first preset time period, the double-task motion data is motion data when the target user performs motion and mathematical operation simultaneously in a second preset time period, and the extraction module 20 is further configured to:
Respectively extracting characteristics of the single-task motion data and the double-task motion data to obtain single-task motion parameters and double-task motion parameters, wherein the single-task motion parameters and the double-task motion parameters comprise one or more of the step length, the stride length, the pace speed, the stride frequency, the step length time, the stride time, the single support time and the double support time of bilateral limbs;
Calculating a double-task motion cost based on the single-task motion parameter and the double-task motion parameter;
And taking one or more of the single-task motion parameter, the double-task motion parameter and the double-task motion cost as extracted motion information.
In an embodiment, the extracting module 20 is further configured to:
for each double-task motion parameter, matching the double-task motion parameter against the single-task motion parameters;
if a matched single-task motion parameter corresponding to the double-task motion parameter exists among the single-task motion parameters, determining the parameter difference obtained by subtracting the double-task motion parameter from the matched single-task motion parameter, and taking the ratio between the parameter difference and the matched single-task motion parameter as the double-task motion cost corresponding to the double-task motion parameter.
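The matching and cost computation above — cost = (single − dual) / single for each matched parameter — can be sketched as follows (parameter names and values are illustrative):

```python
def dual_task_cost(single: dict, dual: dict) -> dict:
    costs = {}
    for name, dual_value in dual.items():
        if name in single and single[name] != 0:  # matched single-task parameter
            diff = single[name] - dual_value      # parameter difference
            costs[name] = diff / single[name]     # ratio = dual-task motion cost
    return costs

single_params = {"gait_speed": 1.2, "stride_length": 1.3}
dual_params = {"gait_speed": 0.9, "stride_length": 1.17, "cadence": 100}
costs = dual_task_cost(single_params, dual_params)
# "cadence" has no single-task counterpart here, so no cost is produced for it.
```

A larger cost indicates a larger decline of the motion parameter under the concurrent cognitive load.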
In an embodiment, the extracting module 20 is further configured to:
Inputting the single-task motion data and the double-task motion data into a preset human pose estimation model respectively, and outputting a single-task motion feature map and a double-task motion feature map marked with key points of a human body;
respectively carrying out image preprocessing on the single-task motion feature image and the double-task motion feature image, wherein the image preprocessing comprises one or more of key point correction, missing value interpolation and image denoising;
And respectively carrying out feature extraction based on the single-task motion feature map and the double-task motion feature map after image preprocessing to obtain single-task motion parameters and double-task motion parameters.
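A highly simplified sketch of deriving gait parameters from ankle-keypoint trajectories (assuming NumPy; the step-detection heuristic, frame rate, and pixel-to-metre scale are assumptions for illustration, not the application's extraction method):

```python
import numpy as np

def gait_parameters(left_ankle_x, right_ankle_x, fps, metres_per_pixel):
    left = np.asarray(left_ankle_x, dtype=float)
    right = np.asarray(right_ankle_x, dtype=float)
    # Heuristic: each sign change of the left-right ankle gap marks one step.
    gap = left - right
    steps = int(np.sum(np.diff(np.sign(gap)) != 0))
    duration_s = (len(gap) - 1) / fps
    # Walking speed estimated from the displacement of the ankle midpoint.
    mid = (left + right) / 2
    speed = abs(mid[-1] - mid[0]) * metres_per_pixel / duration_s
    cadence = steps / duration_s * 60            # steps per minute
    return {"steps": steps, "gait_speed": speed, "cadence": cadence}

params = gait_parameters(
    left_ankle_x=[0, 10, 20, 30, 40],   # toy keypoint x-coordinates per frame
    right_ankle_x=[5, 5, 25, 25, 45],
    fps=2.0,
    metres_per_pixel=0.01,
)
```

Running the same extraction on the single-task and double-task feature maps yields the paired parameters from which the dual-task cost is computed.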
In an embodiment, the cognitive function assessment system further comprises a motion data acquisition module for:
receiving a cognitive function evaluation request, and acquiring an operation task based on the cognitive function evaluation request;
collecting multi-frame image data containing target user images in preset time length, and taking the collected multi-frame image data as single-task motion data;
outputting the operation task, acquiring multi-frame image data with preset time length based on target user images fed back by the operation task again, and taking the acquired multi-frame image data as double-task motion data;
and taking the single-task motion data and the double-task motion data as motion data.
In an embodiment, the cognitive function assessment system further comprises a memory data acquisition module for:
Acquiring a memory task based on the cognitive function assessment request;
Outputting the memorizing task, collecting multi-frame voice data containing target user voice based on feedback of the memorizing task, and taking the collected multi-frame voice data as memorizing data.
In an embodiment, the cognitive function assessment system further includes an early warning module, where the early warning module is further configured to output risk early warning information according to the cognitive assessment result, and/or output health management advice related to the cognitive assessment result.
In addition, the embodiment of the application also provides a cognitive function evaluation device, which comprises a memory, a processor, and a cognitive function evaluation program stored in the memory and executable on the processor; when executed by the processor, the cognitive function evaluation program implements the steps of the cognitive function evaluation method.
Referring now to fig. 9, a schematic diagram of a cognitive function assessment device suitable for use in implementing embodiments of the present application is shown. The cognitive function assessment apparatus in the embodiment of the present application may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (Personal Digital Assistants), tablet computers (PADs), PMPs (Portable Media Players), and the like, and fixed terminals such as digital TVs and desktop computers. The cognitive function assessment apparatus shown in fig. 9 is only one example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
As shown in fig. 9, the cognitive function assessment apparatus may include a processing system 1001 (e.g., a central processor, a graphics processor, etc.), which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 1002 or a program loaded from a storage system 1003 into a random access memory (RAM) 1004. In the RAM 1004, various programs and data required for the operation of the cognitive function assessment apparatus are also stored. The processing system 1001, the ROM 1002, and the RAM 1004 are connected to each other by a bus 1005. An input/output (I/O) interface 1006 is also connected to the bus. In general, the following systems may be connected to the I/O interface 1006: an input system 1007 including, for example, a touch screen, touchpad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; an output system 1008 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; the storage system 1003 including, for example, a magnetic tape, a hard disk, and the like; and a communication system 1009. The communication system 1009 may allow the cognitive function assessment device to communicate wirelessly or by wire with other devices to exchange data. While a cognitive function assessment device having various systems is shown in the figure, it should be understood that not all of the illustrated systems are required to be implemented or provided; more or fewer systems may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication system, installed from the storage system 1003, or installed from the ROM 1002. When the computer program is executed by the processing system 1001, the above-described functions defined in the methods of the disclosed embodiments of the application are performed.
The cognitive function evaluation device provided by the embodiment of the application can solve the technical problem of cognitive function evaluation by adopting the cognitive function evaluation method in the embodiment. Compared with the prior art, the cognitive function evaluation device provided by the application has the same beneficial effects as the cognitive function evaluation method provided by the embodiment, and other technical features in the cognitive function evaluation device are the same as the features disclosed by the method of the previous embodiment, and are not described in detail herein.
It is to be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof. In the description of the above embodiments, particular features, structures, materials, or characteristics may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative of the present application, and the present application is not limited thereto, and any person skilled in the art will readily recognize that variations or substitutions are within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
In addition, in order to achieve the above object, an embodiment of the present application also provides a readable storage medium having computer readable program instructions (i.e., a computer program) stored thereon, the computer readable program instructions being for executing the cognitive function assessment method in the above embodiment.
The computer-readable storage medium according to the embodiments of the present application may be, for example, a USB flash drive, but is not limited thereto, and may be an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system or device, or any combination of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In this embodiment, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system or device. Program code embodied on a computer-readable storage medium may be transmitted using any appropriate medium, including but not limited to: wire, fiber-optic cable, radio frequency (RF), and the like, or any suitable combination of the foregoing.
The above-described computer-readable storage medium may be contained in a cognitive function assessment device; or may be present alone without being assembled into the cognitive function assessment device.
The above computer-readable storage medium carries one or more programs that, when executed by the cognitive function assessment device, cause the cognitive function assessment device to: acquiring motion data containing multi-frame images of a target user and memory data containing voice of the target user; extracting parameters from the motion data to obtain motion information, and extracting parameters from the memory data to obtain memory information, wherein the motion information is a parameter representing the motion behavior of a target user, and the memory information is a parameter representing the memory capacity of the target user; and inputting the movement information and the memory information into a pre-trained cognitive function evaluation model, and outputting to obtain a cognitive evaluation result.
Computer program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending on the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.
The modules involved in the embodiments of the present application may be implemented in software or in hardware. In some cases, the name of a module does not constitute a limitation of the module itself.
The readable storage medium provided by the application is a computer readable storage medium, and the computer readable storage medium stores computer readable program instructions (namely computer programs) for executing the cognitive function evaluation method, so that the technical problem of cognitive function evaluation can be solved. Compared with the prior art, the beneficial effects of the computer readable storage medium provided by the application are the same as those of the cognitive function evaluation method provided by the above embodiment, and are not described in detail herein.
In addition, the embodiment of the application also provides a computer program product, which comprises a cognitive function evaluation program; when executed by a processor, the cognitive function evaluation program implements the steps of the cognitive function evaluation method.
The specific implementation of the computer program product of the present application is substantially the same as the above embodiments of the cognitive function assessment method and will not be described here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the method of the above embodiments may be implemented by means of software plus a necessary general hardware platform, and of course may also be implemented by hardware, although in many cases the former is the preferred embodiment. Based on this understanding, the technical solution of the present application, or the part of it contributing to the prior art, may be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, magnetic disk, or optical disc), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method according to the embodiments of the present application.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (8)

1. A cognitive function assessment method, characterized in that the cognitive function assessment method comprises the steps of:
Acquiring motion data containing multi-frame images of a target user and memory data containing voice of the target user, wherein the motion data comprises single-task motion data and double-task motion data, the single-task motion data is motion data when the target user freely walks in a first preset time period, and the double-task motion data is motion data when the target user simultaneously performs motion and cognitive paradigm calculation in a second preset time period;
Respectively extracting characteristics of the single-task motion data and the double-task motion data to obtain single-task motion parameters and double-task motion parameters, wherein the single-task motion parameters and the double-task motion parameters comprise one or more of the step length, the stride length, the pace speed, the stride frequency, the step length time, the stride time, the single support time and the double support time of bilateral limbs;
matching the double-task motion parameters with the single-task motion parameters according to each double-task motion parameter;
If matched single-task motion parameters matched with the double-task motion parameters exist in the single-task motion parameters, determining a parameter difference value of the matched single-task motion parameters minus the double-task motion parameters, and taking the ratio between the parameter difference value and the matched single-task motion parameters as the double-task motion cost corresponding to the double-task motion parameters;
taking one or more of the single-task motion parameter, the double-task motion parameter and the double-task motion cost as extracted motion information;
extracting parameters from the memory data to obtain memory information, wherein the memory information is a parameter representing the memory capacity of a target user;
and inputting the movement information and the memory information into a pre-trained cognitive function evaluation model, and outputting to obtain a cognitive evaluation result.
2. The method of claim 1, wherein the step of extracting parameters from the memory data to obtain memory information comprises:
performing voice recognition on the memory data to obtain a voice recognition result;
Matching the voice recognition result with a preset standard result to obtain the matching degree between the voice recognition result and the preset standard result;
And determining a memory capacity value corresponding to the matching degree based on a preset mapping relation, and taking the memory capacity value corresponding to the matching degree as the extracted memory information, wherein the preset mapping relation comprises the corresponding relation between different matching degrees and the memory capacity value.
3. The method of claim 1, wherein the step of extracting features of the single-task motion data and the double-task motion data to obtain a single-task motion parameter and a double-task motion parameter, respectively, comprises:
Inputting the single-task motion data and the double-task motion data into a preset gesture estimation model respectively, and outputting a single-task motion feature map and a double-task motion feature map marked with key points of a human body;
respectively carrying out image preprocessing on the single-task motion feature image and the double-task motion feature image, wherein the image preprocessing comprises one or more of key point correction, missing value interpolation and image denoising;
And respectively carrying out feature extraction based on the single-task motion feature map and the double-task motion feature map after image preprocessing to obtain single-task motion parameters and double-task motion parameters.
4. The method of claim 1, wherein prior to the step of obtaining motion data comprising a multi-frame image of the target user and memory data comprising speech of the target user, the method further comprises:
receiving a cognitive function evaluation request, and acquiring an operation task based on the cognitive function evaluation request;
collecting multi-frame image data containing target user images in preset time length, and taking the collected multi-frame image data as single-task motion data;
Outputting the operation task, acquiring multi-frame image data which is based on target user images and is fed back by the operation task and has preset duration again, and taking the acquired multi-frame image data as double-task motion data;
and taking the single-task motion data and the double-task motion data as motion data.
5. The method of claim 4, wherein the step of collecting multi-frame speech data comprising the target user's speech comprises:
Acquiring a memory task based on the cognitive function assessment request;
Outputting the memorizing task, collecting multi-frame voice data containing target user voice based on feedback of the memorizing task, and taking the collected multi-frame voice data as memorizing data.
6. A cognitive function assessment system, the cognitive function assessment system comprising:
The acquisition module is used for acquiring motion data comprising multi-frame images of a target user and memory data comprising voice of the target user, wherein the motion data comprises single-task motion data and double-task motion data, the single-task motion data is motion data when the target user freely walks in a first preset time period, and the double-task motion data is motion data when the target user simultaneously performs motion and cognitive paradigm calculation in a second preset time period;
the extraction module is used for extracting parameters of the memory data to obtain memory information, wherein the memory information is a parameter representing the memory capacity of a target user;
the extraction module is further configured to:
Respectively extracting features from the single-task motion data and the double-task motion data to obtain single-task motion parameters and double-task motion parameters, wherein the single-task motion parameters and the double-task motion parameters each comprise one or more of the step length, stride length, gait speed, cadence, step time, stride time, single-support time and double-support time of the bilateral limbs;
matching, for each double-task motion parameter, the double-task motion parameter against the single-task motion parameters;
if a matched single-task motion parameter corresponding to the double-task motion parameter exists among the single-task motion parameters, determining a parameter difference by subtracting the double-task motion parameter from the matched single-task motion parameter, and taking the ratio of the parameter difference to the matched single-task motion parameter as the double-task motion cost corresponding to the double-task motion parameter;
taking one or more of the single-task motion parameter, the double-task motion parameter and the double-task motion cost as the extracted motion information;
The detection module is used for inputting the movement information and the memory information into a pre-trained cognitive function evaluation model and outputting a cognitive evaluation result.
7. A cognitive function assessment device, characterized in that the cognitive function assessment device comprises: memory, a processor and a cognitive function assessment program stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the cognitive function assessment method according to any one of claims 1 to 5.
8. A readable storage medium, characterized in that the readable storage medium is a computer-readable storage medium having stored thereon a cognitive function assessment program which, when executed by a processor, implements the steps of the cognitive function assessment method according to any one of claims 1 to 5.
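The double-task motion cost defined in claim 6 is the relative decline of each gait parameter from single-task (free walking) to double-task (walking plus a cognitive calculation) conditions: (single − double) / single. A minimal sketch of that computation, assuming gait parameters are represented as name-to-value dictionaries (the parameter names and data representation are illustrative, not from the patent):

```python
def dual_task_cost(single_task: dict, dual_task: dict) -> dict:
    """For each double-task gait parameter with a matching single-task
    parameter, return (single - double) / single as the double-task cost.

    Parameters without a non-zero single-task match are skipped, as the
    claim only defines a cost when a matched parameter exists.
    """
    costs = {}
    for name, dual_value in dual_task.items():
        single_value = single_task.get(name)  # match by parameter name
        if single_value:  # matched, non-zero single-task parameter exists
            costs[name] = (single_value - dual_value) / single_value
    return costs


# Illustrative values: gait typically slows under a concurrent cognitive load.
single = {"gait_speed_mps": 1.10, "step_length_m": 0.62}
dual = {"gait_speed_mps": 0.88, "step_length_m": 0.55, "cadence_spm": 102}
print(dual_task_cost(single, dual))  # cadence has no single-task match
```

A positive cost indicates the parameter deteriorated under the dual-task condition; larger costs are the kind of signal the detection module would pass, together with the memory information, to the evaluation model.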
CN202410661827.0A 2024-05-27 2024-05-27 Cognitive function evaluation method, system, equipment and readable storage medium Active CN118236041B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410661827.0A CN118236041B (en) 2024-05-27 2024-05-27 Cognitive function evaluation method, system, equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN118236041A CN118236041A (en) 2024-06-25
CN118236041B true CN118236041B (en) 2024-08-20

Family

ID=91555000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410661827.0A Active CN118236041B (en) 2024-05-27 2024-05-27 Cognitive function evaluation method, system, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN118236041B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160930A (en) * 2021-04-08 2021-07-23 广东省人民医院 Diagnosis and rehabilitation system and method for Parkinson disease mild cognitive impairment based on VR

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US20130090562A1 (en) * 2011-10-07 2013-04-11 Baycrest Centre For Geriatric Care Methods and systems for assessing cognitive function
KR101357493B1 (en) * 2012-08-13 2014-02-04 성균관대학교산학협력단 Alzheimer's disease diagnosis apparatus and method using dual-task paradigm
US20170258390A1 (en) * 2016-02-12 2017-09-14 Newton Howard Early Detection Of Neurodegenerative Disease
CN105808970B (en) * 2016-05-09 2018-04-06 南京智精灵教育科技有限公司 A kind of online cognition appraisal procedure
SG11202008367PA (en) * 2018-03-23 2020-10-29 Panasonic Ip Man Co Ltd Cognitive function evaluation device, cognitive function evaluation system, cognitive function evaluation method, and program

Similar Documents

Publication Publication Date Title
CN110738192B (en) Auxiliary evaluation method, device, equipment, system and medium for human body movement function
AU2018250385B2 (en) Motor task analysis system and method
CN111724879B (en) Rehabilitation training evaluation processing method, device and equipment
KR101846370B1 (en) Method and program for computing bone age by deep neural network
US11763603B2 (en) Physical activity quantification and monitoring
KR20200005987A (en) System and method for diagnosing cognitive impairment using touch input
Zhao et al. Applying incremental Deep Neural Networks-based posture recognition model for ergonomics risk assessment in construction
CN114727766A (en) System for collecting and identifying skin conditions from images and expert knowledge
CN108778097A (en) Device and method for assessing heart failure
Aich et al. Design of a Machine Learning‐Assisted Wearable Accelerometer‐Based Automated System for Studying the Effect of Dopaminergic Medicine on Gait Characteristics of Parkinson’s Patients
Pérez-López et al. Assessing motor fluctuations in Parkinson’s disease patients based on a single inertial sensor
Kelly et al. Automatic prediction of health status using smartphone-derived behavior profiles
US20230090138A1 (en) Predicting subjective recovery from acute events using consumer wearables
CN113485555A (en) Medical image reading method, electronic equipment and storage medium
WO2023038992A1 (en) System and method for determining data quality for cardiovascular parameter determination
Kour et al. Sensor technology with gait as a diagnostic tool for assessment of Parkinson’s disease: a survey
Zhao et al. Motor function assessment of children with cerebral palsy using monocular video
CN118236041B (en) Cognitive function evaluation method, system, equipment and readable storage medium
US20240032820A1 (en) System and method for self-learning and reference tuning activity monitor
KR20200120365A (en) Machine learning method and system for automatically grading severity from separated actions of a parkinsonian patient video
CN115359898A (en) Pain evaluation device, pain evaluation method, storage medium, and electronic apparatus
Haberfehlner et al. A Novel Video-Based Methodology for Automated Classification of Dystonia and Choreoathetosis in Dyskinetic Cerebral Palsy During a Lower Extremity Task
Jackson et al. Computer-assisted approaches for measuring, segmenting, and analyzing functional upper extremity movement: a narrative review of the current state, limitations, and future directions
Cicirelli et al. Skeleton based human mobility assessment by using deep neural networks
US20240324907A1 (en) Systems and methods for evaluating gait

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant