CN111128369A - Method and device for evaluating Parkinson's disease condition of patient

Info

Publication number
CN111128369A
CN111128369A (application number CN201911128600.5A)
Authority
CN
China
Prior art keywords
patient
facial
facial expression
expression
parkinson
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911128600.5A
Other languages
Chinese (zh)
Inventor
吴艳楠
吴超华
张晓璐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sinovation Ventures Beijing Enterprise Management Co ltd
Original Assignee
Sinovation Ventures Beijing Enterprise Management Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sinovation Ventures Beijing Enterprise Management Co ltd filed Critical Sinovation Ventures Beijing Enterprise Management Co ltd
Priority to CN201911128600.5A
Publication of CN111128369A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20: ICT specially adapted for medical diagnosis, medical simulation or medical data mining for computer-aided diagnosis, e.g. based on medical expert systems
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

It is an object of the present application to provide a method and apparatus for assessing a patient's Parkinson's disease condition. According to one aspect of the application, a method for assessing the Parkinson's disease condition of a patient is provided: the facial expression of the patient is captured on video; a face detection and tracking algorithm is used to detect and track the face in the video; based on that detection and tracking, the patient's facial expression is represented as a vector to obtain a vectorized facial expression; and expression analysis is performed on the vectorized facial expression to assess the patient's Parkinson's disease condition. Compared with various wearable expression-measurement devices, the method requires no worn device and captures video with only a camera, sparing the patient psychological and physical burden; in addition, compared with the Parkinson's disease mask-face rating scale, the method and apparatus quantify the expression, avoiding the uncertainty of subjective assessment.

Description

Method and device for evaluating Parkinson's disease condition of patient
Technical Field
The present application relates to the field of computer technology, and more particularly to a technique for assessing a patient's Parkinson's disease condition.
Background
At present, clinical assessment of Parkinson's disease symptoms is done using rating scales, such as the Movement Disorder Society-sponsored revision of the Unified Parkinson's Disease Rating Scale (MDS-UPDRS). The physician scores and grades the patient by asking questions and instructing the patient to perform particular actions.
Taking the evaluation of facial expression (item D2.28) in the MDS-UPDRS as an example: 0, normal: normal facial expression; 1, slight: minimal mask face, shown only by decreased blinking; 2, mild: besides decreased blinking, the mask face also appears in the lower half of the face, i.e., mouth movement is reduced, such as less smiling, but the lips are not parted; 3, moderate: mask face with the lips sometimes parted; 4, severe: mask face with the lips parted most of the time.
Degree words in these criteria, such as "slight", "decreased", "sometimes", and "most of the time", are highly subjective and uncertain; even clinically experienced physicians often give different scores. If such degree words could be quantified numerically, this subjectivity and uncertainty could be effectively reduced.
Therefore, how to assess a patient's Parkinson's disease condition objectively, effectively, and accurately has become one of the problems that those skilled in the art urgently need to solve.
Disclosure of Invention
It is an object of the present application to provide a method and apparatus for assessing the Parkinson's disease condition of a patient.
According to one aspect of the present application, there is provided a method for assessing the Parkinson's disease condition of a patient, wherein the method comprises:
acquiring the facial expression of a patient through a video;
detecting and tracking the face in the video by adopting a face detection and tracking algorithm;
based on the detecting and tracking, adopting a vector to represent the facial expression of the patient to obtain a vectorized facial expression;
and performing expression analysis on the vectorized facial expression to evaluate the Parkinson's disease condition of the patient.
According to another aspect of the present application, there is also provided an apparatus for assessing the Parkinson's disease condition of a patient, wherein the apparatus comprises:
the acquisition device is used for video acquisition of the facial expression of the patient;
the detection device is used for detecting and tracking the face in the video by adopting a face detection and tracking algorithm;
vectorizing means for representing the facial expression of the patient with a vector based on the detecting and tracking to obtain a vectorized facial expression;
and the evaluation device is used for performing expression analysis on the vectorized facial expression and evaluating the Parkinson's disease condition of the patient.
According to yet another aspect of the application, there is also provided a computer readable storage medium storing computer code which, when executed, performs a method as in any one of the preceding claims.
According to yet another aspect of the application, there is also provided a computer program product, which when executed by a computer device, performs the method of any of the preceding claims.
According to yet another aspect of the present application, there is also provided a computer apparatus, including:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
The application provides a method for assessing a patient's Parkinson's disease condition: the patient's facial expression is captured on video; a face detection and tracking algorithm detects and tracks the face in the video; based on that detection and tracking, the patient's facial expression is represented as a vector to obtain a vectorized facial expression; and expression analysis is performed on the vectorized facial expression to assess the patient's Parkinson's disease condition. Compared with wearable expression-measurement devices such as electromyography (EMG), electrocardiography (ECG), and electroencephalography (EEG), the method requires no worn device and captures video with only a camera, sparing the patient psychological and physical burden; in addition, compared with the Parkinson's disease mask-face rating scale, the method and apparatus quantify the expression, avoiding the uncertainty of subjective assessment.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present application;
FIG. 2 shows a schematic diagram of an apparatus for assessing a patient's Parkinson's disease status according to one aspect of the present application;
FIG. 3 shows a flow diagram of a method for assessing a patient's Parkinson's disease condition, according to another aspect of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The term "computer device" or "computer" in this context refers to an intelligent electronic device that can execute predetermined processes such as numerical calculation and/or logic calculation by running predetermined programs or instructions, and may include a processor and a memory, wherein the processor executes a pre-stored instruction stored in the memory to execute the predetermined processes, or the predetermined processes are executed by hardware such as ASIC, FPGA, DSP, or a combination thereof. Computer devices include, but are not limited to, servers, personal computers, laptops, tablets, smart phones, and the like.
The computer equipment comprises user equipment and network equipment. Wherein the user equipment includes but is not limited to computers, smart phones, PDAs, etc.; the network device includes, but is not limited to, a single network server, a server group consisting of a plurality of network servers, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of computers or network servers, wherein Cloud Computing is one of distributed Computing, a super virtual computer consisting of a collection of loosely coupled computers. The computer equipment can be independently operated to realize the application, and can also be accessed into a network to realize the application through the interactive operation with other computer equipment in the network. The network in which the computer device is located includes, but is not limited to, the internet, a wide area network, a metropolitan area network, a local area network, a VPN network, and the like.
It should be noted that the user equipment, the network device, the network, etc. are only examples, and other existing or future computer devices or networks may also be included in the scope of the present application, if applicable, and are included by reference.
The methods discussed below, some of which are illustrated by flow diagrams, may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium. The processor(s) may perform the necessary tasks.
Specific structural and functional details disclosed herein are merely representative and are provided for purposes of describing example embodiments of the present application. This application may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element may be termed a second element, and, similarly, a second element may be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may be present. In contrast, when an element is referred to as being "directly connected" or "directly coupled" to another element, there are no intervening elements present. Other words used to describe the relationship between elements (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent to", etc.) should be interpreted in a similar manner.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
The present application is described in further detail below with reference to the attached figures.
FIG. 1 illustrates a block diagram of an exemplary computer system/server 12 suitable for use in implementing embodiments of the present application. The computer system/server 12 shown in FIG. 1 is only one example and should not be taken to limit the scope of use or the functionality of embodiments of the present application.
As shown in FIG. 1, computer system/server 12 is in the form of a general purpose computing device. The components of computer system/server 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Computer system/server 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The computer system/server 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 1, and commonly referred to as a "hard drive"). Although not shown in FIG. 1, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally perform the functions and/or methodologies of the embodiments described herein.
The computer system/server 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with the computer system/server 12, and/or with any devices (e.g., network card, modem, etc.) that enable the computer system/server 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the computer system/server 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN) and/or a public network, such as the Internet) via the network adapter 20. As shown, network adapter 20 communicates with the other modules of computer system/server 12 via bus 18. It should be appreciated that although not shown in FIG. 1, other hardware and/or software modules may be used in conjunction with the computer system/server 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing by executing programs stored in the memory 28.
For example, the memory 28 stores a computer program for executing the functions and processes of the present application; when the processing unit 16 executes the corresponding computer program, the assessment of a patient's Parkinson's disease condition provided by the present application is realized.
The specific apparatus and procedure used in the present application for assessing a patient's Parkinson's disease condition are described in detail below.
FIG. 2 shows a schematic diagram of an apparatus for assessing a patient's Parkinson's disease condition, according to one aspect of the present application.
The device 1 comprises acquisition means 201, detection means 202, vectorization means 203 and evaluation means 204.
Wherein, the acquisition device 201 video-acquires the facial expression of the patient.
Specifically, the capturing device 201 captures a video of the patient's facial expression, for example via a camera on the device 1. For example, during video capture the patient may be required to sit directly in front of the camera with the face toward it; further, the angle between the normal of the patient's face and the camera's main optical axis may be required to be less than 60 degrees. Any camera able to capture facial images may be used, including but not limited to a depth camera, a high-speed camera, a webcam, a mobile-phone camera, and the like.
The ways in which the acquisition device 201 may capture the patient's facial expression on video include, but are not limited to, capturing while the patient undergoes other Parkinson's disease assessments, for example while the doctor asks questions or asks for certain actions, or during the calibration phase of some pose-estimation systems. Capturing facial expressions in these settings keeps the expressions natural and spares the patient psychological and physical burden.
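The following is a minimal sketch of the acquisition step using OpenCV (cv2), assuming a standard webcam at device index 0; the duration, the fallback frame rate, and the function name capture_facial_video are illustrative choices rather than components defined by this application.

    import cv2

    def capture_facial_video(duration_s=10.0, device_index=0):
        """Capture frames of the patient's face from a camera for duration_s seconds."""
        cap = cv2.VideoCapture(device_index)
        if not cap.isOpened():
            raise RuntimeError("camera not available")
        fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back to 30 fps if unreported
        frames = []
        for _ in range(int(duration_s * fps)):
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame)
        cap.release()
        return frames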
It will be appreciated by those skilled in the art that the above-described camera is merely exemplary and should not be considered as limiting the present application, and that other existing or future cameras that may be used, if applicable, are also intended to be included within the scope of the present application and are hereby incorporated by reference.
It should also be understood by those skilled in the art that the above-mentioned manner of video capturing facial expressions of a patient is only an example and should not be considered as a limitation of the present application, and other manners of video capturing facial expressions of a patient that may occur now or in the future, such as may be applicable to the present application, should also be included within the scope of the present application and is incorporated by reference herein.
The detection device 202 detects and tracks the face in the video by using a face detection and tracking algorithm.
Specifically, the detection device 202 applies a face detection and tracking algorithm to the video captured by the acquisition device 201: it detects the face region in a video frame and tracks the position of that face in subsequent frames. The face detection and tracking algorithm includes, but is not limited to, deep-learning-based algorithms and conventional algorithms, and may target either a single face or multiple faces. For example, using a multi-person algorithm the detection device 202 may detect several face regions in a video frame; it may then determine a main face region among them and continue detecting and tracking only that region, or it may continue detecting and tracking all of the regions and obtain detection values for each of them separately.
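As a minimal sketch of this step, the following uses OpenCV's bundled Haar-cascade face detector and approximates tracking by re-detecting in every frame and keeping the largest (main) face region; a deep-learning detector could be substituted, as the preferred embodiment below suggests. The function name detect_main_face is an illustrative assumption.

    import cv2

    # Haar cascade shipped with OpenCV; a deep-learning detector could replace it.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_main_face(frame):
        """Return (x, y, w, h) of the largest detected face region, or None."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        # Several faces may be detected; keep the main (largest-area) region.
        return max(faces, key=lambda box: box[2] * box[3])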
In a preferred embodiment, the face detection and tracking algorithm comprises a deep learning based face detection and tracking algorithm.
Deep learning learns the intrinsic regularities and representation levels of sample data; the information obtained in this learning greatly helps in interpreting data such as text, images, and sound. Its ultimate goal is to give machines a human-like capacity for analysis and learning, able to recognize data such as text, images, and sound.
In terms of specific research content, deep learning is an umbrella term for a class of pattern-analysis methods, mainly involving three kinds of approaches:
(1) neural network systems based on convolution operations, i.e., convolutional neural networks (CNN);
(2) self-coding neural networks based on multiple layers of neurons, including autoencoders (Auto Encoder) and the sparse coding (Sparse Coding) methods that have received much attention in recent years;
(3) deep belief networks (DBN), pre-trained in the manner of a multilayer self-coding neural network and then further optimizing the network weights with discriminative information.
Through multi-layer processing, an initial low-level feature representation is gradually transformed into a high-level feature representation, after which complex learning tasks such as classification can be completed with a simple model. Deep learning can thus be understood as "feature learning" or "representation learning".
Here, the detection device 202 detects and tracks the face in the video by using a face detection and tracking algorithm based on deep learning.
It should be understood by those skilled in the art that the above-mentioned face detection and tracking algorithm is only an example, and should not be considered as a limitation of the present application, and other existing or future face detection and tracking algorithms, if applicable, are also included in the scope of the present application and are incorporated herein by reference.
The vectorization unit 203 represents the facial expression of the patient with a vector based on the detection and tracking to obtain a vectorized facial expression.
Specifically, based on the detection and tracking of the face in the video by the detection device 202, the vectorization device 203 extracts the expression of the detected and tracked face and represents the patient's facial expression with a vector. Vectorizing the facial expression may, for example, first extract several expression features from the facial expression and vectorize each of them separately, then combine them into the vectorized facial expression. For example, the vector may include codes of the movement intensity and muscle movement state of facial muscles generated by a Facial Action Coding System (FACS), and may further include other auxiliary expression features such as feature points of the patient's face, eye-movement direction, head pose angle, and the like. By vectorizing the patient's facial expression, the vectorization device 203 obtains a vectorized facial expression, which facilitates the assessment of the patient's Parkinson's disease condition.
In a preferred embodiment, the vectorization device 203 represents the patient's facial expression with a vector based on the codes of the movement intensity and muscle movement state of the facial muscles generated according to the Facial Action Coding System.
For example, with the Facial Action Coding System (FACS), the movement intensity and muscle movement state of the facial muscles of a detected face can be coded, and on that basis a vector is used to represent the patient's facial expression.
Here, the Facial Action Coding System (FACS) is a correspondence between different facial muscle actions and different expressions, established through a large number of observations and biofeedback. According to the anatomical features of the face, FACS divides the human face into a number of Action Units (AU) that are both independent and interrelated, analyzes the movement characteristics of these action units, the main regions they control, and the expressions associated with them, and provides a large number of illustrative images. FACS categorizes many real-life human expressions and is today the authoritative reference standard for the muscle movements underlying facial expressions. The vectorization device 203 represents the patient's facial expression with a vector based on the FACS codes of movement intensity and muscle movement state, thereby obtaining a vectorized facial expression.
In a preferred embodiment, the vectorization device 203 represents the patient's facial expression with a vector based on the codes of the movement intensity and muscle movement state of the facial muscles generated according to the Facial Action Coding System, in combination with auxiliary expression features of the patient;
wherein the auxiliary expressive features comprise at least any one of:
feature points of the patient's face;
a direction of eye movement of the patient;
a head pose angle of the patient.
Specifically, on top of vectorizing the patient's facial expression according to FACS, the vectorization device 203 may further incorporate one or more auxiliary expression features of the patient, including but not limited to feature points of the patient's face, the patient's eye-movement direction, the patient's head pose angle, and the like. The vectorization device 203 may vectorize these auxiliary expression features as well and represent the patient's facial expression with a vector together with the FACS codes of movement intensity and muscle movement state, thereby obtaining the vectorized facial expression.
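A minimal sketch of assembling such a vector follows, assuming the AU intensities come from some FACS-style estimator and the auxiliary features from landmark, gaze, and head-pose estimators; all argument names and shapes are illustrative assumptions.

    import numpy as np

    def expression_vector(au_intensities, landmarks, gaze_dir, head_pose):
        """Concatenate FACS action-unit intensities with auxiliary expression features.

        au_intensities: dict mapping AU id -> intensity, e.g. {"AU12": 0.8, ...}
        landmarks:      (N, 2) array of facial feature points
        gaze_dir:       length-2 eye-movement direction
        head_pose:      length-3 head pose angles (yaw, pitch, roll)
        """
        au_part = np.array([au_intensities[k] for k in sorted(au_intensities)])
        return np.concatenate([
            au_part,
            np.asarray(landmarks, dtype=float).ravel(),
            np.asarray(gaze_dir, dtype=float),
            np.asarray(head_pose, dtype=float),
        ])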
It will be understood by those skilled in the art that the above-described supplementary expressive features are merely examples and should not be construed as limiting the application, and other existing or future expressive features, as applicable, are included within the scope of the present application and are incorporated by reference herein.
The evaluation device 204 performs expression analysis on the vectorized facial expressions to evaluate the Parkinson's disease condition of the patient.
Specifically, the evaluation device 204 performs expression analysis on the vectorized facial expression obtained by the vectorization device 203. For example, the vectorization device 203 derives vector values such as the patient's number of blinks in a given period, the duration for which the lips are parted, and the amplitude of the mouth corners; the evaluation device 204 analyzes these values and thereby assesses the patient's Parkinson's disease condition. For example, the evaluation device 204 compares the vector values corresponding to the facial expression with predetermined thresholds, determines from the comparison whether the patient has Parkinson's disease and, if so, further assesses the Parkinson's disease condition of the patient.
In a preferred embodiment, the evaluation means 204 compares the vectorized facial expressions with a predetermined threshold to evaluate the patient's parkinson's condition.
For example, the vectorization device 203 may compute various indices from the output of the Facial Action Coding System (FACS), such as the patient's mean facial-muscle intensity. The evaluation device 204 compares this intensity with a first predetermined threshold; if the mean facial-muscle intensity is below the first predetermined threshold, the patient's facial expression may be defined as a mask face, that is, the patient is judged to have Parkinson's disease. Further, the evaluation device 204 may set several predetermined thresholds: for example, when the mean facial-muscle intensity is greater than a second predetermined threshold but less than a third, the patient's Parkinson's condition may be graded as mask face level one; when it is greater than or equal to the third predetermined threshold but less than a fourth, the condition may be graded as mask face level two; and so on. Those skilled in the art will appreciate that this manner of grading the mask face merely illustrates how the patient's Parkinson's disease condition may be assessed and should not be considered a limitation of the present application.
Further, the indices that the vectorization device 203 computes from the FACS output may include other facial-expression features, such as the patient's number of blinks in a given period, the duration for which the lips are parted, the mouth opening size, and the amplitude of the mouth corners; the evaluation device 204 may combine several such features to assess the patient's Parkinson's disease condition. For example, if the patient's number of blinks within a predetermined period is below a fifth predetermined threshold and the mean facial-muscle intensity is below a sixth predetermined threshold, the evaluation device 204 grades the patient's Parkinson's condition as mask face level one. Other ways of setting thresholds and assessing the corresponding Parkinson's disease condition are similar to the above and are therefore not repeated here, but are included herein by reference.
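A minimal sketch of this threshold comparison follows; the threshold values, the grade encoding, and the function name grade_mask_face are placeholder assumptions for illustration, not values fixed by this application.

    def grade_mask_face(mean_muscle_intensity, blink_count,
                        first_threshold=0.5, third_threshold=0.3, fifth_threshold=8):
        """Map vectorized expression statistics to a coarse mask-face grade."""
        if mean_muscle_intensity >= first_threshold:
            return 0  # normal facial expression
        if blink_count < fifth_threshold and mean_muscle_intensity < third_threshold:
            return 2  # mask face, level two
        return 1      # mask face, level one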
In a preferred embodiment, the device 1 further comprises a learning device (not shown) for collecting physicians' scores of the Parkinson's disease condition together with the corresponding expression vectors, and for obtaining a corresponding scoring model by machine learning; the evaluation device 204 then uses the scoring model to assess the patient's Parkinson's disease condition from the vectorized facial expression.
Specifically, the learning device collects many historical physician scores of the Parkinson's disease condition together with the expression vectors of the patients corresponding to those scores, learns from this sample data by machine learning, and generates a corresponding scoring model. Subsequently, the evaluation device 204 feeds the current patient's vectorized facial expression, obtained by the vectorization device 203, into the scoring model generated by the learning device, directly obtaining a score for the current patient's Parkinson's disease condition.
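A minimal sketch of this learning step follows, assuming scikit-learn is available: historical physician scores paired with expression vectors train a regressor that the evaluation device can then apply to a new patient's vectorized expression. The choice of a random-forest regressor is an illustrative assumption.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def fit_scoring_model(expression_vectors, physician_scores):
        """expression_vectors: (n_samples, n_features); physician_scores: (n_samples,)."""
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(np.asarray(expression_vectors), np.asarray(physician_scores))
        return model

    def score_patient(model, vectorized_expression):
        """Predict a Parkinson's condition score for one vectorized expression."""
        return float(model.predict(
            np.asarray(vectorized_expression).reshape(1, -1))[0])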
Further, the physician can compare the change in score before and after treatment to quantify the effect of the treatment. In addition, the doctor can monitor and track the progression of the Parkinson's disease condition from the quantified results, evaluate the treatment effect, and so on.
It will be understood by those skilled in the art that the above-described means for assessing a patient's parkinson's disease is by way of example only and should not be construed as limiting the present application, as other means for assessing a patient's parkinson's disease that may occur or become known in the future, as applicable to the present application, are intended to be encompassed within the scope of the present application and are incorporated herein by reference.
FIG. 3 shows a flow diagram of a method for assessing a patient's Parkinson's disease condition, according to another aspect of the present application.
In step S301, the device 1 captures a video of the patient's facial expression, for example via a camera on the device. For example, during video capture the patient may be required to sit directly in front of the camera with the face toward it; further, the angle between the normal of the patient's face and the camera's main optical axis may be required to be less than 60 degrees. Any camera able to capture facial images may be used, including but not limited to a depth camera, a high-speed camera, a webcam, a mobile-phone camera, and the like.
In step S301, the ways in which the device 1 may capture the patient's facial expression on video include, but are not limited to, capturing while the patient undergoes other Parkinson's disease assessments, for example while the doctor asks questions or asks for certain actions, or during the calibration phase of some pose-estimation systems. Capturing facial expressions in these settings keeps the expressions natural and spares the patient psychological and physical burden.
It will be appreciated by those skilled in the art that the above-described camera is merely exemplary and should not be considered as limiting the present application, and that other existing or future cameras that may be used, if applicable, are also intended to be included within the scope of the present application and are hereby incorporated by reference.
It should also be understood by those skilled in the art that the above-mentioned manner of video capturing facial expressions of a patient is only an example and should not be considered as a limitation of the present application, and other manners of video capturing facial expressions of a patient that may occur now or in the future, such as may be applicable to the present application, should also be included within the scope of the present application and is incorporated by reference herein.
In step S302, the apparatus 1 detects and tracks the face in the video by using a face detection and tracking algorithm.
Specifically, in step S302, the device 1 applies a face detection and tracking algorithm to the video captured in step S301: it detects the face region in a video frame and tracks the position of that face in subsequent frames. The face detection and tracking algorithm includes, but is not limited to, deep-learning-based algorithms and conventional algorithms, and may target either a single face or multiple faces. For example, in step S302 the device 1 may detect several face regions in a video frame using a multi-person algorithm; it may then determine a main face region among them and continue detecting and tracking only that region, or it may continue detecting and tracking all of the regions and obtain detection values for each of them separately.
In a preferred embodiment, the face detection and tracking algorithm comprises a deep learning based face detection and tracking algorithm.
Deep learning learns the intrinsic regularities and representation levels of sample data; the information obtained in this learning greatly helps in interpreting data such as text, images, and sound. Its ultimate goal is to give machines a human-like capacity for analysis and learning, able to recognize data such as text, images, and sound.
In terms of specific research content, deep learning is an umbrella term for a class of pattern-analysis methods, mainly involving three kinds of approaches:
(1) neural network systems based on convolution operations, i.e., convolutional neural networks (CNN);
(2) self-coding neural networks based on multiple layers of neurons, including autoencoders (Auto Encoder) and the sparse coding (Sparse Coding) methods that have received much attention in recent years;
(3) deep belief networks (DBN), pre-trained in the manner of a multilayer self-coding neural network and then further optimizing the network weights with discriminative information.
Through multi-layer processing, an initial low-level feature representation is gradually transformed into a high-level feature representation, after which complex learning tasks such as classification can be completed with a simple model. Deep learning can thus be understood as "feature learning" or "representation learning".
Here, in step S302, the apparatus 1 detects and tracks the face in the video by using a face detection and tracking algorithm based on deep learning.
It should be understood by those skilled in the art that the above-mentioned face detection and tracking algorithm is only an example, and should not be considered as a limitation of the present application, and other existing or future face detection and tracking algorithms, if applicable, are also included in the scope of the present application and are incorporated herein by reference.
In step S303, the apparatus 1 represents the facial expression of the patient with a vector based on the detection and tracking to obtain a vectorized facial expression.
Specifically, in step S303, based on the detection and tracking of the face in the video in step S302, the device 1 extracts the expression of the detected and tracked face and represents the patient's facial expression with a vector. Vectorizing the facial expression may, for example, first extract several expression features from the facial expression and vectorize each of them separately, then combine them into the vectorized facial expression. For example, the vector may include codes of the movement intensity and muscle movement state of facial muscles generated by a Facial Action Coding System (FACS), and may further include other auxiliary expression features such as feature points of the patient's face, eye-movement direction, head pose angle, and the like. By vectorizing the patient's facial expression in step S303, a vectorized facial expression is obtained, which facilitates the assessment of the patient's Parkinson's disease condition.
In a preferred embodiment, in step S303, the device 1 represents the patient's facial expression with a vector based on the codes of the movement intensity and muscle movement state of the facial muscles generated according to the Facial Action Coding System.
For example, with the Facial Action Coding System (FACS), the movement intensity and muscle movement state of the facial muscles of a detected face can be coded, and on that basis a vector is used to represent the patient's facial expression.
Here, the Facial Action Coding System (FACS) is a correspondence between different facial muscle actions and different expressions, established through a large number of observations and biofeedback. According to the anatomical features of the face, FACS divides the human face into a number of Action Units (AU) that are both independent and interrelated, analyzes the movement characteristics of these action units, the main regions they control, and the expressions associated with them, and provides a large number of illustrative images. FACS categorizes many real-life human expressions and is today the authoritative reference standard for the muscle movements underlying facial expressions. In step S303, the device 1 represents the patient's facial expression with a vector based on the FACS codes of movement intensity and muscle movement state, thereby obtaining a vectorized facial expression.
In a preferred embodiment, in step S303, the device 1 represents the patient's facial expression with a vector based on the codes of the movement intensity and muscle movement state of the facial muscles generated according to the Facial Action Coding System, in combination with auxiliary expression features of the patient;
wherein the auxiliary expressive features comprise at least any one of:
feature points of the patient's face;
a direction of eye movement of the patient;
a head pose angle of the patient.
Specifically, in step S303, on top of vectorizing the patient's facial expression according to FACS, the device 1 may further incorporate one or more auxiliary expression features of the patient, including but not limited to feature points of the patient's face, the patient's eye-movement direction, the patient's head pose angle, and the like. In step S303, the device 1 may vectorize these auxiliary expression features as well and represent the patient's facial expression with a vector together with the FACS codes of movement intensity and muscle movement state, thereby obtaining a vectorized facial expression.
It will be understood by those skilled in the art that the above-described supplementary expressive features are merely examples and should not be construed as limiting the application, and other existing or future expressive features, as applicable, are included within the scope of the present application and are incorporated by reference herein.
In step S304, the apparatus 1 performs expression analysis on the vectorized facial expression to evaluate the parkinson' S disease condition of the patient.
Specifically, in step S304, the device 1 performs expression analysis on the vectorized facial expression obtained in step S303. For example, in step S303 the device 1 derives vector values such as the patient's number of blinks in a given period, the duration for which the lips are parted, and the amplitude of the mouth corners; in step S304 the device 1 analyzes these values and thereby assesses the patient's Parkinson's disease condition. For example, in step S304 the device 1 compares the vector values corresponding to the facial expression with predetermined thresholds, determines from the comparison whether the patient has Parkinson's disease and, if so, further assesses the Parkinson's disease condition of the patient.
In a preferred embodiment, in step S304, the apparatus 1 compares the vectorized facial expression with a predetermined threshold value, thereby assessing the patient 'S parkinson' S disease condition.
For example, in step S303 the device 1 may compute various indices from the output of the Facial Action Coding System (FACS), such as the patient's mean facial-muscle intensity. In step S304 the device 1 compares this intensity with a first predetermined threshold; if the mean facial-muscle intensity is below the first predetermined threshold, the patient's facial expression may be defined as a mask face, that is, the patient is judged to have Parkinson's disease. Further, in step S304 the device 1 may set several predetermined thresholds: for example, when the mean facial-muscle intensity is greater than a second predetermined threshold but less than a third, the patient's Parkinson's condition may be graded as mask face level one; when it is greater than or equal to the third predetermined threshold but less than a fourth, the condition may be graded as mask face level two; and so on. Those skilled in the art will appreciate that this manner of grading the mask face merely illustrates how the patient's Parkinson's disease condition may be assessed and should not be considered a limitation of the present application.
Further, the indices that the device 1 computes in step S303 from the FACS output may include other facial-expression features, such as the patient's number of blinks in a given period, the duration for which the lips are parted, the mouth opening size, and the amplitude of the mouth corners; in step S304 the device 1 may combine several such features to assess the patient's Parkinson's disease condition. For example, if the patient's number of blinks within a predetermined period is below a fifth predetermined threshold and the mean facial-muscle intensity is below a sixth predetermined threshold, the device 1 grades the patient's Parkinson's condition as mask face level one in step S304. Other ways of setting thresholds and assessing the corresponding Parkinson's disease condition are similar to the above and are therefore not repeated here, but are included herein by reference.
In a preferred embodiment, the method further comprises a step S305 (not shown), in which the device 1 collects physicians' scores of the Parkinson's disease condition together with the corresponding expression vectors and obtains a corresponding scoring model by machine learning; in step S304, the device 1 then uses the scoring model to assess the patient's Parkinson's disease condition from the vectorized facial expression.
Specifically, in step S305, the device 1 collects many historical physician scores of the Parkinson's disease condition together with the expression vectors of the patients corresponding to those scores, learns from this sample data by machine learning, and generates a corresponding scoring model. Subsequently, in step S304, the device 1 feeds the current patient's vectorized facial expression, obtained in step S303, into the scoring model generated in step S305, directly obtaining a score for the current patient's Parkinson's disease condition.
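Tying steps S301 to S304 together, the following is a minimal end-to-end sketch that reuses the illustrative helpers sketched earlier (capture_facial_video, detect_main_face, and score_patient); the per-frame feature_extractor and the averaging of per-frame vectors are likewise assumptions for illustration, not components defined by this application.

    import numpy as np

    def assess_parkinson_condition(model, feature_extractor):
        frames = capture_facial_video(duration_s=10.0)       # step S301
        vectors = []
        for frame in frames:
            box = detect_main_face(frame)                    # step S302
            if box is None:
                continue
            vectors.append(feature_extractor(frame, box))    # step S303: vectorize
        if not vectors:
            raise ValueError("no face detected in the captured video")
        mean_vector = np.mean(vectors, axis=0)
        return score_patient(model, mean_vector)             # step S304: evaluate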
Further, the physician can compare the change in score before and after treatment to quantify the effect of the treatment. In addition, the doctor can monitor and track the progression of the Parkinson's disease condition from the quantified results, evaluate the treatment effect, and so on.
It will be understood by those skilled in the art that the above-described means for assessing a patient's parkinson's disease is by way of example only and should not be construed as limiting the present application, as other means for assessing a patient's parkinson's disease that may occur or become known in the future, as applicable to the present application, are intended to be encompassed within the scope of the present application and are incorporated herein by reference.
The present application also provides a computer readable storage medium having stored thereon computer code which, when executed, performs a method as in any one of the preceding.
The present application also provides a computer program product, which when executed by a computer device, performs the method of any of the preceding claims.
The present application further provides a computer device, comprising:
one or more processors;
a memory for storing one or more computer programs;
the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any preceding claim.
It is noted that the present application may be implemented in software and/or a combination of software and hardware, for example, the various means of the present application may be implemented using Application Specific Integrated Circuits (ASICs) or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (15)

1. A method for assessing a patient's Parkinson's disease condition, wherein the method comprises:
acquiring the facial expression of a patient through a video;
detecting and tracking the face in the video by adopting a face detection and tracking algorithm;
based on the detecting and tracking, adopting a vector to represent the facial expression of the patient to obtain a vectorized facial expression;
and performing expression analysis on the vectorized facial expression to evaluate the Parkinson's disease condition of the patient.
2. The method of claim 1, wherein the face detection and tracking algorithm comprises a deep learning based face detection and tracking algorithm.
3. The method of claim 1 or 2, wherein said representing the patient's facial expression with a vector comprises:
the facial expression of the patient is represented using vectors based on the coding of the motor intensity and the motor state of the facial muscles generated according to the facial motion coding system.
4. The method of claim 3, wherein said employing a vector to represent a facial expression of the patient further comprises:
representing the facial expression of the patient using a vector based on the coding of the movement intensity and muscle movement state of the facial muscles generated according to the facial action coding system, in combination with auxiliary expression features of the patient;
wherein the auxiliary expressive features comprise at least any one of:
feature points of the patient's face;
a direction of eye movement of the patient;
a head pose angle of the patient.
5. The method of any of claims 1-4, wherein the expression analysis of the vectorized facial expression comprises:
comparing the vectorized facial expression to a predetermined threshold to assess the patient's Parkinson's disease condition.
6. The method of any of claims 1 to 4, wherein the method further comprises:
collecting physicians' scores of the Parkinson's disease condition and the corresponding expression vectors;
utilizing machine learning to obtain a corresponding scoring model;
wherein the performing expression analysis on the vectorized facial expression comprises:
and evaluating the Parkinson's disease condition of the patient by utilizing the grading model according to the vectorized facial expression.
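A sketch of the scoring-model variant of claim 6, using scikit-learn's RandomForestRegressor as a stand-in learner trained on (expression vector, physician score) pairs, e.g. scores from the facial-expression item of the MDS-UPDRS; the claim does not prescribe any particular model family.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_scoring_model(expression_vectors, physician_scores):
    # Fit a regressor mapping vectorized expressions to clinicians' scores.
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(np.asarray(expression_vectors), np.asarray(physician_scores))
    return model

def score_patient(model, expression_vector):
    # Predict the condition score for a new vectorized facial expression.
    return float(model.predict(np.asarray(expression_vector).reshape(1, -1))[0])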
7. An apparatus for assessing a patient's Parkinson's disease condition, wherein the apparatus comprises:
acquisition means for capturing the facial expression of the patient on video;
detection means for detecting and tracking the face in the video using a face detection and tracking algorithm;
vectorization means for representing the facial expression of the patient as a vector, based on the detection and tracking, to obtain a vectorized facial expression;
and evaluation means for performing expression analysis on the vectorized facial expression to assess the Parkinson's disease condition of the patient.
8. The apparatus of claim 7, wherein the face detection and tracking algorithm comprises a deep learning based face detection and tracking algorithm.
9. The apparatus of claim 7 or 8, wherein the vectorization means is configured to:
represent the facial expression of the patient as a vector based on codings of the movement intensity and movement state of the facial muscles generated according to the Facial Action Coding System (FACS).
10. The apparatus of claim 9, wherein the vectorization means is further configured to:
represent the facial expression of the patient as a vector based on the codings of the movement intensity and movement state of the facial muscles generated according to the Facial Action Coding System, in combination with auxiliary expression features of the patient;
wherein the auxiliary expression features comprise at least one of:
feature points of the patient's face;
the direction of the patient's eye movement;
the head pose angle of the patient.
11. The apparatus of any one of claims 7 to 10, wherein the evaluation means is configured to:
compare the vectorized facial expression with a predetermined threshold to assess the patient's Parkinson's disease condition.
12. The apparatus of any one of claims 7 to 10, wherein the apparatus further comprises learning means configured to:
collect physicians' scores of Parkinson's disease conditions together with the corresponding expression vectors;
train a corresponding scoring model by machine learning;
wherein the evaluation means is configured to:
assess the Parkinson's disease condition of the patient from the vectorized facial expression using the scoring model.
13. A computer-readable storage medium storing computer code which, when executed, performs the method of any one of claims 1 to 6.
14. A computer program product which, when executed by a computer device, performs the method of any one of claims 1 to 6.
15. A computer device, wherein the computer device comprises:
one or more processors;
a memory for storing one or more computer programs;
wherein the one or more computer programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1 to 6.
CN201911128600.5A 2019-11-18 2019-11-18 Method and device for evaluating Parkinson's disease condition of patient Pending CN111128369A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911128600.5A CN111128369A (en) 2019-11-18 2019-11-18 Method and device for evaluating Parkinson's disease condition of patient

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911128600.5A CN111128369A (en) 2019-11-18 2019-11-18 Method and device for evaluating Parkinson's disease condition of patient

Publications (1)

Publication Number Publication Date
CN111128369A true CN111128369A (en) 2020-05-08

Family

ID=70495714

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911128600.5A Pending CN111128369A (en) 2019-11-18 2019-11-18 Method and device for evaluating Parkinson's disease condition of patient

Country Status (1)

Country Link
CN (1) CN111128369A (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7434176B1 * 2003-08-25 2008-10-07 Walt Froloff System and method for encoding decoding parsing and translating emotive content in electronic communication
CN104254863A * 2011-10-24 2014-12-31 President and Fellows of Harvard College Enhancing diagnosis of disorder through artificial intelligence and mobile health technologies without compromising accuracy
CN105184239A * 2015-08-27 2015-12-23 Shenyang University of Technology Ward auxiliary medical-care system and auxiliary medical-care method based on facial expression recognition of patients
US20170169177A1 * 2015-12-14 2017-06-15 The Live Network Inc Treatment intelligence and interactive presence portal for telehealth
US20190122403A1 * 2017-10-23 2019-04-25 Paypal, Inc. System and method for generating emoji mashups with machine learning
CN108305680A * 2017-11-13 2018-07-20 Chen Xiao Intelligent Parkinsonism aided-diagnosis method and device based on multi-element biological features
CN108509905A * 2018-03-30 2018-09-07 Baidu Online Network Technology (Beijing) Co., Ltd. Health state evaluation method, apparatus, electronic equipment and storage medium
CN108564042A * 2018-04-17 2018-09-21 Tan Hongchun Facial expression recognition system for patients with hepatolenticular degeneration
CN109034079A * 2018-08-01 2018-12-18 Hefei Institutes of Physical Science, Chinese Academy of Sciences Facial expression recognition method for faces in non-standard poses
CN109063714A * 2018-08-06 2018-12-21 Zhejiang University Construction method of a deep-neural-network-based video detection model for Parkinson's disease bradykinesia
CN109691983A * 2018-11-13 2019-04-30 Shenzhen Heyuan Technology Co., Ltd. Intelligent monitoring system for patients with Parkinson's disease
CN110084259A * 2019-01-10 2019-08-02 Xie Fei Facial paralysis grading comprehensive assessment system combining facial texture and optical-flow features
CN110069989A * 2019-03-15 2019-07-30 Shanghai PPDai Financial Information Service Co., Ltd. Face image processing method and device, and computer-readable storage medium
CN110428908A * 2019-07-31 2019-11-08 People's Hospital of Guangxi Zhuang Autonomous Region Artificial-intelligence-based eyelid movement function assessment system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Tang Yong, "Effects of different motor dysfunctions on the activities of daily living of patients with Parkinson's disease" *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113539430A * 2021-07-02 2021-10-22 Guangdong Provincial People's Hospital Immersive VR-based Parkinson's disease depression cognitive behavior treatment system
CN114171194A * 2021-10-20 2022-03-11 Institute of Automation, Chinese Academy of Sciences Quantitative assessment method, apparatus, electronic device and medium for multiple Parkinson's symptoms
CN116052872A * 2023-01-06 2023-05-02 Nanchang University Facial-expression-based intelligent data evaluation method and system for Parkinson's disease
CN116052872B * 2023-01-06 2023-10-13 Nanchang University Facial-expression-based intelligent data evaluation method and system for Parkinson's disease
CN117137442A * 2023-09-04 2023-12-01 Jiamusi University Parkinsonism auxiliary detection system based on biological characteristics, and machine-readable medium
CN117137442B * 2023-09-04 2024-03-29 Jiamusi University Parkinsonism auxiliary detection system based on biological characteristics, and machine-readable medium

Similar Documents

Publication Publication Date Title
Javeed et al. Wearable sensors based exertion recognition using statistical features and random forest for physical healthcare monitoring
CN111128369A (en) Method and device for evaluating Parkinson's disease condition of patient
KR102058884B1 (en) Method of analyzing iris image for diagnosing dementia in artificial intelligence
US10262196B2 (en) System and method for predicting neurological disorders
JP4860749B2 (en) Apparatus, system, and method for determining compatibility with positioning instruction in person in image
Liu et al. Vision-based method for automatic quantification of parkinsonian bradykinesia
JP7185805B2 (en) Fall risk assessment system
Avola et al. Deep temporal analysis for non-acted body affect recognition
US20220036058A1 (en) Method and Apparatus for Privacy Protected Assessment of Movement Disorder Video Recordings
CN113537005A (en) On-line examination student behavior analysis method based on attitude estimation
Loureiro et al. Using a skeleton gait energy image for pathological gait classification
Lin et al. Bradykinesia recognition in Parkinson’s disease via single RGB video
Zhang et al. A human-in-the-loop deep learning paradigm for synergic visual evaluation in children
CN115329818A (en) Multi-modal fusion attention assessment method, system and storage medium based on VR
Likitlersuang et al. Interaction detection in egocentric video: Toward a novel outcome measure for upper extremity function
Kupryjanow et al. UPDRS tests for diagnosis of Parkinson's disease employing virtual-touchpad
Hristov Real-time abnormal human activity detection using 1DCNN-LSTM for 3D skeleton data
TWI646438B (en) Emotion detection system and method
Al-Shakarchy et al. Open and closed eyes classification in different lighting conditions using new convolution neural networks architecture
CN112885435B (en) Method, device and system for determining image target area
Perreira Da Silva et al. Real-time face tracking for attention aware adaptive games
KR102616230B1 (en) Method for determining user's concentration based on user's image and operating server performing the same
KR102549558B1 (en) Ai-based emotion recognition system for emotion prediction through non-contact measurement data
Adnan et al. Unmasking Parkinson's Disease with Smile: An AI-enabled Screening Framework
Karunarathne et al. Utilizing Ensemble Learning in Detecting Parkinson's Disease with Reduced Facial Expressions and Hand-Written Drawings

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200508