CN113506628A - Device and method for determining risk of great vessel occlusion

Device and method for determining risk of great vessel occlusion

Info

Publication number
CN113506628A
Authority
CN
China
Prior art keywords
data
determining
arm
patient
evaluation value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110319823.0A
Other languages
Chinese (zh)
Inventor
郭秀海
王兴维
邰从越
蔡宇衡
李静莉
李轶飞
邓轩
华萍萍
范永超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Senyint International Digital Medical System Dalian Co ltd
Xuanwu Hospital
Original Assignee
Senyint International Digital Medical System Dalian Co ltd
Xuanwu Hospital
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Senyint International Digital Medical System Dalian Co ltd and Xuanwu Hospital
Priority claimed from application CN202110319823.0A
Published as CN113506628A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/30 - ICT specially adapted for calculating health indices; for individual health risk assessment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/60 - Analysis of geometric attributes
    • G06T7/68 - Analysis of geometric attributes of symmetry
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques specially adapted for comparison or discrimination
    • G10L25/66 - Speech or voice analysis techniques for extracting parameters related to health condition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30004 - Biomedical image processing
    • G06T2207/30041 - Eye; Retina; Ophthalmic
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30196 - Human being; Person
    • G06T2207/30201 - Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Theoretical Computer Science (AREA)
  • Epidemiology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • Quality & Reliability (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present disclosure provides a device and a method for determining the risk of large vessel occlusion, comprising: an acquisition module for acquiring at least one item of characteristic data of a patient, the characteristic data comprising at least eye movement data; a first determination module for determining an evaluation value corresponding to each item of characteristic data; and a second determination module for determining the patient's risk level of large vessel occlusion based on all the evaluation values. The risk of large vessel occlusion in a suspected stroke patient is identified from the eyeball-gaze dimension together with other dimensions such as facial movement, arm movement, and speech recognition; evaluation values are assigned for the different conditions and risk is graded on the FAST-ED scale. The device thereby identifies whether a risk of large vessel occlusion exists and recommends an appropriate stroke center for treatment, shortening the patient's transfer time, raising the treatment rate of stroke with large vessel occlusion, and reducing mortality and disability among stroke patients.

Description

Device and method for determining risk of great vessel occlusion
Technical Field
The present disclosure relates to the field of medical device control, and in particular, to a device and a method for determining a risk of macrovascular occlusion.
Background
Stroke is one of the leading causes of death and disability worldwide and the leading cause of death in China. Its incidence in China is rising year by year: national stroke screening data show that the incidence of first-ever stroke among residents aged 40 to 74 rose from 189 per 100,000 in 2002 to 379 per 100,000 in 2013, with new cases increasing by an average of 8.3% per year; ischemic stroke accounts for about 70% of the total. Although intravenous thrombolysis is an effective treatment for acute ischemic stroke, in acute large vessel occlusive ischemic stroke its recanalization rate is low and its efficacy poor, so endovascular methods such as intra-arterial thrombolysis, mechanical clot disruption, stent implantation, and mechanical thrombectomy are needed to open the occluded vessel.
Endovascular treatment must be performed at an advanced stroke center qualified to provide it; a patient at an ordinary primary stroke center, which cannot perform endovascular treatment, must be transferred to an advanced stroke center, delaying treatment. Evaluating suspected stroke patients for large vessel occlusion during pre-hospital transport is therefore highly significant. Consistent with this, reports from 118 agencies in the United States show that mortality is lower for patients transported directly to advanced stroke centers than for patients routed through a primary stroke center first.
The prior art has the following problems. During emergency ambulance rescue and transfer, patients are assessed only for whether they have had a stroke; large vessel occlusion is not further assessed, so accurate triage is impossible, and stroke patients with large vessel occlusion may be sent to a primary stroke center that lacks endovascular treatment, prolonging the time to rescue. In addition, pre-hospital emergency time is short, large-vessel-occlusion assessment involves many items, the skill of pre-hospital responders is uneven, and assessment results therefore vary subjectively; existing intelligent FAST assessment methods only identify whether a stroke has occurred and do not further assess large vessel occlusion, so the risk of large vessel occlusion cannot be identified accurately before hospital arrival.
Disclosure of Invention
An object of the embodiments of the present disclosure is to provide a device for determining the risk of large vessel occlusion, so as to solve the prior-art problem that this risk is not accurately identified before hospital arrival.
To solve this technical problem, the embodiments of the present disclosure adopt the following technical solution: a device for determining the risk of occlusion of a large blood vessel, comprising: an acquisition module for acquiring at least one item of characteristic data of a patient, the characteristic data comprising at least eye movement data; a first determination module for determining an evaluation value corresponding to each item of characteristic data; and a second determination module for determining the patient's risk level of large vessel occlusion based on all the evaluation values.
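The patent describes the device only at the module level; the coupling of the three modules can be sketched in Python roughly as follows. All class and key names are hypothetical, and the per-dimension scorers are stand-ins for the determination units described below:

```python
class AcquisitionModule:
    """Acquires at least one item of characteristic data of a patient."""
    def acquire(self, patient):
        # A real device would record camera/sensor streams; here the data
        # is simply read from a pre-filled record.
        return patient["feature_data"]

class FirstDeterminationModule:
    """Determines an evaluation value for each item of characteristic data."""
    def __init__(self, scorers):
        # scorers maps a data key (e.g. "eye_movement") to a scoring unit.
        self.scorers = scorers

    def evaluate(self, feature_data):
        return {key: self.scorers[key](data) for key, data in feature_data.items()}

class SecondDeterminationModule:
    """Determines the risk level from all evaluation values (FAST-ED style)."""
    def risk_level(self, evaluations, threshold=4):
        total = sum(evaluations.values())
        return "high" if total >= threshold else "low"
```

With stub scorers, a patient whose evaluation values sum to 4 or more would be flagged as high risk, mirroring the FAST-ED threshold discussed later in the description.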
In some embodiments, the first determining module comprises: a first determination unit for determining, when the feature data is eye movement data, the eye regions, the base coordinates of the pupils, and the eye midline position coordinates; a first acquisition unit for acquiring pupil movement data based on the eye regions, the base coordinates of the pupils, and the eye midline position coordinates; and a second determination unit for determining a first evaluation value corresponding to the eye movement data based on the pupil movement data.
In some embodiments, the base coordinates of the pupil and the eye midline location coordinates are determined in the eye region by a pupil detection algorithm.
In some embodiments, the first determining module comprises: a third determination unit for determining a face center line based on the face image data, in a case where the feature data includes face image data; a fourth determination unit for determining facial symmetry, during performance of a predetermined facial action, based on the facial monitoring points and the face center line; and a fifth determination unit for determining a second evaluation value based on the facial symmetry.
In some embodiments, the facial monitoring points include at least one of the eye corners, eyelids, eyebrow centers, eyebrow tips, philtrum, mouth corners, and teeth.
In some embodiments, the first determining module comprises: a sixth determination unit configured to determine an arm strength value and an arm symmetry based on the first arm data if the feature data includes the first arm data; a seventh determining unit for determining a third evaluation value based on the arm strength value and the arm symmetry.
In some embodiments, the arm strength value is determined by whether the position coordinates of each upper-limb identification point remain consistent over a predetermined time, and the arm symmetry is determined by whether the position coordinates of corresponding identification points on the two arms remain symmetric over the predetermined time.
In some embodiments, the first determining module comprises: a second acquisition unit configured to acquire arm movement data within a predetermined time based on second arm data when the feature data includes the second arm data; an eighth determination unit for determining a fourth evaluation value based on the arm motion data.
In some embodiments, the second arm data is acquired by an acceleration sensor and a gyroscope sensor.
In some embodiments, the first determining module comprises: a ninth determination unit for determining a cognitive accuracy and/or a voice matching degree, in a case where the feature data includes language-cognition data; and a tenth determination unit for determining a fifth evaluation value based on the cognitive accuracy and/or the voice matching degree.
The embodiments of the present disclosure provide a rapid, objective, and intelligent device for determining the risk of large vessel occlusion. The risk in a suspected stroke patient is identified from the eyeball-gaze dimension together with dimensions such as facial movement, arm movement, and speech recognition; evaluation values are assigned for the different conditions according to the FAST-ED scale, and risk is graded based on those values. The device thereby identifies whether a risk of large vessel occlusion exists, recommends an appropriate stroke center for treatment, shortens the patient's transfer time, raises the treatment rate of stroke with large vessel occlusion, and reduces mortality and disability among stroke patients.
Drawings
To illustrate the embodiments of the present disclosure or the prior-art solutions more clearly, the drawings needed for describing them are briefly introduced below. The drawings in the following description are only some of the embodiments described in the present disclosure; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram illustrating the acquisition of eye movement data in a determination device according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram illustrating the acquisition of eye movement data in a determination device according to an embodiment of the present disclosure;
FIG. 3 is a schematic view of an analysis of eye movement in a determination device according to an embodiment of the disclosure;
FIG. 4 is a schematic view of an analysis of eye movement in a determination device according to an embodiment of the disclosure;
FIG. 5 is a schematic view of an analysis of eye movement in a determination device according to an embodiment of the disclosure;
fig. 6 is a schematic diagram of acquisition of facial image data in a determination device according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram illustrating the acquisition of first arm data in a determination device according to an embodiment of the disclosure;
FIG. 8 is a schematic diagram illustrating the acquisition of first arm data in a determination device according to an embodiment of the disclosure;
fig. 9 is a schematic diagram of acquisition of second arm data in a determination device according to an embodiment of the disclosure;
fig. 10 is a schematic diagram of acquiring language-aware data in the determination device according to the embodiment of the present disclosure.
Detailed Description
Various aspects and features of the disclosure are described herein with reference to the drawings.
It will be understood that various modifications may be made to the embodiments of the present disclosure. Accordingly, the foregoing description should not be construed as limiting, but merely as exemplifying embodiments. Other modifications will occur to those skilled in the art within the scope and spirit of the disclosure.
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with a general description of the disclosure given above, and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the present disclosure will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the attached drawings.
It should also be understood that, although the present disclosure has been described with reference to some specific examples, a person of skill in the art shall certainly be able to achieve many other equivalent forms of the disclosure, having the characteristics as set forth in the claims and hence all coming within the field of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when taken in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described hereinafter with reference to the accompanying drawings; however, it is to be understood that the disclosed embodiments are merely examples of the disclosure, which may be embodied in various forms. Well-known and/or repeated functions and structures have not been described in detail so as not to obscure the present disclosure with unnecessary detail. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for the claims and as a representative basis for teaching one skilled in the art to variously employ the present disclosure in virtually any appropriately detailed structure.
The specification may use the phrases "in one embodiment," "in another embodiment," "in yet another embodiment," or "in other embodiments," which may each refer to one or more of the same or different embodiments in accordance with the disclosure.
The disclosed embodiments relate to a device for determining the risk of large vessel occlusion, enabling pre-hospital assessment of stroke with large vessel occlusion by the patient or a family member. The core of the device is the Field Assessment Stroke Triage for Emergency Destination (FAST-ED) scale: the patient's risk level of large vessel occlusion is determined from the value of the FAST-ED score. Research comparing FAST-ED >= 4 with other scales, RACE (Rapid Arterial oCclusion Evaluation) >= 5 and CPSS (Cincinnati Prehospital Stroke Scale) >= 2, reports sensitivities of 0.60, 0.55, and 0.56; specificities of 0.89, 0.87, and 0.85; positive predictive values of 0.72, 0.68, and 0.65; and negative predictive values of 0.82, 0.79, and 0.78, respectively. In particular, FAST-ED is a simple scale: on the basis of the classic FAST score, adding the eyeball-gaze factor, a sign of cortical injury, markedly improves the accuracy of assessing the risk of large vessel occlusion. When FAST-ED >= 4, its sensitivity for large vessel occlusion is 0.60 and its specificity 0.89, so emergency medical professionals can use the scale to identify and determine the patient's risk level of large vessel occlusion, allowing rapid triage and accurate transport.
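As a concrete illustration of how a FAST-ED score maps to a risk decision, the following sketch uses the item maxima of the published FAST-ED scale (facial palsy 0-1; arm weakness, speech changes, eye deviation, and denial/neglect 0-2 each) and the >= 4 threshold cited above. The function names are hypothetical, not taken from the patent:

```python
# Item maxima of the published FAST-ED scale (not enumerated in the patent).
FAST_ED_MAX = {"facial_palsy": 1, "arm_weakness": 2, "speech_changes": 2,
               "eye_deviation": 2, "denial_neglect": 2}

def fast_ed_total(items):
    """Sum the per-item scores after validating each against its maximum."""
    for name, score in items.items():
        if not 0 <= score <= FAST_ED_MAX[name]:
            raise ValueError(f"{name} score {score} out of range")
    return sum(items.values())

def lvo_suspected(items, threshold=4):
    # At threshold >= 4 the cited sensitivity/specificity are 0.60 / 0.89.
    return fast_ed_total(items) >= threshold
```

For example, gaze deviation (2) plus severe arm weakness (2) alone reaches the threshold, which is consistent with the scale's emphasis on the gaze factor.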
Since most pre-hospital emergency personnel lack neurology specialization, assessing large vessel occlusion is difficult for them. To address this technical problem, the disclosed embodiments capture video of the patient's face and arms and recordings of the patient's voice, intelligently identify the corresponding images, score them along the different dimensions, and finally output the FAST-ED score, i.e. the final evaluation value of the risk level of large vessel occlusion, helping pre-hospital personnel evaluate and predict large vessel occlusion conveniently and quickly. The embodiments can be implemented on a mobile terminal such as a mobile phone, or on dedicated test equipment.
The device for determining the risk of large vessel occlusion according to the embodiments of the present disclosure comprises an acquisition module, a first determination module, and a second determination module, coupled to each other. Specifically:
An acquisition module for acquiring at least one item of characteristic data of a patient, the characteristic data comprising at least eye movement data.
To determine a patient's risk of large vessel occlusion, at least one item of characteristic data of the patient is first acquired by the acquisition module. The characteristic data relates to the patient's own condition and responses to the outside world, and comprises at least eye movement data. It should be noted that when the FAST-ED scale is used to evaluate the risk level, the degree of eye movement in a stroke patient often reflects the severity of disease, and in particular the risk of large vessel occlusion, so eye movement data markedly improves the accuracy of the assessment. The characteristic data may of course also include other physiological or motor data, such as face image data, arm data, and speech recognition data.
Each item of characteristic data may be acquired in a manner appropriate to it. For example, to acquire eye movement data, a mobile phone may be placed at a certain distance, for example 30 cm to 70 cm, directly in front of the patient's eyes. The patient keeps the head still while the phone is moved, as shown in Fig. 1, so that the eyes follow the phone's movement, and the phone's camera records video of the eye movement. This video constitutes the patient's eye movement data. Other characteristic data are acquired in correspondingly different ways.
A first determination module for determining an evaluation value corresponding to the feature data based on the feature data.
After the feature data is acquired by the acquisition module, the first determination module determines an evaluation value corresponding to each item of feature data. The evaluation value quantifies the patient's risk of large vessel occlusion along that dimension, for use in the subsequent overall evaluation. Specifically, the module includes the following units:
a first determination unit configured to determine, based on the eye movement data, eye area, base coordinates of a pupil, and eye midline position coordinates, when the feature data is eye movement data.
With the first determination unit, when the acquired feature data is eye movement data, the patient's eye regions and the base coordinates of the pupils are first determined. Specifically, the video captured by the phone's camera is decomposed into image frames. As shown in Fig. 2, with the patient's head held still, four coordinates are detected in a frame at the four positions of the corners of both eyes, and the two eye regions are determined from these four coordinates. Once the eye regions are determined, a deep-learning-based pupil detection algorithm is run to determine the base coordinates of both pupils and the eye midline position coordinates of both eyes. The pupil detection algorithm may be any prior-art algorithm and is not detailed here.
A first acquisition unit for acquiring pupil movement data based on the eye regions, the base coordinates of the pupils, and the eye midline position coordinates.
After the eye regions, the pupil base coordinates, and the eye midline position coordinates are determined by the first determination unit, the phone is moved so that the patient's eyes follow its direction of motion, producing real-time video of the eye movement, and the pupil movement data is extracted from this video based on the eye regions, the pupil base coordinates, and the eye midline position coordinates.
Specifically, in the acquired video the phone is first moved to the left, and it is determined whether the pupils of both eyes follow the movement simultaneously. The real-time position coordinates of the pupils are acquired, and from them the pupil movement data is derived based on the eye regions, the pupil base coordinates, and the eye midline position coordinates. In particular, the movement data includes, for example: s1, the pupil's initial position coordinate; s2, the pupil's end-of-movement position coordinate; l, the horizontal distance between the two eye corners, i.e. the eye length; and f, the pupil movement amplitude, f = |s2 - s1| / l. It may also include other data or information, such as whether the pupil position crosses the eye midline position coordinate and whether it reaches the eye-corner position coordinate. The phone is then moved to the right, it is judged whether the pupils of both eyes follow, and the same movement data is acquired for that direction, again including whether the pupil crosses the eye midline and whether it reaches the eye corner. Note that only horizontal movement is evaluated; movement in other directions is not. In short, it is judged whether the pupils of both eyes move consistently with the phone's direction of motion and move to the corresponding eye corner.
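The amplitude formula above can be written directly. A small sketch, using horizontal pixel coordinates and hypothetical function names:

```python
def pupil_movement_amplitude(s1, s2, corner_left_x, corner_right_x):
    """f = |s2 - s1| / l, where l is the horizontal distance between the two
    eye corners (the eye length). Only horizontal motion is evaluated."""
    l = abs(corner_right_x - corner_left_x)
    return abs(s2 - s1) / l

def crosses_midline(f):
    # Per the description, crossing the eye midline position coordinate
    # corresponds to an amplitude greater than 50% of the eye length.
    return f > 0.5
```

For a pupil that starts at x = 12 and ends at x = 30 in an eye spanning x = 10 to x = 40, the amplitude is 18/30 = 0.6, i.e. the pupil crosses the midline.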
A second determination unit for determining a first evaluation value corresponding to the eye movement data based on the movement data of the pupil.
After the pupil movement data is acquired by the above acquisition unit, the evaluation value corresponding to the eye movement data is determined from it. For example, as shown in Fig. 3, if the pupils of both eyes can move right or left simultaneously following the phone's direction of motion, cross the eye midline position coordinates (i.e. f > 50% for both eyes), and move to the limit of the eye region (i.e. reach the eye-corner position), the gaze is considered normal and the evaluation value is set to 0 points. As shown in Fig. 4, the evaluation value is set to 1 point in cases including: (1) the pupils of both eyes can move simultaneously only to one side, e.g. to the right but not to the left; (2) the pupils of both eyes move insufficiently, e.g. they cross the eye midline position coordinates (f > 50% for both eyes) but cannot move to the eye-corner position; or (3) the pupil of one eye can move but the pupil of the other cannot follow. As shown in Fig. 5, the evaluation value is set to 2 points in cases including: (1) the pupils of both eyes are completely motionless, i.e. f = 0; or (2) the pupils move only slightly and cannot cross the eye midline position coordinate, i.e. f < 50%.
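One simplified reading of these scoring rules can be sketched as follows, with a hypothetical encoding of the per-eye, per-direction motion data (amplitude f and a reached-the-corner flag); the patent does not prescribe this data layout:

```python
def gaze_score(eyes):
    """eyes: one dict per eye, each mapping a target direction ('left'/'right')
    to a pair (f, reached_corner), where f is the pupil movement amplitude.
    Returns the gaze evaluation value 0, 1, or 2."""
    fs = [f for eye in eyes for (f, _reached) in eye.values()]
    reached = [r for eye in eyes for (_f, r) in eye.values()]
    if all(f == 0 for f in fs) or all(f < 0.5 for f in fs):
        return 2   # motionless, or too little movement to cross the midline
    if all(f > 0.5 for f in fs) and all(reached):
        return 0   # normal: both pupils cross the midline and reach the corners
    return 1       # partial: one side only, insufficient range, or one eye lags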
In another embodiment, the acquired feature data may further include face image data. In the case where the feature data includes face image data, the first determination module further includes:
a third determination unit configured to determine a face center line based on the face image data, in a case where the feature data includes the face image data.
With the third determination unit, the face center line is determined based on the face image data. When acquired by a mobile phone, the face image data may be a facial image captured in real time, or facial video captured over a predetermined time, by the phone's camera, as shown in Fig. 6. The face center line is determined from this image or video data.
A fourth determination unit for determining facial symmetry, during performance of a predetermined facial action, based on the facial monitoring points and the face center line.
After the face center line is determined by the third determination unit, the fourth determination unit determines facial symmetry during performance of a predetermined facial action, based on the facial monitoring points and the face center line. First, the position coordinates of the symmetric facial monitoring points to be tracked are determined; the monitoring points may include the eye corners, eyelids, eyebrow tips, philtrum, mouth corners, teeth, and so on, and the predetermined facial action may be smiling, showing the teeth, and the like. While the action is performed, the position coordinates of the monitoring points are acquired and the degree of symmetry of the two sides of the face about the center line is measured, yielding the facial symmetry. In particular, the symmetry is computed from the position coordinates of the monitoring points within the facial triangle (the region between the eyes and the mouth).
A fifth determination unit for determining a second evaluation value based on the face symmetry.
After the face symmetry is determined by the fourth determination unit, the fifth determination unit may determine a second evaluation value from it. For example, when the face symmetry is 90% or greater, the face is considered substantially normal, defined as normal, and the second evaluation value is set to 0 points; when the face symmetry is below 90%, as in cases of unilateral facial spasm or obvious asymmetry, the second evaluation value is set to 1 point.
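As a minimal sketch of how such a symmetry measure and the second evaluation value might be computed from paired landmarks, the following may be considered; the midline construction, the distance-based penalty, and the scale parameter are illustrative assumptions, not the patent's exact algorithm (only the 90% threshold is quoted from the description above):

```python
# Hypothetical sketch: face symmetry from paired (left, right) landmarks,
# each a 2-D pixel coordinate captured while the patient performs the
# predetermined action (e.g. smiling, showing the teeth).

def face_midline_x(landmark_pairs):
    """Approximate the vertical face midline as the mean x of all pair midpoints."""
    xs = [(left[0] + right[0]) / 2.0 for left, right in landmark_pairs]
    return sum(xs) / len(xs)

def face_symmetry(landmark_pairs, scale=100.0):
    """Return a symmetry score in [0, 1]; penalise unequal horizontal distance
    to the midline and vertical droop between a left/right landmark pair.
    `scale` (pixels) is an assumed normalisation constant."""
    mid_x = face_midline_x(landmark_pairs)
    scores = []
    for left, right in landmark_pairs:
        dx = abs(abs(left[0] - mid_x) - abs(right[0] - mid_x))
        dy = abs(left[1] - right[1])
        scores.append(max(0.0, 1.0 - (dx + dy) / scale))
    return sum(scores) / len(scores)

def second_evaluation_value(symmetry, threshold=0.90):
    """0 points when symmetry >= 90% (normal), else 1 point, as quoted above."""
    return 0 if symmetry >= threshold else 1

# Example: eye corners and mouth corners, right mouth corner drooping slightly.
pairs = [((40, 120), (80, 124)), ((35, 60), (85, 60))]
score = second_evaluation_value(face_symmetry(pairs))
```

A pronounced unilateral droop lowers the per-pair score through the `dy` term, pushing the overall symmetry below the 90% cut-off.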
In another embodiment, when acquiring at least one item of feature data of a patient, the feature data may further include first arm data. Where the feature data includes first arm data, the first determination module for determining an evaluation value corresponding to the feature data further includes:
a sixth determining unit for determining an arm strength value and an arm symmetry based on the first arm data if the feature data includes the first arm data.
The sixth determination unit determines an arm strength value and an arm symmetry from the first arm data. When the first arm data is collected by a mobile phone, the patient's arms are made to perform a predetermined motion, as shown in fig. 7 (for example, raising both hands horizontally and lowering them in turn), and, as shown in fig. 8, the position information of the upper-limb identification points at key parts of both upper limbs is collected over a predetermined time. The identification points include, but are not limited to, the major joints such as the shoulders, elbows, and wrists, special positions such as the fingertips, and the highest and lowest positions of these parts. The arm strength value and arm symmetry of the patient are then determined from the positions of the identification points over the predetermined time: for example, the arm strength value may be determined from whether the position coordinates of the same identification point remain consistent over the predetermined time, and the arm symmetry from whether the position coordinates of mirrored identification points on the two arms remain symmetric over that time.
A seventh determining unit for determining a third evaluation value based on the arm strength value and the arm symmetry.
After the arm strength value and the arm symmetry are determined by the sixth determination unit, a third evaluation value may be determined from them. For example: if the patient's arm strength value is greater than 80 (no upper-limb weakness), the arm symmetry of the mirrored upper-limb identification points is 90% or greater, and the symmetry does not change within 10 seconds, the third evaluation value is set to 0 points. If the arm strength value is between 40 and 80 (moderate weakness), if the arm symmetry of the mirrored identification points is below 90%, or if the coordinate difference between mirrored identification points on the two arms keeps increasing over the predetermined time, the third evaluation value is set to 1 point. If the arm strength value is below 40 (severe weakness, with one upper limb unable to move at all), the third evaluation value is set to 2 points.
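The scoring just described can be sketched as follows; the cut-offs (strength above 80, between 40 and 80, below 40; symmetry at 90%) are the ones quoted above, while the function signature and the way the raw strength and symmetry values would be derived from the identification-point coordinates are illustrative assumptions:

```python
# Hypothetical sketch of the seventh determination unit's mapping from
# arm strength (0-100) and arm symmetry (0-1) to the third evaluation value.

def third_evaluation_value(arm_strength, arm_symmetry, drift_increasing=False):
    """Return 0, 1, or 2 points using the thresholds quoted in the description.
    `drift_increasing` stands for the coordinate difference between mirrored
    identification points growing over the predetermined time (an assumption
    about how that condition would be passed in)."""
    if arm_strength < 40:
        return 2  # severe weakness: one upper limb cannot move at all
    if arm_strength <= 80 or arm_symmetry < 0.90 or drift_increasing:
        return 1  # moderate weakness, asymmetry, or increasing drift
    return 0      # no upper-limb weakness
```

A caller would first reduce the tracked identification-point trajectories to a strength value and a symmetry ratio, then apply this mapping.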
In another embodiment, when acquiring at least one item of feature data of a patient, the feature data may further include second arm data. Where the feature data includes second arm data, the first determination module for determining the evaluation value corresponding to the feature data further includes:
a second acquisition unit for acquiring arm motion data within a predetermined time based on second arm data in a case where the feature data includes the second arm data.
The second acquisition unit works as follows. When acquiring the second arm data, the patient holds the mobile phone and performs a predetermined arm motion, as shown in fig. 9. The baseline acceleration data of the phone in the initial state is read from the phone's acceleration sensor and gyroscope sensor and denoted αx, αy, αz, which may be recorded as, for example, 0, 0, 0. Arm motion data of the predetermined motion is then collected for a predetermined time after the initial state, for example 20 seconds: the maximum values of αx, αy, αz during the first 10 seconds may be recorded as Maxαx, Maxαy, Maxαz, and the angular-velocity values during the last 10 seconds, while the arm is slowly lowered, may be recorded as ωαx, ωαy, ωαz, with the three-axis angular velocities given by ωαx = (Maxαx - αx)/10, ωαy = (Maxαy - αy)/10, ωαz = (Maxαz - αz)/10.
When actually acquiring the first arm data or the second arm data, the following test steps may be adopted:
(1) The patient holds the mobile phone in one hand (for example, the left hand); the patient may be standing or lying down, and taps to start the evaluation.
(2) Holding the phone in that hand, and following the voice prompt, the arm is slowly (for example, at a constant speed) raised to a horizontal position within 10 seconds.
(3) Following the voice prompt, the arm is slowly (at a constant speed) lowered over the next 10 seconds.
(4) When the test with one hand is finished, steps (1) to (3) are repeated with the other hand for a second evaluation.
An eighth determination unit for determining a fourth evaluation value based on the arm motion data.
After the arm motion data within the predetermined time is acquired by the second acquisition unit, a fourth evaluation value may be determined by the eighth determination unit from the arm motion data. Specifically:
(1) if (|Maxαx - αx| ≥ 60° or |Maxαy - αy| ≥ 60° or |Maxαz - αz| ≥ 60°) and (ωαx ≤ 15°/s or ωαy ≤ 15°/s or ωαz ≤ 15°/s), it is defined that no upper-limb weakness occurs, and the fourth evaluation value is set to 0 points;
(2) if (30° ≤ |Maxαx - αx| < 60° or 30° ≤ |Maxαy - αy| < 60° or 30° ≤ |Maxαz - αz| < 60°) and (ωαx > 15°/s or ωαy > 15°/s or ωαz > 15°/s), moderate weakness is defined, and the fourth evaluation value is set to 1 point;
(3) if |Maxαx - αx| < 30° and |Maxαy - αy| < 30° and |Maxαz - αz| < 30°, severe weakness is defined, and the fourth evaluation value is set to 2 points.
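Under that reading of the three conditions, the eighth determination unit's scoring can be sketched as follows; the function and variable names are illustrative assumptions, as is the fallback branch for sensor readings that satisfy none of the three quoted conditions:

```python
# Hypothetical sketch of the fourth evaluation value from the phone-held arm
# test.  max_angles holds |Max_ax - ax|, |Max_ay - ay|, |Max_az - az| in
# degrees for the 10-second lift phase; omegas holds the angular-velocity
# values (deg/s) for the 10-second lowering phase.

def fourth_evaluation_value(max_angles, omegas):
    reached_60 = any(a >= 60 for a in max_angles)
    reached_30_to_60 = any(30 <= a < 60 for a in max_angles)
    controlled_drop = any(w <= 15 for w in omegas)
    fast_drop = any(w > 15 for w in omegas)
    if reached_60 and controlled_drop:
        return 0  # full lift, controlled lowering: no upper-limb weakness
    if reached_30_to_60 and fast_drop:
        return 1  # partial lift, uncontrolled drop: moderate weakness
    if all(a < 30 for a in max_angles):
        return 2  # arm barely lifted: severe weakness
    return 1      # fallback for mixed readings (assumption of this sketch)
```

Per the description above, when both arms are tested the worse (higher) of the two per-arm scores would be taken.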
Of course, the fourth evaluation value may be obtained from either arm of the patient alone, or after testing the left and right arms separately; when both arms are tested, the higher score is taken. For example, if one arm shows severe weakness and the other moderate weakness (say, the left arm severely weak and the right arm moderately weak), the fourth evaluation value is set to 2 points.
In another embodiment, when acquiring at least one item of feature data of a patient, the feature data may further include language-cognition data. Where the feature data includes language-cognition data, the first determination module for determining the evaluation value corresponding to the feature data further includes:
a ninth determining unit for determining a cognitive accuracy and/or a voice matching degree based on the language-awareness data, in a case where the feature data includes the language-awareness data.
The ninth determination unit determines a cognitive accuracy and a speech matching degree from the language-cognition data. When the data is collected by a mobile phone, as shown in fig. 10, this may be done by displaying a predetermined picture on the phone, obtaining the patient's spoken response to it, and determining from that response how accurately the patient recognises the picture; the cognitive accuracy covers both whether the content of the picture is correctly identified and whether the pronunciation is close to the name of the pictured object, i.e. a pronunciation similarity is obtained. In addition, a predetermined sentence is played by the phone, and the patient's reading of it is captured through the phone's voice-receiving device. The patient's speech is then recognised and compared with the preset sentence, judging word accuracy (hesitations on words and pauses within the sentence relative to the played sentence) as well as the fluency and speed of the language. For example, a preset picture of a common object is shown to the patient, who is asked to identify it and speak the answer, and a preset text is played by the phone's voice, for example: "XXXXXX, XXXXXX".
A tenth determining unit for determining a fifth evaluation value based on the cognitive accuracy and/or the voice matching degree.
After the cognitive accuracy and/or the speech matching degree are determined by the ninth determination unit, a fifth evaluation value may be determined from them. For the cognitive accuracy, for example: if the patient can accurately recognise 2 or more items and the pronunciation similarity is 75% or greater, the patient's language-cognitive ability is considered normal and the fifth evaluation value is set to 0 points; otherwise, for example when fewer than 2 items are recognised and the pronunciation similarity is below 75%, speech is defined as abnormal and the fifth evaluation value is set to 1 point.
For the speech matching degree, which here relies mainly on semantic recognition: if the match between the patient's voice recording and the standard speech is 90% or greater, language perception is defined as normal and the fifth evaluation value is set to 0 points; if the match is below 90%, language perception is defined as abnormal and the fifth evaluation value is set to 1 point.
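Once the recording has been transcribed, a speech matching degree of this kind could be approximated by a plain string-similarity ratio; the use of `difflib` here is an illustrative assumption, not the patent's recognition method, and only the 90% cut-off is quoted from the description:

```python
import difflib

def speech_match_degree(transcript, reference):
    """Similarity in [0, 1] between the recognised reading and the preset sentence."""
    return difflib.SequenceMatcher(None, transcript, reference).ratio()

def fifth_evaluation_value(transcript, reference, threshold=0.90):
    """0 points when the match is >= 90% (language perception normal), else 1 point."""
    return 0 if speech_match_degree(transcript, reference) >= threshold else 1
```

A production system would compare recognised phonemes or semantics rather than raw characters, but the thresholding step would look the same.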
It should be noted that different items of feature data may be obtained through different modes or different feedback behaviours of the patient, and that several items of feature data may also be obtained from one and the same mode or feedback behaviour; for example, video data may yield not only eye movement data but also facial image data or arm data.
The following uses a mobile phone as an example to illustrate a method for using the device for determining the risk of large vessel occlusion according to the embodiment of the present disclosure:
Before entering the test, a confirmation key is displayed to confirm whether the patient himself or an evaluator (a third person) will hold the mobile phone.
One, when the evaluator (third person) is selected to hold the mobile phone
The evaluator holds the mobile phone, aims the mobile phone at the patient and carries out the next operation according to the voice prompt.
S1 "please smile at the camera and show your teeth"; collect and acquire facial image data.
S2 "please keep your head still and follow the phone's movement with your eyes"; collect and acquire the eye movement data.
S3 "please raise your left hand", after 2 seconds "please raise your right hand", time automatically for 10 seconds, then "lower both hands in turn as prompted": "please lower your left hand", and after 2 seconds "please lower your right hand"; collect and acquire the arm data.
S4 "please read after me: 'XXXXXX, XXXXXX'"; collect and acquire the language-cognition data.
After the relevant feature data is acquired, the final evaluation value can be output directly based on the FAST-ED scale.
Secondly, when the patient himself holds the mobile phone
The patient enters a video shooting mode and please follow the voice prompt/picture prompt to operate.
S1 "please smile at the camera and show your teeth"; collect facial image data.
S2 "please keep your head still and move your eyes with the icon"; collect the eye movement data.
S3 "please raise the mobile phone with your left hand", time automatically for 10 seconds, then "please lower your left hand"; "please raise the mobile phone with your right hand", time automatically for 10 seconds, then "please lower your right hand"; collect the arm data.
S4 "please read after me: 'XXXXXX, XXXXXX'"; collect the language-cognition data.
After the relevant feature data is acquired, the final evaluation value can be output directly based on the FAST-ED scale.
For example, it may also be:
S1 "please smile at the camera and show your teeth"; collect facial image data.
S2 "please raise the mobile phone with your left hand to a horizontal position, keep your head still, move the phone to the left, and follow the icon with your eyes"; after 10 seconds of automatic timing, "please lower your left hand". Then "please raise the mobile phone with your right hand to a horizontal position, keep your head still, move the phone to the right, and follow the icon with your eyes"; after 10 seconds of automatic timing, "please lower your right hand". Collect the arm data and the eye movement data.
S3 "please follow the reading of 'XXXXXX, xxxxxxxx'", and collect linguistic cognitive data.
A second determination module for determining a level of risk of macrovascular occlusion for the patient based on all of the assessed values.
After the first determination module determines the evaluation value corresponding to each item of feature data, the level of risk of large vessel occlusion of the patient may be determined from the evaluation values. When only one item of feature data is acquired, its evaluation value alone serves as the final evaluation value; when several items are acquired, each has its corresponding evaluation value, and the evaluation values are summed to give the final evaluation value. The final evaluation value is then compared against thresholds, and the risk of large vessel occlusion is determined from the result of the comparison: final values exceeding different thresholds correspond to different risk levels, from which a further treatment and transport plan can be decided. For example, with the final evaluation value taken as the sum of the first through fifth evaluation values: if the final value is 0 points, the risk of large vessel occlusion is low; if it is between 1 and 3 points, the risk is elevated, and the patient may be transported to the nearest primary or advanced stroke center; if it is 4 points or more, the risk is high, and the patient should be transported directly to an advanced stroke center with thrombectomy capability.
Of course, the judgment may also be made per evaluation value, with a corresponding threshold set for each: for example, if 1 to 2 of the evaluation values exceed their thresholds there is considered to be tertiary risk, if 3 to 4 exceed them there is secondary risk, and if all evaluation values exceed their thresholds there is primary risk.
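The summing-and-thresholding performed by the second determination module can be sketched as follows; the cut-offs (0 points, 1 to 3 points, 4 or more points) and the transport recommendations are the ones quoted above, while the function name and return format are illustrative assumptions:

```python
# Hypothetical sketch of the second determination module: sum the available
# per-feature evaluation values (missing items simply contribute nothing)
# and map the total to a risk level and transport recommendation.

def triage(evaluation_values):
    """Return (final score, recommendation) for a list of per-feature scores."""
    total = sum(evaluation_values)
    if total == 0:
        return total, "low risk of large vessel occlusion"
    if total <= 3:
        return total, ("elevated risk: transport to the nearest "
                       "primary or advanced stroke center")
    return total, ("high risk: transport directly to an advanced "
                   "stroke center with thrombectomy capability")

# Example: facial droop (1), arm weakness (2), speech abnormal (1) -> 4 points.
score, advice = triage([0, 1, 2, 0, 1])
```

The alternative per-item grading described above would instead count how many individual evaluation values exceed their own thresholds.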
Another embodiment of the present disclosure provides a method for determining a risk of occlusion of a large blood vessel, which is implemented by the determination device of the above embodiment, including the steps of:
acquiring at least one characteristic data of a patient, the characteristic data at least comprising eye movement data;
determining an evaluation value corresponding to the feature data based on the feature data;
determining a macrovascular occlusion risk level for the patient based on all of the assessed values.
In a specific embodiment, further, the determining, based on the feature data, an evaluation value corresponding to the feature data includes:
when the characteristic data is eyeball motion data, determining the eye area, the basic coordinates of the pupil and the position coordinates of the eye midline based on the eyeball motion data;
acquiring motion data of the pupil based on the eye area, the basic coordinates of the pupil and the coordinates of the eye center position;
based on the movement data of the pupil, a first evaluation value corresponding to the eye movement data is determined.
In another specific embodiment, further, the determining an evaluation value corresponding to the feature data based on the feature data includes:
Determining a face centerline based on the face image data in a case where the feature data includes the face image data;
determining face symmetry based on face monitoring points and the face centerline during execution of a predetermined facial action;
based on the face symmetry, a second evaluation value is determined.
The face monitoring points herein include at least one of an eye corner, an eyelid, an eyebrow center, an eyebrow tip, the philtrum, a mouth corner, and teeth.
In another specific embodiment, further, the determining, based on the feature data, an evaluation value corresponding to the feature data includes:
determining an arm strength value and an arm symmetry based on the first arm data when the feature data comprises the first arm data;
determining a third evaluation value based on the arm strength value and the arm symmetry.
The arm strength value is determined by whether the position coordinates of the same double upper limb identification points are consistent within a preset time, and the arm symmetry is determined by whether the position coordinates of the symmetrical double upper limb identification points on different arms are symmetrical within a preset time.
In another specific embodiment, further, the determining, based on the feature data, an evaluation value corresponding to the feature data includes:
when the feature data comprises second arm data, acquiring arm motion data within a preset time based on the second arm data;
determining a fourth evaluation value based on the arm movement data.
The second arm data here is acquired by an acceleration sensor and a gyro sensor.
In another specific embodiment, further, the determining, based on the feature data, an evaluation value corresponding to the feature data includes:
determining a cognitive accuracy and/or a voice matching degree based on the language cognition data when the feature data comprises language cognition data;
determining a fifth evaluation value based on the cognitive accuracy and/or the voice matching degree.
In the embodiment of the disclosure, the level of risk of macrovascular occlusion of a patient is calculated and determined according to eye movement data, facial image data, arm data, language cognition data and the like, wherein intelligent evaluation is realized by adopting a face, limb, eye and pupil recognition algorithm based on deep learning.
The embodiment of the disclosure identifies and judges the risk of the large vessel occlusion of a suspected stroke patient through the eyeball fixation dimension and other dimensions such as face, arm movement, voice recognition and the like, gives an evaluation value based on different conditions according to a FAST-ED scale, and carries out risk grading evaluation based on the evaluation value, thereby identifying whether the large vessel occlusion risk exists or not, recommending a corresponding stroke center to the treatment of the patient, shortening the transit time of the patient, improving the treatment rate of the stroke large vessel occlusion, and reducing the fatality rate and disability rate of the stroke patient.
The above embodiments are merely exemplary embodiments of the present disclosure, which is not intended to limit the present disclosure, and the scope of the present disclosure is defined by the claims. Various modifications and equivalents of the disclosure may occur to those skilled in the art within the spirit and scope of the disclosure, and such modifications and equivalents are considered to be within the scope of the disclosure.

Claims (10)

1. A device for determining the risk of occlusion of a large blood vessel, comprising:
an acquisition module for acquiring at least one characteristic data of a patient, the characteristic data comprising at least eye movement data;
a first determination module for determining an evaluation value corresponding to the feature data based on the feature data;
a second determination module for determining a level of risk of macrovascular occlusion for the patient based on all of the assessed values.
2. The apparatus according to claim 1, wherein the first determining means comprises:
a first determination unit configured to determine, based on the eye movement data, an eye area, base coordinates of a pupil, and eye midline position coordinates, when the feature data is eye movement data;
a first acquisition unit configured to acquire motion data of the pupil based on the eye area, the base coordinates of the pupil, and the eye center position coordinates;
a second determination unit for determining a first evaluation value corresponding to the eye movement data based on the movement data of the pupil.
3. The apparatus according to claim 2, wherein the first determining means comprises:
a third determination unit configured to determine a face center line based on the face image data in a case where the feature data includes the face image data;
a fourth determination unit for determining a face symmetry based on a face monitoring point and the face center line in performing a predetermined face action;
a fifth determination unit for determining a second evaluation value based on the face symmetry.
4. The apparatus of claim 3, wherein the facial monitoring points comprise at least one of an eye corner, an eyelid, an eyebrow center, an eyebrow tip, the philtrum, a mouth corner, and teeth.
5. The apparatus according to claim 2, wherein the first determining means comprises:
a sixth determination unit configured to determine an arm strength value and an arm symmetry based on the first arm data if the feature data includes the first arm data;
a seventh determining unit for determining a third evaluation value based on the arm strength value and the arm symmetry.
6. The determination apparatus according to claim 5, wherein the arm strength value is determined by whether position coordinates of identical double upper limb identification points are kept uniform for a predetermined time, and the arm symmetry is determined by whether position coordinates of symmetrical double upper limb identification points on different arms are kept symmetrical for a predetermined time.
7. The apparatus according to claim 2, wherein the first determining means comprises:
a second acquisition unit configured to acquire arm movement data within a predetermined time based on second arm data when the feature data includes the second arm data;
an eighth determination unit for determining a fourth evaluation value based on the arm motion data.
8. The determination device according to claim 7, characterized in that the second arm data is acquired by an acceleration sensor and a gyro sensor.
9. The apparatus according to claim 2, wherein the first determining means comprises:
a ninth determining unit for determining a cognitive accuracy and/or a voice matching degree based on the language-awareness data in a case where the feature data includes the language-awareness data;
a tenth determining unit for determining a fifth evaluation value based on the cognitive accuracy and/or the voice matching degree.
10. A method for determining the risk of macrovascular occlusion comprising the steps of:
acquiring at least one characteristic data of a patient, the characteristic data at least comprising eye movement data;
determining an evaluation value corresponding to the feature data based on the feature data;
determining a macrovascular occlusion risk level for the patient based on all of the assessed values.
CN202110319823.0A 2021-03-25 2021-03-25 Device and method for determining risk of great vessel occlusion Pending CN113506628A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110319823.0A CN113506628A (en) 2021-03-25 2021-03-25 Device and method for determining risk of great vessel occlusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110319823.0A CN113506628A (en) 2021-03-25 2021-03-25 Device and method for determining risk of great vessel occlusion

Publications (1)

Publication Number Publication Date
CN113506628A true CN113506628A (en) 2021-10-15

Family

ID=78008326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110319823.0A Pending CN113506628A (en) 2021-03-25 2021-03-25 Device and method for determining risk of great vessel occlusion

Country Status (1)

Country Link
CN (1) CN113506628A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115810425A (en) * 2022-11-30 2023-03-17 广州中医药大学第一附属医院 Method and device for predicting mortality risk level of septic shock patient

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106793942A (en) * 2014-02-10 2017-05-31 华柏恩视觉诊断公司 System, method and apparatus for measuring eye movement and pupillary reaction
CN109545299A (en) * 2018-11-14 2019-03-29 严洋 Cranial vascular disease risk based on artificial intelligence quickly identifies aid prompting system and method
CN110050308A (en) * 2016-12-02 2019-07-23 心脏起搏器股份公司 The detection of multisensor apoplexy
CN110400301A (en) * 2019-07-25 2019-11-01 中山大学中山眼科中心 A kind of cerebral apoplexy artificial intelligence screening method based on eye feature
CN111312389A (en) * 2020-02-20 2020-06-19 万达信息股份有限公司 Intelligent cerebral apoplexy diagnosis system
WO2021026552A1 (en) * 2019-08-06 2021-02-11 The Johns Hopkins University Platform to detect patient health condition based on images of physiological activity of a patient

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106793942A (en) * 2014-02-10 2017-05-31 华柏恩视觉诊断公司 System, method and apparatus for measuring eye movement and pupillary reaction
CN110050308A (en) * 2016-12-02 2019-07-23 心脏起搏器股份公司 The detection of multisensor apoplexy
CN109545299A (en) * 2018-11-14 2019-03-29 严洋 Cranial vascular disease risk based on artificial intelligence quickly identifies aid prompting system and method
CN110400301A (en) * 2019-07-25 2019-11-01 中山大学中山眼科中心 A kind of cerebral apoplexy artificial intelligence screening method based on eye feature
WO2021026552A1 (en) * 2019-08-06 2021-02-11 The Johns Hopkins University Platform to detect patient health condition based on images of physiological activity of a patient
CN111312389A (en) * 2020-02-20 2020-06-19 万达信息股份有限公司 Intelligent cerebral apoplexy diagnosis system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115810425A (en) * 2022-11-30 2023-03-17 广州中医药大学第一附属医院 Method and device for predicting mortality risk level of septic shock patient
CN115810425B (en) * 2022-11-30 2023-12-08 广州中医药大学第一附属医院 Method and device for predicting mortality risk level of sepsis shock patient

Similar Documents

Publication Publication Date Title
US20210202090A1 (en) Automated health condition scoring in telehealth encounters
US20220331028A1 (en) System for Capturing Movement Patterns and/or Vital Signs of a Person
JP2024016249A (en) Digital eye examination method for remote assessment by physician
US20120323087A1 (en) Affective well-being supervision system and method
CN111724879A (en) Rehabilitation training evaluation processing method, device and equipment
CN110269587B (en) Infant motion analysis system and infant vision analysis system based on motion
CN112233516A (en) Grading method and system for physician CPR examination training and examination
US20180249967A1 (en) Devices, systems, and associated methods for evaluating a potential stroke condition in a subject
CN111312389A (en) Intelligent cerebral apoplexy diagnosis system
WO2022141894A1 (en) Three-dimensional feature emotion analysis method capable of fusing expression and limb motion
Gupta StrokeSave: a novel, high-performance mobile application for stroke diagnosis using deep learning and computer vision
Okada et al. Dementia scale classification based on ubiquitous daily activity and interaction sensing
CN113506628A (en) Device and method for determining risk of great vessel occlusion
Delva et al. Investigation into the potential to create a force myography-based smart-home controller for aging populations
CN115040114A (en) Remote rehabilitation system and training method based on virtual reality and man-machine interaction
Mahmoud et al. Occupational therapy assessment for upper limb rehabilitation: A multisensor-based approach
TW202221621A (en) Virtual environment training system for nursing education
EP4325517A1 (en) Methods and devices in performing a vision testing procedure on a person
CN115497621A (en) Old person cognitive status evaluation system
CN115661101A (en) Premature infant retinopathy detection system based on random sampling and deep learning
EP4258979A1 (en) System and method for artificial intelligence baded medical diagnosis of health conditions
Perera et al. Intelligent Wheelchair with Emotion Analysis and Voice Recognition
CN111428540A (en) Method and device for outputting information
WO2023189313A1 (en) Program, information processing device, and information processing method
Zhang et al. Recognition for Robot First Aid: Recognizing a Person's Health State after a Fall in a Smart Environment with a Robot

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination