CN115192003A - Automatic assessment method of sedation level and related product - Google Patents

Automatic assessment method of sedation level and related product

Info

Publication number
CN115192003A
CN115192003A (application number CN202210720035.7A)
Authority
CN
China
Prior art keywords
sedation
level
result
evaluation
assessment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210720035.7A
Other languages
Chinese (zh)
Inventor
孙巧杰
孙永樯
孙牧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Corp
Priority to CN202210720035.7A
Publication of CN115192003A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1103 Detecting eye twinkling
    • A61B5/1118 Determining activity level
    • A61B5/1126 Measuring movement of the entire body or parts thereof using a particular sensing technique
    • A61B5/1128 Measuring movement of the entire body or parts thereof using a particular sensing technique using image analysis
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data involving training the classification device

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Surgery (AREA)
  • Public Health (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Physiology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Ophthalmology & Optometry (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application discloses a method for automatically assessing sedation level, and related products. The method comprises: acquiring image data and optical flow field data of a target subject over the same time period; obtaining a first sedation level assessment result for the target subject by processing the image data, and a second sedation level assessment result by processing the optical flow field data; and obtaining an integrated sedation level assessment result for the target subject based on the first and second assessment results. The method enables intelligent, real-time monitoring of a person's sedation level, allowing the assessment to be performed fully automatically or with only a small amount of medical staff effort, and thereby contributes to the realization of intelligent intensive care.

Description

Automatic assessment method of sedation level and related product
Technical Field
The application relates to the technical field of artificial intelligence, in particular to an automatic assessment method of a sedation level and a related product.
Background
With the development of modern medicine, the Intensive Care Unit (ICU) has become a very important clinical department, and improving the degree of ICU informatization has become an indispensable part of hospital information construction. Patients monitored in an ICU are generally in critical condition, and the department's medical staff must have strong professional theoretical knowledge and clinical working ability.
Because ICU patients are critically ill, rescue work is a race against time: medical staff must learn of any change in a patient's condition immediately, so that the patient can be treated promptly and the best opportunity for rescue is not missed. The pressure on medical staff to complete the associated care is therefore very high, and fatigue or excessive workload may lead to misjudgments that ultimately affect patient care.
Sedation level assessment is a very important task in intensive care. It generally requires medical staff to observe patients continuously, which adds to their burden. At the same time, manual sedation level assessment cannot be performed around the clock, so abnormalities in a patient's condition may not be discovered in time, potentially delaying diagnosis and treatment.
Disclosure of Invention
In view of these problems, the present application provides an automatic sedation level assessment method and related products, aiming to solve the problems that manual assessment of ICU patients' sedation levels overburdens medical staff and that such assessment is not timely.
The embodiment of the application discloses the following technical scheme:
a first aspect of the present application provides a method for automated assessment of a level of sedation comprising:
acquiring image data and optical flow field data of a target subject over the same time period;
obtaining a first sedation level assessment result for the target subject by processing the image data; and obtaining a second sedation level assessment result for the target subject by processing the optical flow field data;
obtaining an integrated sedation level assessment result for the target subject based on the first sedation level assessment result and the second sedation level assessment result.
In an alternative implementation, the integrated sedation level assessment result is either a first type of assessment result or a second type of assessment result, wherein the first type indicates a higher sedation level than the second type;
after the obtaining a result of the integrated assessment of the sedation level in the target subject, the method further comprises:
determining, according to the accumulated number of times the sedation level assessment result has been of the second type from an initial counting time to the current time, whether to generate a prompt signal regarding the sedation level of the target subject.
In an optional implementation manner, the determining whether to generate a prompt signal about the sedation level of the target subject currently according to the accumulated number of times that the assessment result of the sedation level from the initial counting time to the current time is the second type assessment result includes:
generating a prompt signal regarding the level of sedation of the target subject when a ratio of the accumulated number of times to a number of times the level of sedation of the target subject was cumulatively evaluated from the initial count time to the current time exceeds a preset threshold.
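The ratio test described above can be sketched as follows. The function name and the default threshold of 0.5 are illustrative assumptions, not values taken from the patent:

```python
def should_alert(second_type_count, total_assessments, threshold=0.5):
    """Decide whether to generate a prompt signal about the subject's
    sedation level: alert when the fraction of assessments that were of
    the second type (lower sedation) exceeds the preset threshold."""
    if total_assessments == 0:
        return False  # nothing has been assessed since the initial count time
    return second_type_count / total_assessments > threshold
```

A caregiver-facing system would reset both counters at the initial counting time and re-evaluate this ratio after each new integrated assessment.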
In an optional implementation, the method further includes:
establishing a data set; the data set includes a plurality of sets of training samples; the training samples comprise image training data, optical flow field training data and a sedation level label;
training a first evaluation model through image training data and a sedation level label in the training sample, and training a second evaluation model through optical flow field training data and a sedation level label in the training sample;
training a fusion coefficient fusing the first evaluation model and the second evaluation model based on the output result of the first evaluation model, the output result of the second evaluation model, and the sedation level label;
evaluating the training effects of the first evaluation model and the second evaluation model, and stopping training when the training effect evaluation is qualified to obtain a trained first evaluation model, a trained second evaluation model and a trained fusion coefficient;
wherein the trained first evaluation model is used for processing the image data to obtain a first sedation level evaluation result for the target object, the trained second evaluation model is used for processing the optical flow field data to obtain a second sedation level evaluation result for the target object, and the trained fusion coefficient is used for fusing the first sedation level evaluation result and the second sedation level evaluation result to obtain the integrated sedation level evaluation result.
In an alternative implementation, the training the fusion coefficient fusing the first evaluation model and the second evaluation model based on the output result of the first evaluation model, the output result of the second evaluation model, and the sedation level label specifically includes:
fusing the output result of the first evaluation model and the output result of the second evaluation model by adopting a current fusion coefficient to obtain a sedation level fusion prediction result corresponding to the training sample;
comparing the sedation level fusion prediction result with a sedation level label in the training sample to obtain cross entropy as a sedation level evaluation error;
back-propagating the sedation level assessment error to optimize current fusion coefficients and parameters of the first and second assessment models.
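The fusion-coefficient training described above can be illustrated with a minimal sketch. For clarity it optimizes only the fusion coefficient by gradient descent on the cross entropy, holding both models' outputs fixed; the patent additionally back-propagates the error into the parameters of the two assessment models:

```python
def fuse(p1, p2, alpha):
    # Convex combination of the two models' class-probability vectors.
    return [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]

def train_alpha(samples, alpha=0.5, lr=0.1, epochs=30):
    """Each sample is (p1, p2, label): the two models' predicted class
    probabilities and the clinician's sedation level label. The loss is
    the cross entropy -log p[label] of the fused prediction."""
    for _ in range(epochs):
        for p1, p2, label in samples:
            p = fuse(p1, p2, alpha)
            # d(-log p[label]) / d(alpha) = -(p1[label] - p2[label]) / p[label]
            grad = -(p1[label] - p2[label]) / p[label]
            # Gradient step, keeping alpha a valid mixing weight in [0, 1].
            alpha = min(max(alpha - lr * grad, 0.0), 1.0)
    return alpha
```

When the image-based model is the more reliable channel on the training samples, the coefficient drifts toward weighting it more heavily, and vice versa.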
In an optional implementation manner, the acquiring image data and optical flow field data of the target object in the same time period specifically includes:
acquiring video data of the target object in the time period;
separating the image data and the optical flow field data from the video data.
A second aspect of the present application provides an apparatus for automatically evaluating a sedation level, comprising:
an acquisition module for acquiring image data and optical flow field data of a target subject over the same time period;
an evaluation module for obtaining a first sedation level evaluation result for the target subject by processing the image data; and obtaining a second sedation level assessment result for the target subject by processing the optical flow field data;
a fusion module for obtaining an integrated sedation level assessment result for the target subject based on the first and second sedation level assessment results.
In an alternative implementation manner, the integrated assessment result of the sedation level is a first type assessment result or a second type assessment result, wherein the first type assessment result indicates a higher sedation level than the second type assessment result;
the device further comprises:
a prompting module for determining, according to the accumulated number of times the sedation level assessment result has been of the second type from the initial counting time to the current time, whether to generate a prompt signal regarding the sedation level of the target subject.
In an optional implementation manner, the prompting module is specifically configured to generate a prompting signal related to the sedation level of the target subject when a ratio of the accumulated number of times to the number of times the sedation level of the target subject is cumulatively evaluated from the initial counting time to the current time exceeds a preset threshold.
In an optional implementation, the apparatus further includes:
the data set establishing module is used for establishing a data set; the data set includes a plurality of sets of training samples; the training samples comprise image training data, optical flow field training data and a sedation level label;
the model training module is used for training a first evaluation model through the image training data and the sedation level label in the training sample, and training a second evaluation model through the optical flow field training data and the sedation level label in the training sample; training a fusion coefficient fusing the first evaluation model and the second evaluation model based on the output result of the first evaluation model, the output result of the second evaluation model, and the sedation level label; evaluating the training effects of the first evaluation model and the second evaluation model, and stopping training when the training effect evaluation is qualified to obtain a trained first evaluation model, a trained second evaluation model and a trained fusion coefficient;
wherein the trained first evaluation model is used for processing the image data to obtain a first sedation level evaluation result for the target object, the trained second evaluation model is used for processing the optical flow field data to obtain a second sedation level evaluation result for the target object, and the trained fusion coefficient is used for fusing the first sedation level evaluation result and the second sedation level evaluation result to obtain the integrated sedation level evaluation result.
In an optional implementation manner, the model training module specifically includes:
the fusion prediction unit is used for fusing the output result of the first evaluation model and the output result of the second evaluation model by adopting a current fusion coefficient to obtain a sedation level fusion prediction result corresponding to the training sample;
the error evaluation unit is used for comparing the sedation level fusion prediction result with the sedation level label in the training sample to obtain cross entropy as a sedation level evaluation error;
a parameter optimization unit for back-propagating the sedation level assessment error to optimize current fusion coefficients and parameters of the first and second assessment models.
In an optional implementation manner, the obtaining module specifically includes:
a video acquisition unit, configured to acquire video data of the target object in the time period;
and the data separation unit is used for separating the image data and the optical flow field data from the video data.
A third aspect of the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method for automated assessment of a sedation level according to any implementation of the first aspect.
A fourth aspect of the present application provides an electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing said computer program in said memory to carry out the steps of the method for automated assessment of a sedation level according to any implementation of the first aspect.
Compared with the prior art, the method has the following beneficial effects:
according to the automatic assessment method for the sedation level and the related products, firstly, image data and optical flow field data of a target object in the same time period are obtained; then, a first sedation level evaluation result of the target object is obtained by processing the image data, and a second sedation level evaluation result of the target object is obtained by processing the optical flow field data; and finally, obtaining a comprehensive assessment result of the sedation level of the target object based on the assessment result of the first sedation level assessment result and the second sedation level assessment result. The sedation level is evaluated through the image data and the optical flow field data respectively, and finally, a comprehensive evaluation result is obtained based on the sedation levels evaluated through the image recognition and the behavior recognition, so that the sedation level evaluation result evaluated through one way can be corrected accurately. Therefore, the embodiment of the application can be automatically realized, the workload of medical staff can be reduced, and the timeliness of the assessment of the sedation level is improved. The accuracy of the sedation level evaluation result can be improved while the automation is realized, so that the final evaluation result is more reliable and timely.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description are briefly introduced below. The drawings described below are obviously only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a method for automated assessment of a level of sedation provided by an embodiment of the present application;
FIG. 2 is a flow chart of another method for automated assessment of sedation level provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of training, testing and applying a first evaluation model according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a fusion training of two evaluation models according to an embodiment of the present application;
FIG. 5 is a flow chart of yet another method for automated assessment of a level of sedation as provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram of an apparatus for automatically evaluating a sedation level according to an embodiment of the present application;
fig. 7 is a hardware configuration diagram of an automatic sedation level assessment apparatus according to an embodiment of the present application.
Detailed Description
Sedation level assessment is a very important task in intensive care. It generally requires medical staff to observe patients continuously, which adds to their burden. Meanwhile, manual sedation level assessment cannot be performed around the clock, so abnormalities in a patient's condition may not be discovered in time, potentially delaying diagnosis and treatment. In addition, fatigue and excessive workload may cause medical staff to make misjudgments that ultimately harm patient care.
In view of the above, the present application provides an automatic sedation level assessment method and related products. The sedation level of the target subject is assessed both by image recognition and by behavior recognition, the results of the two channels are considered together, and a sedation level assessment result with higher accuracy is finally obtained. The scheme also reduces the workload of medical staff and enables real-time assessment of the sedation state.
The technical solution of the present application is described below with reference to the following examples and accompanying drawings.
Fig. 1 is a flowchart of a method for automatically evaluating a level of sedation according to an embodiment of the present application. The automated assessment method of sedation level as shown in fig. 1 comprises:
step 101, acquiring image data and optical flow field data of a target object in the same time period.
In the embodiments of the present application, the target subject refers to a subject whose sedation level is to be assessed. In a particular implementation, the target subject may be anyone in a setting where sedation level assessment is needed, such as a critically ill patient monitored in a hospital ICU.

It should be noted that in some scenarios, assessing the target subject's sedation level may require the permission or consent of the subject or the subject's guardian, and data collected during assessment must not be used for any purpose other than sedation assessment without such consent. When the scheme is used in a medical scenario such as an ICU, data acquisition may be assumed necessary for the life and health of the target subject, but it still needs to be supervised by the relevant units or personnel to ensure reasonable and lawful use of the data.

In an alternative implementation, video data of the target subject over a certain period may first be acquired, and the image data and optical flow field data then separated from it. In other implementations, this process may also be performed in real time.
The image data are static, two-dimensional data of the images themselves. Optical flow field data describe the displacement of the target subject between two frames separated by a very short time interval. The optical flow field is the two-dimensional instantaneous velocity field formed by all pixels in an image, where each two-dimensional velocity vector is the projection onto the imaging plane of the three-dimensional velocity vector of the corresponding visible point in the scene. In space, motion is described by a motion field; in the image plane, the motion of an object appears as differing gray-level distributions across the images of a sequence. The motion field in space is thus transferred to the images and represented as an optical flow field: a two-dimensional vector field that reflects the gray-level change trend at each image point, and can be regarded as the instantaneous velocity field produced by the movement of gray-valued pixels across the image plane. The information it contains is the instantaneous motion velocity vector of each image point.
The image data include at least an eye image, and the optical flow field data may include a whole-body optical flow field. The eye image shows the static state of the target subject's eyes at a moment within the period; the whole-body optical flow field describes the displacement of the subject's whole body between closely spaced frames within the period, reflecting body movement and facilitating action recognition. In other implementations, the image data may also cover facial regions such as the eyebrows and lips, further improving the accuracy of the assessed sedation level.
In a particular implementation, the video data may be captured by a video acquisition device, such as a surveillance camera installed in the ICU. The separation of image data and optical flow field data can be performed by a device with video processing and information extraction capabilities. On the one hand, the video is sampled at fixed times into still images; on the other hand, the sampled images are arranged into a time-ordered sequence from which the optical flow field data are computed.
For example, video data covering the most recent 3 minutes may be acquired and separated; the time period is then from 3 minutes ago to the current moment. The period's position and length can be set according to actual requirements and are not limited here.
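The timed-sampling step can be sketched as follows; the sampling rate of one image per second is an assumed example, and the consecutive sampled frames would then form the frame pairs over which the optical flow fields are computed:

```python
def sample_frame_indices(num_frames, fps, sample_hz=1.0):
    """Return the indices of frames to keep when sampling a video at
    sample_hz (e.g. one still image per second from a 25 fps stream).
    Adjacent kept frames also serve as the pairs for optical flow."""
    step = max(1, round(fps / sample_hz))
    return list(range(0, num_frames, step))
```

For a 3-minute clip at 25 fps (4500 frames), sampling at 1 Hz yields 180 still images and 179 frame pairs for optical flow estimation.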
Step 102, obtaining a first sedation level assessment result for the target subject by processing the image data; and obtaining a second sedation level assessment result for the target subject by processing the optical flow field data.
In practical applications, an image recognition algorithm (for example, the YOLOv5 algorithm) may be used to analyze the multiple frames within the time period and assess the target subject's sedation level from the eye-closure state in each image. Under sedation, a critically ill patient is generally asleep and the eyes are predominantly closed; in a non-sedated state the eyes are open, so the sedation level can be preliminarily judged from the open/closed state of the eyes. For example, if many frames show open eyes, the sedation level is judged to be low; conversely, if many frames show closed eyes, it is judged to be high. A higher sedation level means a state closer to sedation, and a lower level a state closer to non-sedation.
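The frame-counting heuristic above can be sketched as follows. The function and its labels are illustrative, and the per-frame eye states are assumed to come from a detector such as YOLOv5 run on each sampled frame:

```python
def sedation_from_eye_states(eye_states):
    """eye_states: per-frame booleans, True = eyes detected as closed.
    A majority of closed-eye frames suggests a high sedation level;
    a majority of open-eye frames suggests a low one."""
    closed = sum(eye_states)
    return "high" if closed > len(eye_states) / 2 else "low"
```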
In addition, a sedation level assessment result for the target subject can be obtained from the optical flow field data through an action recognition algorithm (such as the iDT algorithm). For example, if the optical flow field data show frequent hand-raising by the target subject, the sedation level is judged to be low; if hand-raising is rare, it is judged to be high. Hand-raising is only one example of a recognizable action; other actions such as head shaking or leg lifting can also be recognized. To distinguish the results obtained by the two channels (image recognition and behavior recognition), the embodiments of the present application call the result obtained from the image data the first sedation level assessment result and the result obtained from the optical flow field data the second sedation level assessment result.
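The motion-frequency heuristic can be sketched in the same spirit. Here each value is assumed to be the mean optical-flow magnitude (in pixels of displacement) over one frame pair, and both thresholds are illustrative:

```python
def sedation_from_motion(flow_magnitudes, motion_thresh=2.0, frac_thresh=0.2):
    """flow_magnitudes: mean optical-flow magnitude per frame pair.
    Frequent large movements (hand-raising, head shaking, leg lifting)
    suggest a low sedation level; stillness suggests a high one."""
    active = sum(1 for m in flow_magnitudes if m > motion_thresh)
    return "low" if active / len(flow_magnitudes) > frac_thresh else "high"
```

A full action-recognition pipeline such as iDT classifies specific movements rather than thresholding raw magnitudes; this sketch only captures the frequent-versus-rare-movement logic of the example.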
The above is only one example of step 102; other approaches, such as a pre-trained assessment model, may also be used in practice. That implementation is described in detail later and is not repeated here.
Step 103, obtaining an integrated sedation level assessment result for the target subject based on the first and second sedation level assessment results.
In practical applications, the first sedation level assessment result obtained by image recognition may be inaccurate because of the camera angle, the size of the subject's eyes, and so on. For example, open eyes may be detected as closed when the camera angle is unfavorable or the subject's eyes are small, producing an error in the final result. Investigation shows that patients with open eyes tend to make more small movements, while sleeping patients make fewer, so the first sedation level assessment result can be corrected using the second, obtained through motion assessment. In step 103, the two results may be fused to obtain the integrated sedation level assessment result for the target subject; because it combines both channels, its precision and accuracy are improved over the first result alone.
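As a minimal sketch of this fusion step, assume each channel outputs a probability vector over sedation classes (here 0 = low, 1 = high, an assumed encoding) and that a fusion coefficient has already been trained; the 0.6 default is illustrative:

```python
def fuse_results(p_image, p_flow, alpha=0.6):
    """Fuse the first (image-based) and second (flow-based) assessment
    results with coefficient alpha; return the winning class index."""
    fused = [alpha * a + (1 - alpha) * b for a, b in zip(p_image, p_flow)]
    return fused.index(max(fused))
```

This reproduces the correction scenario: when the image channel weakly reports closed eyes but the flow channel strongly reports frequent movement, the fused result can flip to the low-sedation class.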
The above provides the automated sedation level assessment method of the embodiments of the present application. The sedation level is assessed separately from the image data and the optical flow field data, and a comprehensive result is then obtained from the two channels, so that errors in either single channel's result can be corrected. The method can run automatically, reducing the workload of medical staff and improving the timeliness of sedation assessment, while combining the channels improves the accuracy of the result, making the final assessment more reliable and timely.
The application also provides a method for automatically evaluating the sedation level through the model. Fig. 2 is a flow chart of another method for automated assessment of a level of sedation provided by an embodiment of the present application. As shown in FIG. 2, the training process for the model is first introduced through steps 201-204.
Step 201, establishing a data set, wherein the data set comprises a plurality of groups of training samples, and the training samples comprise image training data, optical flow field training data and a sedation level label.
The image training data and the optical flow field training data are of the same nature as the image data and the optical flow field data described in the foregoing embodiments; that is, they are likewise different types of data obtained over the same period. In a particular implementation, the image training data and optical flow field training data may be extracted from historically acquired patient videos. The sedation level label may then be the sedation level manually assessed by experienced medical personnel for the patient and period to which the image training data and optical flow field training data pertain. During training, the sedation level label serves as the ground truth guiding model training.
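As a concrete illustration, a training sample of this kind might be represented as follows. This is a minimal sketch: the field names, array shapes, and the use of a Python dataclass are assumptions for illustration, not part of the application.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class TrainingSample:
    """One training sample: image frames, optical flow fields, and a label.

    Shapes are illustrative only: T frames of H x W grayscale images and
    T-1 optical flow fields with (dx, dy) components per pixel.
    """
    image_data: np.ndarray         # shape (T, H, W)
    optical_flow_data: np.ndarray  # shape (T-1, H, W, 2)
    sedation_label: int            # e.g. 0 = not sedated, 1 = sedated


# Build a dummy sample from a hypothetical 10-frame clip
sample = TrainingSample(
    image_data=np.zeros((10, 64, 64)),
    optical_flow_data=np.zeros((9, 64, 64, 2)),
    sedation_label=1,
)
print(sample.image_data.shape, sample.optical_flow_data.shape, sample.sedation_label)
# (10, 64, 64) (9, 64, 64, 2) 1
```

In practice each sample would be cut from a patient video, with the label assigned by medical staff for that same time window.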
Step 202, training a first evaluation model through the image training data and the sedation level label in the training sample, and training a second evaluation model through the optical flow field training data and the sedation level label in the training sample.
In this step, the first evaluation model is a model required to output a sedation level evaluation result from image data, and the second evaluation model is a model required to output a sedation level evaluation result from optical flow field data; both models are to be trained.
In an alternative implementation, the image training data and the sedation level labels in the training samples are taken as the set of data corresponding to the training of the first evaluation model, and the optical flow field training data and the sedation level labels in the training samples are taken as the set of data corresponding to the training of the second evaluation model. The training of the first evaluation model and the second evaluation model may be performed sequentially or in parallel.
Fig. 3 is a schematic diagram of training, testing and applying a first evaluation model according to an embodiment of the present disclosure. In this example implementation, a critical-patient open/closed-eye status data set was constructed and divided into a training set, a validation set, and a test set at a ratio of 6:1:3. That is, in the data set constructed here, for every 10 pieces of data, 6 are used to train the model, 1 to validate it, and 3 to test it. In this example implementation, training the model refers to training the evaluation model using the training set and validation set; testing the model refers to testing the accuracy and robustness of the obtained model using the test set; and applying the model refers to predicting new data with the model after it passes testing.
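The 6:1:3 division described above can be sketched as follows. This is a minimal illustration; the shuffle seed and the list-based dataset are assumptions, not part of the application.

```python
import random


def split_6_1_3(samples, seed=0):
    """Shuffle a dataset and split it into train/validation/test at 6:1:3."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # deterministic shuffle for the sketch
    n = len(samples)
    n_train = n * 6 // 10
    n_val = n * 1 // 10
    train = samples[:n_train]
    val = samples[n_train:n_train + n_val]
    test = samples[n_train + n_val:]  # remainder goes to the test set
    return train, val, test


train, val, test = split_6_1_3(range(100))
print(len(train), len(val), len(test))  # 60 10 30
```

With 100 samples this yields exactly 60/10/30; for sizes not divisible by 10, the remainder falls into the test set.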
As shown in Fig. 3, the training phase of the model involves several terms: dropout, BN, cross validation, learning rate self-decay, etc. Dropout means that a dropout strategy can be used to prevent over-fitting during model training; BN means that a batch normalization strategy is used to prevent the gradient-vanishing problem during back propagation; cross validation refers to the validation strategy adopted in training; and learning rate self-decay refers to the learning rate strategy employed in training the first evaluation model. As shown in Fig. 3, the testing phase of the model involves several terms: recall, precision, multi-class average precision, IoU (intersection-over-union), etc. Recall, precision, multi-class average precision and the IoU value can be used as evaluation indexes of the training effect of the first evaluation model. As shown in Fig. 3, the application phase of the model involves terms such as accuracy and robustness, which reflect the performance of the model in actual use.
Similarly to the training of the first evaluation model, the data set for training the second evaluation model may be divided and put through the training, testing, and other phases with reference to the process shown in Fig. 3, with an appropriate strategy set in its training phase and appropriate evaluation indexes set in its testing phase.
It should be noted that, in the trained first evaluation model and second evaluation model, the algorithm used to finally determine the sedation level of the target subject is the softmax algorithm. For ease of understanding, this is expressed below with vectors. In the vector

$\vec{a} = [a_1, a_2]$,

$a_1$ and $a_2$ represent two categories, for example two different sedation levels: $a_1$ indicating sedation and $a_2$ indicating no sedation. It is understood that the whole model evaluation process can be regarded as a classification task, and the number of elements in $\vec{a}$ equals the number of categories in that task. Here $\vec{a}$ is the feature vector obtained after passing through the fully connected layer of the model. $P(\mathrm{class}_i)$ in the following formula represents the probability that the evaluation result is the $i$-th of the classifications:

$$P(\mathrm{class}_i) = \frac{e^{a_i}}{\sum_{j=1}^{c} e^{a_j}}$$

In the above example, $i$ takes the values 1 and 2, and $c$ denotes the total number of categories. It can be seen from the formula that after $\vec{a}$ passes through softmax, a new vector

$\vec{b} = [b_1, b_2]$

is obtained, where $b_1$ and $b_2$ are real numbers between 0 and 1 inclusive satisfying $b_1 + b_2 = 1$. In the classification process, the class corresponding to the maximum value is selected as the classification result.
Step 203, training a fusion coefficient for fusing the first evaluation model and the second evaluation model based on the output result of the first evaluation model, the output result of the second evaluation model and the sedation level label.
However, both the motion assessment and the open/closed-eye assessment carry a certain error, and even a sleeping patient may exhibit motion, so it is not reasonable to directly negate the image evaluation result with the motion evaluation result. For this reason, the present application designs a method of fusing the first evaluation model and the second evaluation model so as to train a model with better evaluation performance, whose final output addresses the technical problems of the prior art. The fusion of the two models' evaluation results by the fusion coefficient is shown in the following formula:

$$f(x) = \alpha f_1(x) + (1 - \alpha) f_2(x)$$

where $f(x)$ represents the fused sedation level prediction, $f_1(x)$ the output of the first evaluation model, and $f_2(x)$ the output of the second evaluation model. In this algorithm, if $f_1(x) = 0$ and $f_2(x) = 0$, then $f(x) = 0$; if $f_1(x) = 0$ and $f_2(x) = 1$, then $f(x) = (1 - \alpha) f_2(x)$; if $f_1(x) = 1$ and $f_2(x) = 0$, then $f(x) = \alpha f_1(x)$; and if $f_1(x) = 1$ and $f_2(x) = 1$, then $f(x) = 1$. It follows that the algorithm satisfies the end-point cases, and the value range of $f(x)$ is $[0, 1]$.
α represents a fusion coefficient (which can be regarded as a weight greater than 0 and less than 1), and the value of the coefficient can be determined in training. In step 203 of the embodiment of the present application, joint training needs to be performed on the two models to determine the final value of α.
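The fused prediction and its end-point behaviour can be checked with a few lines of Python. Here α = 0.6 is an arbitrary illustrative value; in the application α is learned during training.

```python
def fuse(f1, f2, alpha=0.6):
    """Fuse the two model outputs: f(x) = alpha*f1(x) + (1-alpha)*f2(x)."""
    return alpha * f1 + (1 - alpha) * f2


# End-point cases from the text (with 0 < alpha < 1):
print(fuse(0, 0))  # 0.0
print(fuse(0, 1))  # 1 - alpha = 0.4
print(fuse(1, 0))  # alpha = 0.6
print(fuse(1, 1))  # 1.0
# For f1, f2 in [0, 1], f(x) always stays within [0, 1].
```

Because α lies strictly between 0 and 1, the fused result is a convex combination of the two model outputs, so neither model can fully override the other.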
An example implementation of this step is described below: the output result of the first evaluation model and the output result of the second evaluation model are fused using the current fusion coefficient to obtain the sedation level fusion prediction result corresponding to the training sample; the fusion prediction result is compared with the sedation level label in the training sample, and the cross entropy is taken as the sedation level evaluation error; the sedation level evaluation error is then back-propagated to optimize the current fusion coefficient and the parameters of the first and second evaluation models. Fig. 4 is a schematic diagram of the fusion training of the two evaluation models. The evaluation results of the first and second evaluation models are forward-propagated to the final fusion stage, and the feedback from the fusion is used to optimize the parameters of both models. "Cross entropy" in Fig. 4 means that the error is computed as a cross-entropy loss; "BP" refers to the error back-propagation algorithm; and "parameter optimization" refers to optimizer-based optimization.
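The α-optimization part of this joint training can be sketched with single-parameter gradient descent. This is a deliberately simplified illustration: the two model outputs are treated as fixed probabilities and only α is updated, whereas in the application the model parameters are optimized as well, and the toy data values are invented for the sketch.

```python
# Toy outputs: (f1(x), f2(x), label); values are illustrative only.
data = [(0.9, 0.4, 1), (0.2, 0.1, 0), (0.8, 0.7, 1), (0.3, 0.6, 0)]
alpha, lr = 0.5, 0.1

for _ in range(200):
    grad = 0.0
    for f1, f2, y in data:
        p = alpha * f1 + (1 - alpha) * f2  # fused prediction
        # Gradient of the binary cross-entropy loss w.r.t. alpha:
        # dL/dalpha = dL/dp * dp/dalpha, with dL/dp = (p - y) / (p * (1 - p))
        # and dp/dalpha = f1 - f2.
        dldp = (p - y) / max(p * (1 - p), 1e-12)
        grad += dldp * (f1 - f2)
    alpha -= lr * grad / len(data)
    alpha = min(max(alpha, 0.01), 0.99)  # keep alpha inside (0, 1)

print(round(alpha, 2))
```

On this toy data the image-based output f1 is the more reliable predictor, so gradient descent drives α toward its upper bound; with real models the balance between the two channels would settle wherever the cross-entropy loss is minimized.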
Step 204, evaluating the training effects of the first evaluation model and the second evaluation model, and stopping training when the training effect evaluation is qualified, to obtain a trained first evaluation model, a trained second evaluation model and a trained fusion coefficient.
Wherein the trained first evaluation model is used for processing the image data to obtain a first sedation level evaluation result for the target object, the trained second evaluation model is used for processing the optical flow field data to obtain a second sedation level evaluation result for the target object, and the trained fusion coefficient is used for fusing the first sedation level evaluation result and the second sedation level evaluation result to obtain the integrated sedation level evaluation result.
Step 205, acquiring image data and optical flow field data of the target object in the same time period.
Here, the image data and the optical flow field data may be data separated from the patient video data acquired in real time. These data serve as input for the actual application of the trained model described above.
Step 206, processing the image data through the trained first evaluation model to obtain a first sedation level evaluation result of the target object; and processing the optical flow field data through a trained second evaluation model to obtain a second sedation level evaluation result for the target subject.
Since the first evaluation model and the second evaluation model have been trained, and their training effect has been sufficiently verified and tested, at this point the image data and the optical flow field data may be acquired and input respectively into the trained first evaluation model and the trained second evaluation model, which then carry out their respective evaluations.
Step 207, fusing the first sedation level evaluation result and the second sedation level evaluation result through the trained fusion coefficient to obtain a comprehensive assessment result of the sedation level.
Since both motion recognition and image recognition carry a certain error, and even a sleeping patient may show motion, it is not reasonable to directly negate the image evaluation result with the motion evaluation result. Fusing the two evaluation results through the fusion coefficient effectively mitigates both the inaccuracy caused in one evaluation channel by the camera angle or the size of the subject's eyes, and the poor precision caused by relying on a single factor. By considering both evaluation results together, the accuracy and precision of sedation level assessment are effectively improved.
In practice, the target subject may fall asleep again after a brief awakening. During such a transient waking state, the evaluation results from both the image data and the optical flow field data indicate a non-sedated state; however, issuing a sedation level prompt at that moment would frequently mislead medical staff. To address this issue, the present application further provides another automated assessment method of sedation level; see the flowchart shown in Fig. 5.
Step 501, acquiring image data and optical flow field data of the target object in the same time period.
Step 502, obtaining a first sedation level assessment result for the target subject by processing the image data; and obtaining a second sedation level assessment result for the target subject by processing the optical flow field data.
Step 503, obtaining the integrated assessment result of the sedation level for the target subject based on the assessment result of the first sedation level and the second sedation level.
The implementation of steps 501 to 503 is substantially the same as that of steps 101 to 103 in the foregoing embodiments; for related descriptions, refer to the embodiments described above, which are not repeated here. It should be noted that the above evaluation process is performed continuously, and a comprehensive sedation level assessment result is obtained after each execution of step 503. The time windows of successive evaluations may overlap or be independent. Performing the assessment at a preset frequency or period improves the real-time performance of sedation level assessment. It will be appreciated that even if the evaluation actions of steps 501-503 are performed at a preset frequency or period, it may be impossible to evaluate the sedation level if the quality of the acquired raw data is poor or if no valid eye or body part can be identified in it.
In one example implementation, the comprehensive sedation level assessment result is either a first type of assessment result or a second type of assessment result, where the first type indicates a higher level of sedation than the second type.
After obtaining the result of comprehensively evaluating the sedation level of the target subject, the method for automatically evaluating the sedation level provided by the embodiment of the present application further includes:
Step 504, judging whether to generate a prompt signal regarding the sedation level of the target subject according to the number of times the sedation level assessment result has been of the second type from the initial counting time to the current time.
In the embodiment of the present application, an initial counting time may be set. For example, suppose that from 15:00 on the current day it is necessary to begin paying attention to the sedation level of the target subject and to help in time when the sedation level is poor. In that case, 15:00 is set as the initial counting time. That is, the initial counting time represents the start of the time interval over which the sedation level of the subject is of interest.
Whenever the evaluation result is of the second type, it indicates that the target subject was found to have a poor sedation level during that evaluation. However, a few isolated second-type results may merely reflect a brief awakening after sleep, from which it cannot be determined whether attention is needed. Therefore, in this step, the number of times the sedation level evaluation result is of the second type from the initial counting time to the current time is accumulated, and whether to generate a prompt signal about the sedation level of the target subject is determined according to this accumulated count.
The initial counting time can be set according to actual monitoring and evaluation requirements, and the accumulated count can also be reset to zero at the initial counting time.
In an alternative implementation, this step may be implemented as follows:
generating a prompt signal regarding the level of sedation of the target subject when a ratio of the accumulated number of times to a number of times the level of sedation of the target subject was cumulatively evaluated from the initial count time to the current time exceeds a preset threshold.
In this implementation, the ratio of the accumulated number of second-type results to the total number of sedation level evaluations performed from the initial counting time to the current time is used as the primary basis for generating a prompt signal. A preset threshold is set; a ratio exceeding this threshold indicates that the proportion of evaluations showing a poor sedation level is too large, and medical staff need to be prompted to pay attention to the target subject. The following inequality characterizes this implementation:

$$\frac{\sum_{i=t_0}^{t} d_i}{\sum_{i=t_0}^{t} D_i} > k$$

In the above inequality, $t$ represents the current time and $t_0$ the initial counting time; $\sum_{i=t_0}^{t} d_i$ is the accumulated number of times, from $t_0$ to $t$, that the evaluation result is of the second type, and $\sum_{i=t_0}^{t} D_i$ is the total number of sedation level evaluations performed from $t_0$ to $t$. $k$ represents the preset threshold, which may also be regarded as a hyperparameter; a suitable value of $k$ may be determined through experiments.
The above is only one example implementation of step 504. In other implementations, the determination may instead be made directly from the accumulated count itself; for example, a prompt signal about the sedation level of the target subject is generated when the accumulated number of second-type results from the initial counting time to the current time exceeds a preset value. The specific implementation of step 504 is therefore not limited in the embodiments of the present application.
The automated assessment method of sedation level described above realizes intelligent assessment and real-time monitoring of a person's sedation level. The assessment can be carried out fully automatically, or with only a small amount of medical care effort, making an important contribution to the realization of intelligent intensive care.
On the basis of the method described above, the application also provides a device for automatically evaluating the sedation level. The following is a detailed description of the embodiments.
Fig. 6 is a schematic structural diagram of an apparatus for automatically evaluating a sedation level according to an embodiment of the present application. The automatic evaluation apparatus 600 of the sedation level shown in fig. 6 includes:
an obtaining module 601, configured to obtain image data and optical flow field data of a target object at the same time period;
an evaluation module 602 for obtaining a first sedation level evaluation result for the target subject by processing the image data; and obtaining a second sedation level assessment result for the target subject by processing the optical flow field data;
a fusion module 603 configured to obtain a result of the integrated assessment of the sedation level of the target subject based on the result of the assessment of the first and second sedation levels.
The sedation level is evaluated separately from the image data and the optical flow field data, and a comprehensive evaluation result is then obtained from the levels assessed by image recognition and behavior recognition, so that the evaluation result from either single channel can be corrected. The accuracy of the sedation level evaluation result is thereby improved, making the final result more reliable and timely. In addition, the embodiments of the present application can be realized automatically, relieving the workload of medical staff and improving the timeliness of sedation level assessment.
Optionally, the integrated assessment of sedation level is a first type of assessment result or a second type of assessment result, wherein the first type of assessment result indicates a higher level of sedation than the second type of assessment result;
the automated assessment apparatus 600 of sedation level may further comprise:
and the prompting module is used for judging whether to generate a prompting signal related to the sedation level of the target object according to the accumulated times of the second type of assessment results of the sedation level from the initial counting time to the current time.
Optionally, the prompting module is specifically configured to generate a prompting signal related to the sedation level of the target subject when a ratio of the accumulated number of times to a number of times that the sedation level of the target subject is assessed cumulatively from the initial counting time to the current time exceeds a preset threshold.
Optionally, the automatic assessment apparatus 600 of sedation level may further comprise:
the data set establishing module is used for establishing a data set; the data set comprises a plurality of sets of training samples; the training samples comprise image training data, optical flow field training data and a sedation level label;
the model training module is used for training a first evaluation model through the image training data and the sedation level label in the training sample, and training a second evaluation model through the optical flow field training data and the sedation level label in the training sample; training a fusion coefficient fusing the first evaluation model and the second evaluation model based on the output result of the first evaluation model, the output result of the second evaluation model, and the sedation level label; evaluating the training effects of the first evaluation model and the second evaluation model, and stopping training when the training effect evaluation is qualified to obtain a trained first evaluation model, a trained second evaluation model and a trained fusion coefficient;
wherein the trained first evaluation model is used for processing the image data to obtain a first sedation level evaluation result for the target object, the trained second evaluation model is used for processing the optical flow field data to obtain a second sedation level evaluation result for the target object, and the trained fusion coefficient is used for fusing the first sedation level evaluation result and the second sedation level evaluation result to obtain the integrated sedation level evaluation result.
Optionally, the model training module specifically includes:
the fusion prediction unit is used for fusing the output result of the first evaluation model and the output result of the second evaluation model by adopting a current fusion coefficient to obtain a sedation level fusion prediction result corresponding to the training sample;
the error evaluation unit is used for comparing the sedation level fusion prediction result with the sedation level label in the training sample to obtain cross entropy as a sedation level evaluation error;
a parameter optimization unit for back-propagating the sedation level assessment error to optimize current fusion coefficients and parameters of the first and second assessment models.
Optionally, the obtaining module 601 specifically includes:
a video acquisition unit, configured to acquire video data of the target object in the time period;
and the data separation unit is used for separating the image data and the optical flow field data from the video data.
Based on the automatic assessment method and device for the sedation level provided by the foregoing embodiments, the present application embodiment also provides a computer-readable storage medium. The storage medium has stored thereon a program which, when executed by a processor, performs some or all of the steps of the method for automated assessment of a level of sedation as claimed in the aforementioned method embodiments of the present application.
The storage medium may be a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program codes.
Based on the automatic assessment method, device and storage medium of the sedation level provided by the foregoing embodiments, the embodiments of the present application provide a processor. The processor is configured to run a program, wherein the program is configured to perform some or all of the steps of the automated assessment method of sedation level provided by the foregoing method embodiments.
Based on the storage medium and the processor provided by the foregoing embodiments, the present application also provides an apparatus for automatic assessment of a level of sedation. Referring to fig. 7, there is provided a hardware configuration diagram of the automatic evaluation apparatus for a sedation level according to the present embodiment.
As shown in fig. 7, the automated sedation level assessment apparatus includes: a memory 1401, a processor 1402, a communication bus 1403, and a communication interface 1404.
The memory 1401 stores a program that can be executed on the processor; when the program is executed, part or all of the steps of the method for automatically assessing a sedation level provided by the foregoing method embodiments of the present application are implemented. The memory 1401 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
In this automated assessment of sedation level facility, processor 1402 communicates signaling, logic instructions, etc. with memory 1401 via communications bus 1403. The device can communicatively interact with other devices via a communication interface 1404.
In the embodiment of the present application, the device for automatically evaluating the sedation level may be implemented by a device for locally generating the medical image, or may be implemented by other devices, such as a terminal (a laptop, a desktop, a mobile phone, etc.) for communicating with the device for generating the medical image, or may be implemented by a physical server. In addition, the automatic assessment method and device for the sedation level provided by the embodiment of the application can also be realized on a cloud server.
The above description is only one specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A method for automated assessment of a level of sedation, comprising:
acquiring image data and optical flow field data of a target object at the same time period;
obtaining a first sedation level assessment result for the target subject by processing the image data; and obtaining a second sedation level assessment result for the target subject by processing the optical flow field data;
obtaining a result of the integrated assessment of the level of sedation of the target subject based on the result of the assessment of the first level of sedation and the result of the assessment of the second level of sedation.
2. The method of claim 1, wherein the integrated assessment of sedation level is a first type of assessment result or a second type of assessment result, wherein the first type of assessment result indicates a higher level of sedation than the second type of assessment result;
after the obtaining a result of the integrated assessment of the level of sedation of the target subject, the method further comprises:
and judging whether a prompt signal related to the sedation level of the target object is generated or not according to the accumulated times of the second type of evaluation results of the assessment results of the sedation level from the initial counting time to the current time.
3. The method according to claim 2, wherein the determining whether a prompt signal regarding the sedation level of the target subject is currently generated according to the accumulated number of times that the sedation level assessment result is the second type assessment result from the initial counting time to the current time comprises:
when the ratio of the accumulated number of times to the number of times of accumulated evaluation of the sedation level of the target subject from the initial counting time to the current time exceeds a preset threshold, generating a prompt signal regarding the sedation level of the target subject.
4. The method for automated assessment of sedation level according to claim 1, further comprising:
establishing a data set; the data set comprises a plurality of sets of training samples; the training samples comprise image training data, optical flow field training data and a sedation level label;
training a first evaluation model through image training data and a sedation level label in the training sample, and training a second evaluation model through optical flow field training data and a sedation level label in the training sample;
training a fusion coefficient fusing the first evaluation model and the second evaluation model based on the output result of the first evaluation model, the output result of the second evaluation model, and the sedation level label;
evaluating the training effects of the first evaluation model and the second evaluation model, and stopping training when the training effect evaluation is qualified to obtain a trained first evaluation model, a trained second evaluation model and a trained fusion coefficient;
wherein the trained first evaluation model is used for processing the image data to obtain a first sedation level evaluation result for the target object, the trained second evaluation model is used for processing the optical flow field data to obtain a second sedation level evaluation result for the target object, and the trained fusion coefficient is used for fusing the first sedation level evaluation result and the second sedation level evaluation result to obtain the integrated sedation level evaluation result.
5. The method according to claim 4, wherein the training of the fusion coefficient for fusing the first evaluation model and the second evaluation model based on the output result of the first evaluation model, the output result of the second evaluation model, and the sedation level label comprises:
fusing the output result of the first evaluation model and the output result of the second evaluation model by adopting a current fusion coefficient to obtain a sedation level fusion prediction result corresponding to the training sample;
comparing the sedation level fusion prediction result with a sedation level label in the training sample to obtain cross entropy as a sedation level evaluation error;
back-propagating the sedation level assessment error to optimize current fusion coefficients and parameters of the first and second assessment models.
6. The method for automatically evaluating the sedation level according to claim 1, wherein the acquiring image data and optical flow field data of the target subject at the same time interval specifically comprises:
acquiring video data of the target object in the time period;
separating the image data and the optical flow field data from the video data.
7. An apparatus for automated assessment of sedation level, comprising:
an acquisition module configured to acquire image data and optical flow field data of a target subject in the same time period;
an evaluation module configured to process the image data to obtain a first sedation level assessment result for the target subject, and to process the optical flow field data to obtain a second sedation level assessment result for the target subject; and
a fusion module configured to obtain an integrated sedation level assessment result for the target subject based on the first sedation level assessment result and the second sedation level assessment result.
8. The apparatus according to claim 7, wherein the integrated sedation level assessment result is a first-type assessment result or a second-type assessment result, and the first-type assessment result indicates a higher level of sedation than the second-type assessment result;
the device further comprises:
a prompting module configured to determine whether to generate a prompt signal related to the sedation level of the target subject according to the accumulated number of times the second-type assessment result has been obtained from an initial counting time to the current time.
9. The apparatus according to claim 8, wherein the prompting module is specifically configured to generate the prompt signal related to the sedation level of the target subject when the ratio of the accumulated number of times to the total number of times the sedation level of the target subject has been assessed from the initial counting time to the current time exceeds a preset threshold.
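A minimal sketch of the ratio-based prompting rule in claims 8 and 9: from an initial counting time, count every integrated assessment and every second-type (lighter-sedation) result, and fire a prompt when their ratio exceeds a preset threshold. The class name and the default threshold value are invented for illustration; the patent leaves the threshold unspecified.

```python
class SedationPromptCounter:
    """Tracks integrated assessment results from an initial counting time."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold  # preset ratio threshold (assumed value)
        self.total = 0              # assessments made so far
        self.second_type = 0        # second-type (lighter-sedation) results

    def record(self, is_second_type):
        """Record one integrated assessment; return True if a prompt fires."""
        self.total += 1
        if is_second_type:
            self.second_type += 1
        return self.second_type / self.total > self.threshold

counter = SedationPromptCounter(threshold=0.5)
signals = [counter.record(r) for r in (False, True, True)]
# ratios after each assessment: 0/1, 1/2, 2/3 -> prompt fires on the third
```

Using a ratio rather than a raw count, as claim 9 does, keeps the rule stable regardless of how frequently assessments are made within the counting window.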
10. The apparatus for automated assessment of sedation level according to claim 7, further comprising:
a data set establishment module configured to establish a data set, wherein the data set comprises a plurality of groups of training samples, and each training sample comprises image training data, optical flow field training data, and a sedation level label; and
a model training module configured to train a first evaluation model using the image training data and the sedation level labels in the training samples, and train a second evaluation model using the optical flow field training data and the sedation level labels in the training samples; train a fusion coefficient for fusing the first evaluation model and the second evaluation model based on the output result of the first evaluation model, the output result of the second evaluation model, and the sedation level labels; and evaluate the training effects of the first evaluation model and the second evaluation model, stopping training when the evaluation indicates a qualified training effect, to obtain a trained first evaluation model, a trained second evaluation model, and a trained fusion coefficient;
wherein the trained first evaluation model is used for processing the image data to obtain a first sedation level evaluation result for the target object, the trained second evaluation model is used for processing the optical flow field data to obtain a second sedation level evaluation result for the target object, and the trained fusion coefficient is used for fusing the first sedation level evaluation result and the second sedation level evaluation result to obtain the integrated sedation level evaluation result.
11. The apparatus for automated assessment of sedation level according to claim 10, wherein the model training module comprises:
a fusion prediction unit configured to fuse the output result of the first evaluation model and the output result of the second evaluation model using a current fusion coefficient to obtain a sedation level fusion prediction result corresponding to the training sample;
an error evaluation unit configured to compute the cross entropy between the sedation level fusion prediction result and the sedation level label in the training sample as a sedation level assessment error; and
a parameter optimization unit configured to back-propagate the sedation level assessment error to optimize the current fusion coefficient and the parameters of the first and second evaluation models.
12. The apparatus for automated assessment of sedation level according to claim 7, wherein the acquisition module comprises:
a video acquisition unit configured to acquire video data of the target subject in the time period; and
a data separation unit configured to separate the image data and the optical flow field data from the video data.
13. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
14. An electronic device, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 6.
CN202210720035.7A 2022-06-23 2022-06-23 Automatic assessment method of sedation level and related product Pending CN115192003A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210720035.7A CN115192003A (en) 2022-06-23 2022-06-23 Automatic assessment method of sedation level and related product


Publications (1)

Publication Number Publication Date
CN115192003A true CN115192003A (en) 2022-10-18

Family

ID=83578764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210720035.7A Pending CN115192003A (en) 2022-06-23 2022-06-23 Automatic assessment method of sedation level and related product

Country Status (1)

Country Link
CN (1) CN115192003A (en)

Similar Documents

Publication Publication Date Title
US11170545B2 (en) Systems and methods for diagnostic oriented image quality assessment
US20220414464A1 (en) Method and server for federated machine learning
WO2020010668A1 (en) Human body health assessment method and system based on sleep big data
KR20200005987A (en) System and method for diagnosing cognitive impairment using touch input
WO2021068781A1 (en) Fatigue state identification method, apparatus and device
CN109949280B (en) Image processing method, image processing apparatus, device storage medium, and growth evaluation system
CN110706786B (en) Non-contact intelligent psychological parameter analysis and evaluation system
WO2019137538A1 (en) Emotion representative image to derive health rating
CN110348385B (en) Living body face recognition method and device
KR102174345B1 (en) Method and Apparatus for Measuring Degree of Immersion
Erekat et al. Enforcing multilabel consistency for automatic spatio-temporal assessment of shoulder pain intensity
Petersen et al. The path toward equal performance in medical machine learning
Endo et al. A convolutional neural network for estimating synaptic connectivity from spike trains
Sim et al. Improving the accuracy of erroneous-plan recognition system for Activities of Daily Living
CN117338234A (en) Diopter and vision joint detection method
CN112528890A (en) Attention assessment method and device and electronic equipment
CN115192003A (en) Automatic assessment method of sedation level and related product
Wang et al. Causality analysis of fMRI data based on the directed information theory framework
Yin Prediction algorithm of young Students’ Physical health risk factors based on deep learning
Fraza et al. The Extremes of Normative Modelling
Sundharamurthy et al. Cloud‐based onboard prediction and diagnosis of diabetic retinopathy
Wong et al. Artificial intelligence analysis of videos to augment clinical assessment: an overview
CN116091963B (en) Quality evaluation method and device for clinical test institution, electronic equipment and storage medium
CN113421643B (en) AI model reliability judging method, device, equipment and storage medium
Ye et al. Confidence contours: Uncertainty-aware annotation for medical semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination