WO2024092955A1 - Medical training assessment and evaluation method, apparatus, electronic device and storage medium - Google Patents
Medical training assessment and evaluation method, apparatus, electronic device and storage medium
- Publication number
- WO2024092955A1 (PCT application PCT/CN2022/137057)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- assessment
- evaluation
- subject
- data
- virtual scene
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
  - G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    - G06Q10/00—Administration; Management
      - G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
        - G06Q10/063—Operations research, analysis or management
          - G06Q10/0639—Performance analysis of employees; Performance analysis of enterprise or organisation operations
    - G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
      - G06Q50/10—Services
        - G06Q50/20—Education
  - G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    - G06T19/00—Manipulating 3D models or images for computer graphics
  - G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    - G06V20/00—Scenes; Scene-specific elements
      - G06V20/40—Scenes; Scene-specific elements in video content
    - G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
      - G06V40/20—Movements or behaviour, e.g. gesture recognition
Definitions
- the present application relates to the field of computer technology, and in particular to a medical training assessment and evaluation method, apparatus, electronic device and storage medium.
- the standard medical operation process covers whether the surgical instruments are selected correctly, whether the operation position is accurate, whether the order of the operation steps is reasonable, and so on. Different operations have different execution times, priorities, and execution orders: an incorrect operation produces different error results at different times, and executing operations in the wrong order may likewise produce error results.
- medical training assessment is particularly important.
- medical training assessment and evaluation relies mainly on manual assessment, which is not only inefficient but also introduces subjectivity that affects the accuracy of the evaluation.
- the assessment content of medical training assessment and evaluation is too limited, and intelligent assessment solutions that achieve a comprehensive and objective evaluation of the assessment subjects are still lacking.
- the present application provides a medical training assessment and evaluation method, device, electronic device and storage medium, which can solve the problem that the medical training assessment and evaluation process in the related technology cannot achieve a comprehensive and objective evaluation of the assessment object.
- a medical training assessment evaluation method includes: displaying a virtual scene constructed for medical training assessment; obtaining first evaluation data in response to a trigger operation of an assessment subject in the virtual scene; the trigger operation includes a simulated operation of a surgical instrument performed by the assessment subject in the virtual scene and a reply operation performed by the assessment subject in the virtual scene with respect to the assessment content; for the simulated operation process of the surgical instrument performed by the assessment subject in the virtual scene, inputting a corresponding operation video acquired by an image acquisition device and corresponding sensor data acquired by a mixed reality device into a visual evaluation network model, evaluating key actions of the assessment subject in the real scene, and obtaining second evaluation data; and comprehensively evaluating the medical training assessment of the assessment subject based on the first evaluation data and the second evaluation data, outputting a comprehensive evaluation result of the assessment subject.
- a medical training assessment and evaluation device includes: a virtual scene display module, which is used to display a virtual scene constructed for medical training assessment; a virtual scene evaluation module, which is used to obtain first evaluation data in response to a trigger operation of an assessment subject in the virtual scene; the trigger operation includes a simulated operation of a surgical instrument performed by the assessment subject in the virtual scene and a reply operation performed by the assessment subject in the virtual scene with respect to the assessment content; a computer vision evaluation module, which is used to input a corresponding operation video collected by an image acquisition device and corresponding sensor data collected by a mixed reality device into a visual evaluation network model for the simulated operation process of the surgical instrument performed by the assessment subject in the virtual scene, evaluate the key actions of the assessment subject in the real scene, and obtain second evaluation data; a comprehensive evaluation module, which is used to comprehensively evaluate the medical training assessment of the assessment subject based on the first evaluation data and the second evaluation data, and output a comprehensive evaluation result of the assessment subject.
- an electronic device includes: at least one processor, at least one memory, and at least one communication bus, wherein a computer program is stored in the memory, and the processor reads the computer program in the memory through the communication bus; when the computer program is executed by the processor, the medical training assessment and evaluation method as described above is implemented.
- a storage medium stores a computer program, which implements the medical training assessment and evaluation method as described above when executed by a processor.
- a computer program product includes a computer program, the computer program is stored in a storage medium, a processor of a computer device reads the computer program from the storage medium, and the processor executes the computer program, so that the medical training assessment and evaluation method as described above is implemented when the computer device executes the computer program.
- In response to the trigger operation of the assessment subject in the virtual scene, the first evaluation data can be obtained, and the second evaluation data can be obtained based on the corresponding operation video collected by the image acquisition device and the corresponding sensor data collected by the mixed reality device. Then, based on the first and second evaluation data, a comprehensive evaluation of the assessment subject's medical training assessment is performed, and a comprehensive evaluation result of the assessment subject is output.
- the above evaluation not only starts from virtual simulation, evaluating both the simulated operation of surgical instruments and the response operation to the assessment content performed by the assessment subject in the virtual scene, but also uses mixed reality technology to evaluate the operation effects corresponding to the assessment subject's key actions in the real scene. This realizes a new multimodal intelligent assessment scheme, avoids the low efficiency and low accuracy of manual assessment, and effectively solves the problem that the medical training assessment and evaluation process in the related art cannot achieve a comprehensive and objective evaluation of the assessment subject.
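As a hedged illustration, the comprehensive evaluation combining the first and second evaluation data could be a weighted combination; the weights and score names below are assumptions, since the application does not fix a specific combination rule:

```python
# Schematic combination of first evaluation data (virtual-scene scores) with
# second evaluation data (visual model score). Weights are illustrative
# assumptions, not specified by the application.

def comprehensive_score(first, second_score, weights=None):
    weights = weights or {
        "execution": 0.25, "duration": 0.15, "content": 0.20, "visual": 0.40,
    }
    return round(
        weights["execution"] * first["execution"]
        + weights["duration"] * first["duration"]
        + weights["content"] * first["content"]
        + weights["visual"] * second_score,
        1,
    )

first = {"execution": 100, "duration": 90, "content": 80}
print(comprehensive_score(first, second_score=85))  # 88.5
```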
- FIG1 is a schematic diagram of an implementation environment involved in the present application;
- FIG2 is a flow chart of a medical training assessment and evaluation method according to an exemplary embodiment;
- FIG3 is a flow chart of step 330 in the embodiment corresponding to FIG2, in one embodiment;
- FIG4 is a flow chart of step 330 in the embodiment corresponding to FIG2, in another embodiment;
- FIG5 is a flow chart of step 330 in the embodiment corresponding to FIG2, in yet another embodiment;
- FIG6 is a flow chart of a training process of a visual evaluation network model according to an exemplary embodiment;
- FIG7 is a schematic diagram of a learning effect comparison analysis curve in a comprehensive evaluation result according to an exemplary embodiment;
- FIG8 is a flow chart of step 350 in the embodiment corresponding to FIG2, in one embodiment;
- FIG9 is a schematic diagram of a specific implementation of a medical training assessment and evaluation method in an application scenario;
- FIG10 is a structural block diagram of a medical training assessment and evaluation device according to an exemplary embodiment;
- FIG11 is a hardware structure diagram of an electronic device according to an exemplary embodiment;
- FIG12 is a structural block diagram of an electronic device according to an exemplary embodiment.
- the existing medical training assessment is not only too limited in content, but also unable to evaluate the operational effectiveness of medical surgical operations (such as the standardization, logic, proficiency, sequence of medical surgical operations, and the rationality of dealing with complex situations such as emergency treatment, etc.), and it mainly relies on manual implementation, which is inefficient and too subjective.
- virtual reality and mixed reality technologies have very significant advantages: they enable low-cost, repeatable, objective, and quantitative digital assessments, allowing assessment subjects to learn and grow in a repeatable assessment environment, thus effectively alleviating the over-reliance of assessments on manual implementation.
- the medical training assessment and evaluation method can effectively improve the accuracy of medical training assessment and evaluation.
- the medical training assessment and evaluation method is applicable to a medical training assessment and evaluation device, and the medical training assessment and evaluation device can be deployed in an electronic device.
- the electronic device can be a computer device configured with a von Neumann architecture, and the computer device includes but is not limited to desktop computers, laptops, servers, etc.
- FIG1 is a schematic diagram of an implementation environment involved in a medical training assessment method, wherein the implementation environment includes an assessment device 110 , an image acquisition device 130 , and a mixed reality device 150 .
- the evaluation device 110 can run a client having a medical training assessment and evaluation function, and can be an electronic device such as a desktop computer, laptop computer, or server, which is not limited here.
- the client is used to conduct medical training assessment on the assessment object, and can be in the form of an application program or a web page. Accordingly, the user interface of the client for conducting medical training assessment on the assessment object can be in the form of a program window or a web page, which is not limited here.
- the image acquisition device 130 may be an electronic device with an image acquisition function, for example, the electronic device may be a camera, a video camera, or a smart phone with a camera, etc., which is not limited here.
- When the image acquisition device 130 is deployed in the space where the medical training assessment is conducted, it can acquire images of the assessment subject accordingly; for example, the image may be the operation video captured by the image acquisition device 130 during the assessment subject's simulated operation of a surgical instrument in the virtual scene.
- Similarly, through the mixed reality device 150, sensor data about the assessment subject can be collected accordingly.
- the sensor data can be the corresponding sensor data collected by the mixed reality device 150 during the assessment subject's simulated operation of surgical instruments in the virtual scene.
- the mixed reality device 150 can be a posture sensor, smart glasses, a smart helmet, etc., which is not limited here.
- the evaluation device 110 establishes communication connections with the image acquisition device 130 and the mixed reality device 150 respectively in advance through wired or wireless means, so as to realize data transmission between each other through the communication connection.
- the transmitted data includes but is not limited to operation videos, sensor data, etc.
- the evaluation device 110 can receive the corresponding operation video and the corresponding sensor data and input them into the visual evaluation network model, thereby obtaining the second evaluation data by evaluating the key actions of the assessment subject in the real scene.
- In response to the trigger operation of the assessment subject in the virtual scene, the first evaluation data is obtained; then, based on the first evaluation data and the second evaluation data, a comprehensive evaluation of the assessment subject's medical training assessment is conducted, and a comprehensive evaluation result of the assessment subject is output, thereby achieving a comprehensive and objective evaluation of the assessment subject.
- image acquisition devices and mixed reality devices can also be integrated into the same electronic device.
- For example, a smart helmet equipped with a camera can integrate the image acquisition device, the mixed reality device, and the evaluation device into the same electronic device, so that the medical training assessment and evaluation method can be completed independently by that electronic device. This does not constitute a specific limitation.
- An embodiment of the present application provides a medical training assessment method.
- the method is applicable to an electronic device, which may be the assessment device 110 in the implementation environment shown in FIG. 1 .
- the method may include the following steps:
- Step 310 displaying a virtual scene constructed for medical training assessment.
- the virtual scene is a digital scene constructed for medical training and assessment using computer technology. It can be presented by simulating the environment required for medical training and assessment (such as a medical operating laboratory), and then the medical training and assessment of the assessment subjects can be carried out through virtual simulation.
- the evaluation device is an electronic device that runs a client having a medical training assessment function. When the assessment subject wishes to participate in the medical training assessment, the client can be started to enter the corresponding screen, which constitutes the display of the virtual scene.
- the displayed virtual scene can be a simulated medical surgery laboratory.
- the real scene refers to the physical space in which the assessment subject participates in the medical training assessment.
- For example, the real scene can be a computer room, and the desktop computers deployed in the computer room can give the assessment subject a sense of immersion in the simulated medical surgery laboratory through the visual, auditory, and tactile presentation of the virtual scene.
- Step 330 obtaining first evaluation data in response to a triggering operation of the assessment subject in the virtual scene.
- the triggering operation includes, but is not limited to: the simulated operation of the surgical instrument by the assessment subject in the virtual scene, and the response operation of the assessment subject to the assessment content in the virtual scene.
- the first evaluation data includes at least one of the following: an execution score of the assessment subject's operation execution order during the corresponding simulation operation process, a duration score of the assessment subject for the corresponding simulation operation process, and a score of the assessment subject for the assessment content.
- the specific behavior of the trigger operation may also be different depending on the input components configured in the electronic device (such as the touch layer covered on the display screen, the mouse, the keyboard, etc.).
- the trigger operation may be a gesture operation such as clicking or sliding; while for the electronic device being a desktop computer configured with a mouse, the trigger operation may be a mechanical operation such as dragging, single-clicking, double-clicking, etc., which is not limited in this embodiment.
- step 330 may include the following steps: step 331, in response to the simulated operation of the assessment subject in the virtual scene, obtaining the operation execution data of the assessment subject; wherein the operation execution data is used to indicate the order in which the assessment subject performs operations during the corresponding simulated operation process; step 332, comparing the order indicated by the operation execution data with the standard execution order of the simulated operation process, obtaining the execution score of the operation execution order of the assessment subject during the corresponding simulated operation process, and adding it to the first evaluation data.
- the standard execution order of the simulation operation process is pre-stored in the electronic device.
- For example, a script file stores each simulated operation in the simulation operation process, and each simulated operation has a unique preceding operation and a unique subsequent operation (the first operation has no predecessor and the last has no successor). Therefore, the standard execution order of the simulation operation process can be obtained by traversing the sequence of simulated operations in the script file.
- For example, if the order indicated by the operation execution data is completely consistent with the standard execution order, the assessment subject's execution score for the operation execution order during the corresponding simulated operation process is 100 points.
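The execution-order check in steps 331 and 332 can be sketched as follows. The script format, operation names, and the partial-credit rule for mismatched orders are illustrative assumptions; the text only states that the recorded order is compared with the standard order and that a fully correct order scores 100 points:

```python
# Sketch of steps 331-332: recover the standard order from a script file
# and compare the assessment subject's execution order against it.

def standard_order(script):
    """Follow each simulated operation's unique successor, starting from
    the operation that has no predecessor."""
    successors = {op["name"]: op.get("next") for op in script}
    has_pred = set(filter(None, successors.values()))
    current = next(name for name in successors if name not in has_pred)
    order = []
    while current is not None:
        order.append(current)
        current = successors[current]
    return order

def execution_score(executed, standard):
    """100 points for an exact match; otherwise credit each position that
    agrees with the standard order (assumed partial-credit rule)."""
    if executed == standard:
        return 100
    matches = sum(e == s for e, s in zip(executed, standard))
    return round(100 * matches / len(standard))

# Hypothetical script file content for illustration only.
script = [
    {"name": "disinfect", "next": "anesthetize"},
    {"name": "anesthetize", "next": "incise"},
    {"name": "incise", "next": None},
]
std = standard_order(script)
print(execution_score(std, std))  # 100
print(execution_score(["anesthetize", "disinfect", "incise"], std))  # 33
```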
- step 330 may include the following steps: step 334, in response to the simulated operation of the assessment subject in the virtual scene, determining the duration of the assessment subject's simulated operation process; step 335, according to the difference between the duration of the assessment subject's simulated operation process and a set threshold, obtaining the assessment subject's duration score for the corresponding simulated operation process, and adding it to the first evaluation data.
- the duration of the assessment subject's simulated operation process can be measured with a timer or the like. Specifically, when the assessment subject starts the simulated operation process, the timer is started; when the assessment subject finishes the simulated operation process, the timer is stopped. The value of the timer is then taken as the duration of the assessment subject's simulated operation process.
- the corresponding scoring rules can be set based on the difference between the duration of the assessment subject's simulated operation process and 10 minutes: if the duration is within 10 minutes (i.e., the difference is non-positive), the duration score is 100 points; if the difference is in (0, 2) minutes, the duration score is 90 points; if the difference is in [2, 5] minutes, the duration score is 80 points; if the difference is in (5, 8) minutes, the duration score is 70 points; if the difference is in [8, 10] minutes, the duration score is 60 points; otherwise, the duration score is failing (i.e., below 60 points).
- For example, if the duration of assessment subject A's simulated operation process exceeds 10 minutes by less than 2 minutes, then A's duration score for the corresponding simulated operation process is 90 points; if the duration of assessment subject B's simulated operation process is 16 minutes, then B's duration score for the corresponding simulated operation process is 70 points.
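The tiered scoring rule above can be sketched directly. The 10-minute threshold and tier boundaries follow the text; the exact numeric value returned for a failing score is an assumption (the text only says "below 60 points"):

```python
# Sketch of the duration scoring tiers from steps 334-335.
# Intervals follow the text: diff <= 0 -> 100, (0,2) -> 90, [2,5] -> 80,
# (5,8) -> 70, [8,10] -> 60, otherwise failing.

def duration_score(duration_min, threshold_min=10):
    diff = duration_min - threshold_min
    if diff <= 0:
        return 100
    if diff < 2:
        return 90
    if diff <= 5:
        return 80
    if diff < 8:
        return 70
    if diff <= 10:
        return 60
    return 50  # failing (below 60); the exact value is an assumption

print(duration_score(9))   # 100
print(duration_score(11))  # 90
print(duration_score(16))  # 70
```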
- step 330 may include the following steps: step 337, in response to the reply operation of the assessment subject in the virtual scene, determining the assessment subject's reply data for the assessment content; step 338, comparing the reply data with the standard reply to the assessment content, obtaining the assessment subject's score for the assessment content, and adding it to the first evaluation data.
- the assessment content can be flexibly set according to the actual needs of the application scenario.
- the assessment content can be set as a test question in the form of multiple-choice questions or true-or-false questions, which is not limited here.
- the standard responses to the assessment content may also be different.
- the assessment content is a test question
- the standard response refers to the correct answer to the test question.
- the response data indicates the answer of the assessment subject to the test question. By comparing it with the correct answer pre-stored in the electronic device, the score of the assessment subject for the assessment content can be determined.
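Steps 337 and 338 amount to comparing the assessment subject's replies against the correct answers pre-stored in the electronic device. The question identifiers and equal per-question weighting below are assumptions:

```python
# Sketch of steps 337-338: score the assessment content by comparing
# reply data with the pre-stored standard answers.

def content_score(replies, standard_answers):
    correct = sum(
        replies.get(q) == ans for q, ans in standard_answers.items()
    )
    return round(100 * correct / len(standard_answers))

# Hypothetical multiple-choice and true-or-false questions.
standard = {"q1": "B", "q2": True, "q3": "D"}
replies = {"q1": "B", "q2": False, "q3": "D"}
print(content_score(replies, standard))  # 67
```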
- Step 350 with respect to the simulated operation process of the surgical instrument performed by the assessment subject in the virtual scene, the corresponding operation video acquired by the image acquisition device and the corresponding sensor data acquired by the mixed reality device are input into the visual evaluation network model to evaluate the key actions of the assessment subject in the real scene and obtain the second evaluation data.
- the operation video includes multiple frames, each of which describes a key action of the assessment subject in the real scene while the assessment subject simulates the operation of the surgical instrument in the virtual scene. It can be understood that each frame corresponds to one key action; different frames may correspond to different key actions or to the same one. In other words, a continuous key action may span multiple frames.
- the operation video is collected based on an image acquisition device and sent to an electronic device.
- the image acquisition device can be an electronic device with an image acquisition function deployed in a real scene, or it can be an electronic device worn on the assessment subject and equipped with an image acquisition function component.
- the key actions of the assessment subject in the real scene captured in the operation video are conducive to evaluating the operation effects corresponding to those key actions.
- the sensor data is collected by the mixed reality device and sent to the electronic device.
- The sensor data can be posture data describing the position and posture of the assessment subject when performing key actions in the real scene, or electrocardiogram data describing the psychological state of the assessment subject when performing key actions in the real scene.
- each frame corresponds to one sensor reading; it can also be understood that the assessment subject's psychological state may differ across the key actions performed in the real scene.
- the mixed reality device can be a posture sensor deployed in the real scene, or it can be an electronic device with mixed reality function worn on the assessment subject himself, such as a smart helmet.
- the psychological state of the assessment subject when performing key actions in the real scene, as described by the sensor data, can accurately reflect changes in the subject's mentality when handling complex situations such as emergency processing, and thus assists in evaluating the operation effects of the assessment subject's key actions in the real scene, further improving the accuracy of the comprehensive and objective evaluation of the assessment subject.
- In this way, the key actions of the assessment subject in the real scene can be evaluated.
- the evaluation of the key actions of the assessment subject in the real scene is implemented based on the visual evaluation network model.
- the second evaluation data is used to indicate the operation effects corresponding to the key actions of the assessment subject in the real scene.
- the operation effects can refer to the standardization, logic, proficiency, and sequence of the key actions, as well as the rationality of handling complex situations such as emergency processing.
- the visual evaluation network model is a machine learning model trained to evaluate the key actions of the assessment subject in real scenes.
- the machine learning model can be a convolutional neural network model, etc., which is not limited here.
- the training process of the visual evaluation network model may include the following steps:
- Step 410 construct a training set based on the simulated operation process of the training subject on the surgical instrument in the virtual scene.
- the training set includes labelled training samples, where the labels indicate the evaluation type of the training subjects' key actions in real scenes.
- the training subjects are essentially assessment subjects who participate in medical training assessment in order to train the visual evaluation network model.
- assessment subjects of different levels, such as senior clinicians, doctors with three years of clinical experience, ordinary teaching doctors, skilled students, and beginners with no operation experience, can be selected as training subjects to enrich the evaluation types of the training subjects' key actions in real scenes.
- the evaluation type may include excellent, good, medium, pass, fail, etc.
- the evaluation type may also be represented by different scores (0-100), which is not a specific limitation here.
- different evaluation types reflect that the operation effects corresponding to the key actions of different assessment subjects in real scenes differ.
- For example, the evaluation type "excellent" reflects that when a senior clinician is the assessment subject, the operation effect corresponding to the doctor's key actions in the real scene is the best;
- the evaluation type "fail" reflects that when a beginner with no operation experience is the assessment subject, the operation effect corresponding to the student's key actions in the real scene is the worst.
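One possible mapping between the graded evaluation types and the 0-100 score representation mentioned above could look like this; the band boundaries are assumptions, since the text leaves the score representation open:

```python
# Hypothetical mapping between evaluation types and score bands (0-100).

def type_to_band(evaluation_type):
    bands = {
        "excellent": (90, 100),
        "good": (80, 89),
        "medium": (70, 79),
        "pass": (60, 69),
        "fail": (0, 59),
    }
    return bands[evaluation_type]

print(type_to_band("good"))  # (80, 89)
```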
- the training samples essentially include the corresponding operation videos and sensor data collected by the image acquisition device and the mixed reality device respectively when the training subjects simulate the operation of surgical instruments in the virtual scene.
- Step 430 input the training sample into the machine learning model, predict the evaluation type of the training subject's key actions in the real scene, and obtain the prediction data of the training sample.
- the prediction data is used to indicate the predicted evaluation type of the training subject's key actions in the real scene.
- Step 450: calculate a loss value according to the difference between the evaluation type indicated by the label and the predicted evaluation type.
- the calculation of the loss value may be implemented using algorithms such as loss functions.
- the loss function includes but is not limited to: a cosine loss function, a cross entropy function, an intra-class distribution function, an inter-class distribution function, and an activation classification function.
- if the loss value does not meet the model convergence condition, step 470 is executed;
- if the loss value meets the model convergence condition, step 490 is executed.
- model convergence condition can be flexibly adjusted according to the actual needs of the application scenario.
- the model convergence condition can refer to the loss value reaching the minimum, which improves the accuracy of the model; the model convergence condition can also refer to the number of iterations exceeding a set threshold, which improves the efficiency of model training. This is not limited here.
- Step 470 update the parameters of the machine learning model and continue training.
- another training sample can be obtained from the training set and input into the machine learning model to continue to predict the evaluation type of the key actions of the training object in the real scene, and obtain the prediction data of the other training sample, that is, return to execute step 430 and execute step 450.
- This cycle is repeated until the loss value meets the model convergence condition, completing the training process of the visual evaluation network model.
- Step 490: obtain a visual evaluation network model.
- a visual evaluation network model with the ability to evaluate the key actions of the assessment object in the real scene is obtained. Then, by calling the visual evaluation network model, the key actions of the assessment object in the real scene can be evaluated to obtain the second evaluation data.
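The loop over steps 430 to 490 is a standard supervised training cycle. As an illustrative sketch only (the actual visual evaluation network model and loss function are not specified beyond the options listed above), the following minimal example trains a tiny one-feature logistic classifier with cross-entropy loss using the same structure: predict (step 430), compute the loss (step 450), check a convergence condition, update parameters (step 470), and stop when converged (step 490). The feature, data, and two-class labels are hypothetical stand-ins for the five evaluation types.

```python
import math

def train_evaluation_model(samples, labels, lr=0.5, max_iters=500, tol=1e-3):
    """Sketch of steps 430-490: predict, compute loss, check convergence,
    update parameters, repeat. A one-feature logistic classifier stands in
    for the visual evaluation network model."""
    w, b = 0.0, 0.0
    prev_loss = float("inf")
    loss = prev_loss
    for _ in range(max_iters):
        # Step 430: predict the evaluation type for each training sample.
        preds = [1.0 / (1.0 + math.exp(-(w * x + b))) for x in samples]
        # Step 450: cross-entropy loss between labels and predictions.
        loss = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                    for y, p in zip(labels, preds)) / len(samples)
        # Convergence condition: loss change below tol, or iteration cap.
        if abs(prev_loss - loss) < tol:
            break  # Step 490: model obtained.
        prev_loss = loss
        # Step 470: update parameters and continue training.
        gw = sum((p - y) * x for p, y, x in zip(preds, labels, samples)) / len(samples)
        gb = sum(p - y for p, y in zip(preds, labels)) / len(samples)
        w -= lr * gw
        b -= lr * gb
    return w, b, loss

# Synthetic samples: the feature could be, e.g., a motion-smoothness measure;
# label 1 = "pass", 0 = "fail" (hypothetical stand-ins for the evaluation types).
xs = [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]
ys = [0, 0, 0, 1, 1, 1]
w, b, final_loss = train_evaluation_model(xs, ys)
```

In the real scheme the samples would be operation videos plus sensor data and the model a multi-class network; only the loop structure carries over.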
- Step 370 Comprehensively evaluate the medical training assessment of the assessment subject based on the first evaluation data and the second evaluation data, and output the comprehensive evaluation result of the assessment subject.
- the first evaluation data can be at least one data sub-item of the execution score of the assessment subject's operation execution sequence during the corresponding simulated operation process, the assessment subject's duration score for the corresponding simulated operation process, and the assessment subject's score for the assessment content.
- the second evaluation data can be used to describe the operational effects corresponding to the key actions of the assessment subject in the real scene.
- the comprehensive evaluation includes calculating the comprehensive score of the assessment object based on each data sub-item of the first evaluation data and the second evaluation data and the corresponding weight.
- the comprehensive score = Σ (each data sub-item × its corresponding weight).
- for example, the comprehensive score = execution score × execution weight + duration score × duration weight + assessment-content score × score weight + evaluation type (expressed as a score) × type weight. It is worth mentioning that the sum of the weights corresponding to the data sub-items is equal to 1.
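The weighted-sum rule above can be sketched directly. The sub-item names, values, and weights below are hypothetical; the only constraints taken from the text are that the comprehensive score is the sum of each data sub-item times its weight, and that the weights sum to 1.

```python
def comprehensive_score(sub_items):
    """Weighted sum of data sub-items: score = sum(value * weight).
    `sub_items` maps a name -> (value, weight); the weights must sum to 1."""
    total_weight = sum(weight for _, weight in sub_items.values())
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 1"
    return sum(value * weight for value, weight in sub_items.values())

# Hypothetical sub-item values and weights, for illustration only.
score = comprehensive_score({
    "execution": (90, 0.4),   # execution score x execution weight
    "duration":  (80, 0.2),   # duration score x duration weight
    "content":   (100, 0.2),  # assessment-content score x score weight
    "type":      (85, 0.2),   # evaluation type expressed as a score
})
# score = 90*0.4 + 80*0.2 + 100*0.2 + 85*0.2 = 89.0
```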
- the corresponding comprehensive evaluation result can be output to the assessment subject.
- the comprehensive evaluation result includes at least one of the following: a comprehensive score of the assessment subject, an execution score of the assessment subject's operation execution order during the corresponding simulation operation process, a duration score of the assessment subject for the corresponding simulation operation process, the assessment subject's score for the assessment content, and an evaluation type of the assessment subject's key actions in real-life scenarios.
- the comprehensive evaluation result also includes the position information of key points, wherein the key points are used to indicate the key actions of the assessment subject in the real scene, and the position information of the key points is used to indicate the position of the key points in the real scene.
- in this way, the assessment subject can promptly learn whether their key actions during the simulated operation of the surgical instruments in the virtual scene are standardized, which is conducive to the assessment subject's learning and growth in a repeatable assessment environment.
- the comprehensive evaluation result also includes a learning effect comparison analysis curve, so that the assessment object can timely understand whether it has made progress in different batches of medical training assessments, etc., which is further conducive to the assessment object's learning and growth in a repeatable assessment environment.
- Figure 7 shows a schematic diagram of a learning effect comparison analysis curve in the comprehensive evaluation result.
- the learning effect comparison analysis curves include a curve 701 for the assessment subject's comprehensive score and a curve 702 for the duration of the corresponding simulation operation process. The horizontal coordinates of curves 701 and 702 are both the times of different batches of medical training assessments; the vertical coordinate of curve 701 is the score, and the vertical coordinate of curve 702 is the duration of the corresponding simulation operation process. It can be seen that the assessment subject's scores gradually improve across different batches of medical training assessments, while the duration of the corresponding simulation operation process becomes shorter and shorter.
- the way of outputting the corresponding comprehensive evaluation results to the assessment object may also be different.
- the comprehensive score of the assessment object is broadcast to the assessment object.
- the comprehensive evaluation results of the assessment object are displayed to the assessment object, and this embodiment does not limit this.
- a new type of multimodal intelligent assessment scheme is thereby realized: starting from virtual simulation, it evaluates the simulated operation of surgical instruments performed by the assessment subject in the virtual scene and the reply operation completed by the assessment subject in the virtual scene, and it also uses mixed reality technology to evaluate the operation effects corresponding to the key actions of the assessment subject in the real scene. The scheme is independent of manual assessment, which improves the efficiency and accuracy of the evaluation, and it fully considers the evaluation of the operation effects of medical surgical operations, finally completing a comprehensive and objective evaluation of the assessment subject.
- step 350 may include the following steps:
- Step 351 calling the visual evaluation network model, and identifying the key points of the assessment subject's simulated operation of the surgical instrument in the virtual scene according to each frame in the operation video and the posture data corresponding to each frame.
- the posture data is used to describe the position and posture of the assessment object when performing key actions in the real scene.
- the key points are used to indicate the key actions of the assessment object in the real scene.
- the key actions of the assessment subject in the real scene are determined by key point identification. It can be understood that if the key points identified from different frames are not exactly the same, the key actions of the assessment subject in the real scene will differ.
- at least 14 key points are identified from each frame: the head, neck, left shoulder, left elbow, left hand, left hip, left knee, left ankle, right shoulder, right elbow, right hand, right hip, right knee, and right ankle key points. It should be noted that, because the posture data corresponding to each frame is introduced, the key points identified from a frame reflect the position and posture of the assessment subject when performing the key action in the real scene, rather than merely the assessment subject's position within the frame.
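As a sketch of how the 14 key points enriched with posture data might be represented, the following structure pairs each key point with a real-scene position and an orientation. The field names and the (x, y, z)/orientation format are assumptions, not from the source.

```python
from dataclasses import dataclass

# The 14 body key points listed above.
KEY_POINT_NAMES = [
    "head", "neck",
    "left_shoulder", "left_elbow", "left_hand",
    "left_hip", "left_knee", "left_ankle",
    "right_shoulder", "right_elbow", "right_hand",
    "right_hip", "right_knee", "right_ankle",
]

@dataclass
class KeyPoint:
    """A key point enriched with posture data, so that it reflects the
    assessment subject's position and posture in the real scene rather
    than only a pixel location in the frame."""
    name: str
    position: tuple     # (x, y, z) in real-scene coordinates (assumed)
    orientation: tuple  # e.g. (roll, pitch, yaw) from the posture data

def frame_to_keypoints(positions, orientations):
    """Combine per-frame detections with posture data (hypothetical input
    format: one entry per key point, in KEY_POINT_NAMES order)."""
    return [KeyPoint(n, p, o)
            for n, p, o in zip(KEY_POINT_NAMES, positions, orientations)]
```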
- Step 353: based on the key points identified from each frame and the electrocardiogram data corresponding to each frame, predict the evaluation type of the assessment subject's key actions in the real scene to obtain second evaluation data.
- ECG data is used to describe the psychological state of the assessment subject when performing key actions in real-life scenarios.
- the visual evaluation network model has the ability to evaluate the key actions of the assessment object in the real scene.
- the visual evaluation network model reflects the mathematical mapping between different evaluation types and the key actions of different assessment subjects in the real scene. For example, there is a mapping between the key actions performed in the real scene by a senior clinician serving as the assessment subject and the evaluation type "excellent"; based on the mapping relationships reflected by the visual evaluation network model, once the key actions of the assessment subject in the real scene have been determined, the corresponding evaluation type can be predicted.
- evaluation type prediction can be achieved through a classifier (such as a softmax function) configured in a visual evaluation network model to calculate the probability that key actions of the assessment object in real-world scenarios belong to different evaluation types.
- the evaluation types include at least excellent, good, average, pass, and fail.
- assuming the classifier outputs probabilities P1 to P5 for the five evaluation types respectively: if P1 is the largest, the evaluation type of the assessment subject's key actions in the real scene is excellent; similarly, if P2 is the largest, the evaluation type is good, and so on; if P5 is the largest, the evaluation type is fail.
- a credibility and a set threshold are also provided to indicate the credibility of the predicted evaluation type. If the credibility is less than the set threshold, it means that the predicted evaluation type is not credible and the evaluation type needs to be re-predicted.
- the set threshold can be flexibly set according to the actual needs of the application scenario to balance the accuracy and recall rate of the visual evaluation network model. For example, for application scenarios with high accuracy requirements, a relatively high set threshold is set; for application scenarios with high recall rate requirements, a relatively low set threshold is set. This is not specifically limited here.
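The prediction described above, a softmax classifier over the five evaluation types plus a credibility check against a set threshold, can be sketched as follows. The logits and the threshold value of 0.5 are hypothetical, and using the maximum probability itself as the credibility is an assumption.

```python
import math

EVALUATION_TYPES = ["excellent", "good", "average", "pass", "fail"]

def softmax(logits):
    """Convert raw classifier outputs into probabilities P1..P5."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_evaluation_type(logits, threshold=0.5):
    """Pick the evaluation type with the largest probability; if that
    probability (used here as the credibility) is below the set threshold,
    the prediction is treated as not credible and should be re-predicted."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    credible = probs[best] >= threshold
    return EVALUATION_TYPES[best], probs[best], credible

# Hypothetical logits for one set of identified key actions.
etype, credibility, credible = predict_evaluation_type([3.0, 1.0, 0.5, 0.2, 0.1])
```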
- the electrocardiogram data corresponding to each frame is also introduced, so as to accurately reflect the mental state of the assessment subject when performing key actions in the real scene, and thereby the changes in the assessment subject's mentality when dealing with complex situations such as emergency handling. This assists in evaluating the operation effects corresponding to the key actions of the assessment subject in the real scene, and further helps improve the accuracy of the all-round, objective evaluation of the assessment subject.
- the posture data and ECG data are introduced into the prediction process of the evaluation type by using mixed reality technology, which can more accurately evaluate the standardization, logic, proficiency, sequence of the key actions of the assessment object in the real scene, as well as the rationality of dealing with complex situations such as emergency processing, etc., which is conducive to achieving accurate, comprehensive and objective evaluation of the assessment object.
- Fig. 9 is a schematic diagram of a specific implementation of a medical training assessment method in an application scenario.
- an evaluation framework for implementing the medical training assessment method is provided, and the evaluation framework includes: a virtual reality scene content evaluation module 801, a computer vision evaluation module 802, and an evaluation report module 803.
- the virtual reality scene content evaluation module 801 is responsible for constructing a virtual scene and detecting the trigger operation of the assessment object in the virtual scene, so as to obtain the first evaluation data in response to the trigger operation and input it into the comprehensive performance evaluation model.
- the computer vision evaluation module 802 receives the operation video acquired by the image acquisition device and the posture data and electrocardiogram data acquired by the mixed reality device. Then, the computer vision evaluation module 802 calls the visual evaluation network model pre-trained using the training samples with labels to evaluate the key actions of the subject of the assessment in the real scene, obtains the second evaluation data, and inputs the second evaluation data into the comprehensive performance evaluation model, so that the comprehensive performance evaluation model can conduct a comprehensive evaluation of the medical training assessment of the subject of the assessment based on the first evaluation data and the second evaluation data.
- the evaluation report module 803 is responsible for outputting the comprehensive evaluation results of the assessment object obtained by the comprehensive performance evaluation model, including but not limited to: the comprehensive score of the assessment object, the score for the assessment content, the location information of key points, the evaluation type of key actions in the real scene, the learning effect comparison analysis curve, etc.
- this new type of simulation teaching, achieved through virtual simulation and mixed reality medical training assessment, collects multi-dimensional data of the real human body and builds a digital human body or target tissue model through simulation modeling, thereby realizing low-cost, repeatable, and quantifiable digital teaching that allows assessment subjects to learn and grow in an environment of repeatable practice. This can effectively shorten the clinical practice learning curve of the assessment subjects while fully guaranteeing medical safety during the learning process, avoiding high-risk issues such as harm to patients. In addition, compared with the traditional medical education system based on animal specimens and teaching auxiliary equipment, the construction of virtual scenes can provide assessment subjects with case-rich, scientific, and standardized learning materials, thereby effectively alleviating the problem of insufficient teaching resources.
- the following is an embodiment of the device of the present application, which can be used to execute the medical training assessment and evaluation method involved in the present application.
- for details not disclosed in the device embodiment, please refer to the method embodiments of the medical training assessment and evaluation method involved in the present application.
- a medical training assessment and evaluation device 900 including but not limited to: a virtual scene display module 910, a virtual scene evaluation module 930, a computer vision evaluation module 950 and a comprehensive evaluation module 970.
- the virtual scene display module 910 is used to display a virtual scene constructed for medical training assessment.
- the virtual scene evaluation module 930 is used to obtain first evaluation data in response to a trigger operation of the assessment subject in the virtual scene.
- the trigger operation includes a simulated operation of the assessment subject on the surgical instrument in the virtual scene and a reply operation of the assessment subject in the virtual scene to the assessment content.
- the computer vision evaluation module 950 is used to input the corresponding operation video collected by the image acquisition device and the corresponding sensor data collected by the mixed reality device into the visual evaluation network model for the simulated operation process of the assessment subject on the surgical instrument in the virtual scene, and evaluate the key actions of the assessment subject in the real scene to obtain the second evaluation data.
- the comprehensive evaluation module 970 is used to conduct a comprehensive evaluation on the medical training assessment of the assessment object according to the first evaluation data and the second evaluation data, and output a comprehensive evaluation result of the assessment object.
- when the medical training assessment and evaluation device performs medical training assessment and evaluation, the division into the above functional modules is used only as an example for illustration.
- in practical applications, the above functions can be assigned to different functional modules as needed; that is, the internal structure of the medical training assessment and evaluation device is divided into different functional modules to complete all or part of the functions described above.
- FIG11 is a schematic diagram of the structure of an electronic device according to an exemplary embodiment. It should be noted that the electronic device is only an example adapted to the present application and cannot be considered as providing any limitation on the scope of use of the present application. The electronic device cannot be interpreted as needing to rely on or having to have one or more components in the exemplary electronic device 2000 shown in FIG11.
- the hardware structure of the electronic device 2000 may vary greatly due to different configurations or performances.
- the electronic device 2000 includes: a power supply 210, an interface 230, at least one memory 250, and at least one central processing unit (CPU) 270.
- the power supply 210 is used to provide operating voltage for each hardware device on the electronic device 2000 .
- the interface 230 includes at least one wired or wireless network interface 231 for interacting with external devices, such as the interaction between the evaluation device 110 and the image acquisition device 130 in the implementation environment shown in FIG. 1 .
- the interface 230 may further include at least one serial-to-parallel conversion interface 233, at least one input-output interface 235, and at least one USB interface 237, as shown in FIG. 11, which is not specifically limited here.
- the memory 250 is a carrier for storing resources, which may be a read-only memory, a random access memory, a disk or an optical disk, etc.
- the resources stored thereon include an operating system 251, an application 253 and data 255, etc.
- the storage method may be temporary storage or permanent storage.
- the operating system 251 is used to manage and control various hardware devices and application programs 253 on the electronic device 2000 to enable the central processor 270 to calculate and process the massive data 255 in the memory 250. It can be Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM, etc.
- the application 253 is a computer program that performs at least one specific task based on the operating system 251, and may include at least one module (not shown in FIG. 11 ), each of which may include a computer program for the electronic device 2000.
- a medical training assessment and evaluation device may be regarded as an application 253 deployed on the electronic device 2000.
- the data 255 may be photos, pictures, etc. stored in a disk, or may be sensor data, etc. stored in the memory 250 .
- the central processor 270 may include one or more processors, and is configured to communicate with the memory 250 through at least one communication bus to read the computer program stored in the memory 250, thereby realizing the operation and processing of the mass data 255 in the memory 250.
- the medical training assessment and evaluation method is completed in the form of the central processor 270 reading a series of computer programs stored in the memory 250.
- present application can also be implemented through hardware circuits or hardware circuits combined with software. Therefore, the implementation of the present application is not limited to any specific hardware circuits, software, or a combination of the two.
- An electronic device 4000 is provided in an embodiment of the present application.
- the electronic device 4000 may include: a desktop computer, a laptop computer, and other electronic devices.
- the electronic device 4000 includes at least one processor 4001 , at least one communication bus 4002 , and at least one memory 4003 .
- the processor 4001 and the memory 4003 are connected, such as through a communication bus 4002.
- the electronic device 4000 may also include a transceiver 4004, which may be used for data interaction between the electronic device and other electronic devices, such as data transmission and/or data reception.
- the transceiver 4004 is not limited to one, and the structure of the electronic device 4000 does not constitute a limitation on the embodiments of the present application.
- Processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic devices, transistor logic devices, hardware components or any combination thereof. It may implement or execute various exemplary logic blocks, modules and circuits described in conjunction with the disclosure of this application. Processor 4001 may also be a combination that implements computing functions, such as a combination of one or more microprocessors, a combination of a DSP and a microprocessor, etc.
- the communication bus 4002 may include a path for transmitting information between the above components.
- the communication bus 4002 may be a PCI (Peripheral Component Interconnect) bus or an EISA (Extended Industry Standard Architecture) bus, etc.
- the communication bus 4002 may be divided into an address bus, a data bus, a control bus, etc.
- for ease of representation, FIG. 12 uses only one thick line to represent the bus, but this does not mean that there is only one bus or only one type of bus.
- the memory 4003 may be a ROM (Read Only Memory) or other types of static storage devices that can store static information and instructions, a RAM (Random Access Memory) or other types of dynamic storage devices that can store information and instructions, or an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical disk storage, optical disk storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store the desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
- the memory 4003 stores a computer program
- the processor 4001 reads the computer program stored in the memory 4003 through the communication bus 4002 .
- a storage medium is provided in an embodiment of the present application, on which a computer program is stored.
- the computer program is executed by a processor, the medical training assessment and evaluation method in the above embodiments is implemented.
- a computer program product includes a computer program, the computer program is stored in a storage medium.
- a processor of a computer device reads the computer program from the storage medium, and the processor executes the computer program, so that the computer device executes the medical training assessment and evaluation method in the above embodiments.
- the adoption of a new multimodal intelligent assessment solution that combines virtual simulation and mixed reality can provide a more accurate, comprehensive and objective evaluation of the assessment subjects, greatly reducing the requirements for examiners and helping to improve the efficiency and accuracy of medical training assessment.
- the requirements for the assessment site during the medical training assessment process are effectively reduced.
Abstract
The present application provides a medical training assessment and evaluation method, apparatus, electronic device, and storage medium, relating to the field of computer technology. The method includes: displaying a virtual scene constructed for a medical training assessment; obtaining first evaluation data in response to a trigger operation of the assessment subject in the virtual scene; for the assessment subject's simulated operation process on surgical instruments in the virtual scene, inputting the corresponding operation video collected by an image acquisition device and the corresponding sensor data collected by a mixed reality device into a visual evaluation network model, and evaluating the key actions of the assessment subject in the real scene to obtain second evaluation data; and performing a comprehensive evaluation of the assessment subject's current medical training assessment according to the first evaluation data and the second evaluation data, and outputting a comprehensive evaluation result of the assessment subject. The present application solves the problem in the related art that the medical training assessment and evaluation process cannot achieve an all-round objective evaluation of the assessment subject.
Description
The present application relates to the field of computer technology, and in particular to a medical training assessment and evaluation method, apparatus, electronic device, and storage medium.
Generally, a standard medical surgical procedure involves whether the surgical instruments are selected correctly, whether the operation positions are accurate, whether the sequence of operation steps is reasonable, and so on. It can be understood that different operations have different execution times, priorities, and execution orders; a wrong operation performed at different times will produce different erroneous results, and an incorrect execution order may also produce erroneous results.
On this basis, medical training assessment is particularly important. At present, medical training assessment and evaluation mainly relies on manual assessment, which is not only inefficient but also affects the accuracy of the evaluation due to its inherent subjectivity; in addition, the content of medical training assessment is too limited, and there is still a lack of intelligent assessment schemes that can achieve an all-round objective evaluation of the assessment subject.
It can be seen from the above that how to achieve an all-round objective evaluation of the assessment subject during the medical training assessment and evaluation process remains to be solved.
The present application provides a medical training assessment and evaluation method, apparatus, electronic device, and storage medium, which can solve the problem in the related art that the medical training assessment and evaluation process cannot achieve an all-round objective evaluation of the assessment subject.
The technical solution is as follows:
According to one aspect of the present application, a medical training assessment and evaluation method includes: displaying a virtual scene constructed for a medical training assessment; obtaining first evaluation data in response to a trigger operation of an assessment subject in the virtual scene, where the trigger operation includes a simulated operation performed by the assessment subject on surgical instruments in the virtual scene and a reply operation performed by the assessment subject in the virtual scene with respect to the assessment content; for the assessment subject's simulated operation process on the surgical instruments in the virtual scene, inputting the corresponding operation video collected by an image acquisition device and the corresponding sensor data collected by a mixed reality device into a visual evaluation network model, and evaluating the key actions of the assessment subject in the real scene to obtain second evaluation data; and performing a comprehensive evaluation of the assessment subject's current medical training assessment according to the first evaluation data and the second evaluation data, and outputting a comprehensive evaluation result of the assessment subject.
According to one aspect of the present application, a medical training assessment and evaluation apparatus includes: a virtual scene display module, configured to display a virtual scene constructed for a medical training assessment; a virtual scene evaluation module, configured to obtain first evaluation data in response to a trigger operation of an assessment subject in the virtual scene, where the trigger operation includes a simulated operation performed by the assessment subject on surgical instruments in the virtual scene and a reply operation performed by the assessment subject in the virtual scene with respect to the assessment content; a computer vision evaluation module, configured to, for the assessment subject's simulated operation process on the surgical instruments in the virtual scene, input the corresponding operation video collected by an image acquisition device and the corresponding sensor data collected by a mixed reality device into a visual evaluation network model, and evaluate the key actions of the assessment subject in the real scene to obtain second evaluation data; and a comprehensive evaluation module, configured to perform a comprehensive evaluation of the assessment subject's current medical training assessment according to the first evaluation data and the second evaluation data, and output a comprehensive evaluation result of the assessment subject.
According to one aspect of the present application, an electronic device includes: at least one processor, at least one memory, and at least one communication bus, where a computer program is stored in the memory, and the processor reads the computer program in the memory through the communication bus; when the computer program is executed by the processor, the medical training assessment and evaluation method described above is implemented.
According to one aspect of the present application, a storage medium has a computer program stored thereon; when the computer program is executed by a processor, the medical training assessment and evaluation method described above is implemented.
According to one aspect of the present application, a computer program product includes a computer program stored in a storage medium; a processor of a computer device reads the computer program from the storage medium and executes it, so that the computer device implements the medical training assessment and evaluation method described above.
In the above technical solution, in the virtual scene constructed for the medical training assessment, first evaluation data is obtained once the assessment subject performs a trigger operation; at the same time, second evaluation data is obtained based on the corresponding video collected by the image acquisition device and the corresponding sensor data collected by the mixed reality device. A comprehensive evaluation of the assessment subject's current medical training assessment is then performed according to the first evaluation data and the second evaluation data, and a comprehensive evaluation result of the assessment subject is output. This evaluation not only starts from virtual simulation and evaluates the simulated operation performed by the assessment subject on the surgical instruments in the virtual scene as well as the reply operation performed by the assessment subject with respect to the assessment content, but also uses mixed reality technology to evaluate the operation effects corresponding to the key actions of the assessment subject in the real scene. A new multimodal intelligent assessment scheme is thereby realized, which avoids the low efficiency and low accuracy caused by manual assessment, and can effectively solve the problem in the related art that the medical training assessment and evaluation process cannot achieve an all-round objective evaluation of the assessment subject.
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required for describing the embodiments are briefly introduced below.
FIG. 1 is a schematic diagram of an implementation environment according to the present application;
FIG. 2 is a flowchart of a medical training assessment and evaluation method according to an exemplary embodiment;
FIG. 3 is a flowchart of step 330 in the embodiment corresponding to FIG. 2, in one embodiment;
FIG. 4 is a flowchart of step 330 in the embodiment corresponding to FIG. 2, in another embodiment;
FIG. 5 is a flowchart of step 330 in the embodiment corresponding to FIG. 2, in yet another embodiment;
FIG. 6 is a flowchart of the training process of the visual evaluation network model according to an exemplary embodiment;
FIG. 7 is a schematic diagram of a learning effect comparison analysis curve in the comprehensive evaluation result according to an exemplary embodiment;
FIG. 8 is a flowchart of step 350 in the embodiment corresponding to FIG. 2, in one embodiment;
FIG. 9 is a schematic diagram of a specific implementation of a medical training assessment and evaluation method in an application scenario;
FIG. 10 is a structural block diagram of a medical training assessment and evaluation apparatus according to an exemplary embodiment;
FIG. 11 is a hardware structure diagram of an electronic device according to an exemplary embodiment;
FIG. 12 is a structural block diagram of an electronic device according to an exemplary embodiment.
The embodiments of the present application are described in detail below, and examples of the embodiments are shown in the drawings, where the same or similar reference numerals throughout denote the same or similar elements, or elements with the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are only used to explain the present application; they cannot be construed as limiting the present application.
Those skilled in the art will understand that, unless otherwise stated, the singular forms "a", "an", "said", and "the" used herein may also include the plural forms. It should be further understood that the word "comprising" used in the specification of the present application refers to the presence of the stated features, integers, steps, operations, elements, and/or components, but does not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It should be understood that when an element is said to be "connected" or "coupled" to another element, it may be directly connected or coupled to the other element, or intermediate elements may also be present. In addition, "connected" or "coupled" as used herein may include a wireless connection or wireless coupling. The term "and/or" as used herein includes all or any unit and all combinations of one or more of the associated listed items.
As mentioned above, existing medical training assessment not only has overly limited assessment content and is still unable to evaluate the operation effects of medical surgical operations (for example, the standardization, logic, proficiency, and sequence of medical surgical operations, as well as the rationality of handling complex situations such as emergency processing), but also mainly relies on manual implementation, which is inefficient and overly subjective.
With the development of virtual reality and mixed reality technologies in recent years, applying virtual reality and mixed reality technologies to medical training assessment has gradually become a new trend. Compared with traditional assessment modes, virtual reality and mixed reality technologies have significant advantages: they enable low-cost, repeatable, and objectively quantifiable digital assessment, allowing assessment subjects to learn and grow in a repeatable assessment environment, thereby effectively alleviating the problem of assessment relying too heavily on manual implementation.
However, medical training assessment that introduces virtual reality and mixed reality technologies still inevitably relies on manual assessment because its evaluation accuracy is not high enough. For example, when some operations in a medical surgical procedure use the same surgical instruments and the actions differ little, misjudgment is likely to occur, affecting the evaluation of the operation execution order. Moreover, overly narrow assessment content lacks an accurate evaluation of how the assessment subject handles complex situations such as emergency processing during the medical training assessment, so an all-round objective evaluation of the assessment subject still cannot be achieved.
It can be seen from the above that the related art still has the limitation that an all-round objective evaluation of the assessment subject cannot be achieved during the medical training assessment and evaluation process.
To this end, the medical training assessment and evaluation method provided by the present application can effectively improve the accuracy of medical training assessment and evaluation. Accordingly, the method is applicable to a medical training assessment and evaluation apparatus, which can be deployed on an electronic device, for example, a computer device configured with a von Neumann architecture, including but not limited to a desktop computer, a laptop computer, a server, and the like.
In order to make the purpose, technical solutions, and advantages of the present application clearer, the embodiments of the present application are further described in detail below with reference to the drawings.
FIG. 1 is a schematic diagram of an implementation environment involved in a medical training assessment and evaluation method. The implementation environment includes an evaluation device 110, an image acquisition device 130, and a mixed reality device 150.
Specifically, the evaluation device 110 can run a client with a medical training assessment function, and may be an electronic device such as a desktop computer, a laptop computer, or a server, which is not limited here.
The client, which is used to conduct medical training assessment on the assessment subject, may be in the form of an application or a web page; accordingly, the user interface through which the client conducts the medical training assessment may be in the form of a program window or a web page, which is also not limited here.
The image acquisition device 130 may be an electronic device with an image acquisition function, for example, a camera, a video camera, or a smartphone equipped with a camera, which is not limited here. With the image acquisition device 130 deployed in the space where the medical training assessment is conducted, images of the assessment subject can be acquired accordingly; for example, the image may be the corresponding operation video acquired by the image acquisition device 130 while the assessment subject performs the simulated operation on the surgical instruments in the virtual scene.
Similarly, with the mixed reality device 150 deployed in the space where the medical training assessment is conducted, or worn by the assessment subject, sensor data about the assessment subject can be acquired accordingly; for example, the sensor data may be the corresponding sensor data acquired by the mixed reality device 150 while the assessment subject performs the simulated operation on the surgical instruments in the virtual scene. The mixed reality device 150 may be a posture sensor, smart glasses, a smart helmet, and so on, which is not limited here.
A communication connection is established in advance between the evaluation device 110 and each of the image acquisition device 130 and the mixed reality device 150, in a wired or wireless manner, so as to realize data transmission between them through the communication connection. For example, the transmitted data includes but is not limited to operation videos, sensor data, and so on.
Through the interaction between the evaluation device 110 and the image acquisition device 130 and the mixed reality device 150, during the assessment subject's simulated operation on the surgical instruments in the virtual scene, the evaluation device 110 can receive the corresponding operation video and the corresponding sensor data and input them into the visual evaluation network model, thereby evaluating the key actions of the assessment subject in the real scene to obtain second evaluation data.
At the same time, first evaluation data is obtained in response to the trigger operation of the assessment subject in the virtual scene, and a comprehensive evaluation of the assessment subject's current medical training assessment is performed according to the first evaluation data and the second evaluation data; finally, the comprehensive evaluation result of the assessment subject is output, realizing an all-round objective evaluation of the assessment subject.
Of course, according to actual operational needs, the image acquisition device and the mixed reality device may also be integrated into the same electronic device, for example, a smart helmet with a camera; the image acquisition device, the mixed reality device, and the evaluation device may also be integrated into the same electronic device, so that the medical training assessment and evaluation method is performed independently by that electronic device. This does not constitute a specific limitation here.
Referring to FIG. 2, an embodiment of the present application provides a medical training assessment and evaluation method, which is applicable to an electronic device; the electronic device may be the evaluation device 110 in the implementation environment shown in FIG. 1.
In the following method embodiments, for ease of description, each step of the method is described as being executed by the electronic device, but this does not constitute a specific limitation.
As shown in FIG. 2, the method may include the following steps:
Step 310: display a virtual scene constructed for the medical training assessment.
The virtual scene is a digital scene constructed for the medical training assessment using computer technology. It can be presented by simulating the visuals of the environment required for the medical training assessment (for example, a medical surgery laboratory), so that the assessment subject is assessed through virtual simulation.
In the implementation environment shown in FIG. 1, the evaluation device is an electronic device that can run a client with a medical training assessment function. When an assessment subject wishes to participate in a medical training assessment, the client can be started to enter the corresponding screen, which is regarded as the display of the virtual scene. For example, the displayed virtual scene may be a simulated medical surgery laboratory.
Correspondingly, the real scene refers to the space where the assessment subject is located when participating in the medical training assessment. For example, the real scene may be a computer room, where a deployed desktop computer can, through the display of the virtual scene, give the assessment subject a visual, auditory, and tactile sense of immersion in the simulated medical surgery laboratory.
Step 330: obtain first evaluation data in response to a trigger operation of the assessment subject in the virtual scene.
The trigger operation includes but is not limited to: a simulated operation performed by the assessment subject on the surgical instruments in the virtual scene, and a reply operation performed by the assessment subject in the virtual scene with respect to the assessment content.
Correspondingly, for the simulated operation or reply operation in the trigger operation, the first evaluation data includes at least one of the following: an execution score for the assessment subject's operation execution order during the corresponding simulated operation process, a duration score of the assessment subject for the corresponding simulated operation process, and the assessment subject's score for the assessment content.
It should be noted that the specific behavior of the trigger operation may differ depending on the input components configured on the electronic device (for example, a touch layer covering the display screen, a mouse, a keyboard, etc.). For example, if the electronic device is a laptop configured with a touch layer, the trigger operation may be a gesture operation such as tapping or sliding; for a desktop computer configured with a mouse, the trigger operation may be a mechanical operation such as dragging, clicking, or double-clicking, which is not limited in this embodiment.
在一种可能的实现方式,如图3所示,若触发操作为模拟操作,则步骤330可以包括以下步骤:步骤331,响应于考核对象在虚拟场景中的模拟操作,得到考核对象的操作执行数据;其中,操作执行数据用于指示考核对象在相应模拟操作过程中执行操作的顺序;步骤332,将操作执行数据所指示的顺序与模拟操作过程的标准执行顺序进行比对,得到考核对象在相应模拟操作过程中操作执行顺序的执行评分,并添加至第一评价数据。
其中,模拟操作过程的标准执行顺序预先存储于电子设备。例如,利用脚本文件存储模拟操作过程中的各个模拟操作,每一个模拟操作具有唯一的前置操作和后续操作,从而按照脚本文件中各个模拟操作的前后顺序执行遍历,便能够得到模拟操作过程的标准执行顺序。
举例来说,假设模拟操作过程的标准执行顺序为:模拟操作a->模拟操作b->模拟操作c,而操作执行数据所指示的顺序为:模拟操作a->模拟操作b->模拟操作c,那么,通过比对,便可确定考核对象在相应模拟操作过程中操作执行顺序的执行评分为100分。
此种方式下,实现了对考核对象在相应模拟操作过程中执行操作的顺序的准确且客观量化的评价。
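The comparison in steps 331-332 can be sketched as follows. The function name and the partial-credit rule (the fraction of positions that match the standard order) are illustrative assumptions; the embodiment only specifies that a full match scores 100.

```python
def execution_order_score(executed, standard):
    """Score the examinee's operation order against the standard execution
    order. A full match scores 100, as in the embodiment's example; giving
    partial credit per matching position is an assumption, since the text
    does not prescribe a rule for partial matches."""
    if not standard:
        return 100.0
    # Count positions where the executed operation equals the standard one.
    matches = sum(1 for e, s in zip(executed, standard) if e == s)
    return round(100.0 * matches / len(standard), 1)
```

For the example above, `execution_order_score(["a", "b", "c"], ["a", "b", "c"])` yields 100.0.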
In one possible implementation, as shown in FIG. 4, if the trigger operation is a simulated operation, step 330 may include the following steps: Step 334, in response to the examinee's simulated operation in the virtual scene, determine the duration of the examinee's simulated operation process; Step 335, based on the difference between that duration and a set threshold, obtain a duration score for the examinee's corresponding simulated operation process, and add it to the first evaluation data.
The duration of the simulated operation process can be measured with a timer or similar means. Specifically, the timer is started when the examinee begins the simulated operation process and stopped when the examinee finishes, at which point the timer's value serves as the duration.
Further suppose the set threshold is 10 minutes. A scoring rule can then be defined on the difference between the duration and 10 minutes: if the duration is within 10 minutes (i.e., the difference is non-positive), the duration score is 100; if the difference is in (0, 2) minutes, the score is 90; in [2, 5], 80; in (5, 8), 70; in [8, 10], 60; otherwise, the duration score is failing (below 60). Thus, if examinee A's duration is 11 minutes, A's duration score is 90; if examinee B's duration is 16 minutes, B's duration score is 70.
In this way, the duration of the examinee's simulated operation process is evaluated accurately and in an objectively quantified manner.
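The bracket rule above maps directly to a small scoring function; the exact value returned for a failing duration is not specified in the embodiment, so 0 below is an assumption.

```python
def duration_score(duration_min, threshold_min=10.0):
    """Map the simulated-operation duration (in minutes) to a duration
    score, following the bracket rule in the embodiment with a 10-minute
    threshold. The failing value of 0 is an assumption; the text only
    requires it to be below 60."""
    diff = duration_min - threshold_min
    if diff <= 0:          # within the threshold
        return 100
    if diff < 2:           # difference in (0, 2)
        return 90
    if diff <= 5:          # difference in [2, 5]
        return 80
    if diff < 8:           # difference in (5, 8)
        return 70
    if diff <= 10:         # difference in [8, 10]
        return 60
    return 0               # failing (below 60)
```

This reproduces the worked examples: an 11-minute duration scores 90, and a 16-minute duration scores 70.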
In one possible implementation, as shown in FIG. 5, if the trigger operation is an answer operation, step 330 may include the following steps: Step 337, in response to the examinee's answer operation in the virtual scene, determine the examinee's answer data for the assessment content; Step 338, compare the answer data with the standard answer for the assessment content to obtain the examinee's score on the assessment content, and add it to the first evaluation data.
The assessment content can be set flexibly according to the actual needs of the application scenario; for example, it may consist of multiple-choice or true/false questions, without limitation here. Correspondingly, the standard answer varies with the assessment content: when the content is a question, the standard answer is that question's correct answer. The answer data then indicates the examinee's answers to the questions; comparing them with the correct answers pre-stored on the electronic device determines the examinee's score on the assessment content.
The above process evaluates the examinee on the assessment content accurately and in an objectively quantified manner.
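Steps 337-338 can be sketched as a comparison against a pre-stored answer key. Equal weighting per question is an illustrative assumption; the embodiment only requires comparing the answer data with the standard answers.

```python
def content_score(answers, answer_key, full_marks=100.0):
    """Grade the examinee's answer data against the answer key pre-stored
    on the electronic device. Each question carries equal weight (an
    assumption, as the text does not prescribe per-question weights)."""
    if not answer_key:
        return full_marks
    # Count questions where the examinee's answer matches the standard answer.
    correct = sum(1 for q, a in answer_key.items() if answers.get(q) == a)
    return round(full_marks * correct / len(answer_key), 1)
```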
Step 350: for the examinee's simulated operation process on surgical instruments in the virtual scene, feed the corresponding operation video captured by the image capture device and the corresponding sensor data collected by the mixed reality device into the visual evaluation network model, evaluate the examinee's key actions in the real-world scene, and obtain second evaluation data.
First, the operation video comprises multiple frames, each describing one key action of the examinee in the real-world scene while the examinee performs simulated operations on surgical instruments in the virtual scene. Each frame corresponds to one key action; different frames may correspond to different key actions or to the same one — that is, one sustained key action may span multiple frames. The operation video is captured by the image capture device and sent to the electronic device. The image capture device may be an image-capture-capable electronic device deployed in the real-world scene, or a device worn by the examinee that is equipped with an image capture component.
Thus, the key actions of the examinee in the real-world scene described by the operation video make it possible to evaluate the operational effect corresponding to those key actions.
Notably, because the operational effect of key actions in the real-world scene can be evaluated, even key actions that differ little because the same surgical instrument is being simulated can be evaluated accurately. This reduces the impact that misjudging such a key action in the virtual scene would have on the evaluation of the operation execution order, which helps improve the accuracy of the all-round, objective evaluation of the examinee.
Second, the sensor data, collected by the mixed reality device and sent to the electronic device, may be pose data describing the examinee's position and posture while performing key actions in the real-world scene, or electrocardiogram (ECG) data describing the examinee's mental state while performing those actions. Each frame corresponds to one piece of sensor data; in other words, as the examinee's key actions in the real-world scene differ, the examinee's mental state while performing them may differ as well. The mixed reality device may be a pose sensor deployed in the real-world scene, or a mixed-reality-capable device worn by the examinee, e.g., a smart helmet.
The examinee's mental state during key actions, as described by the sensor data, can accurately reflect changes in the examinee's state of mind when handling complex situations such as emergencies, thereby assisting the evaluation of the operational effect of the key actions and further improving the accuracy of the all-round, objective evaluation.
After the operation video and sensor data are obtained, the examinee's key actions in the real-world scene can be evaluated; in this embodiment, the evaluation is performed by the visual evaluation network model. The second evaluation data indicates the operational effect corresponding to the examinee's key actions, e.g., the standardization, logic, proficiency, and order of the key actions, and the reasonableness of the examinee's handling of complex situations such as emergencies.
In one possible implementation, the visual evaluation network model is a trained machine learning model capable of evaluating the examinee's key actions in the real-world scene. The machine learning model may be a convolutional neural network model, among others, without limitation here.
Specifically, as shown in FIG. 6, the training process of the visual evaluation network model may include the following steps:
Step 410: construct a training set based on training subjects' simulated operation processes on surgical instruments in the virtual scene.
The training set comprises labeled training samples; a label indicates the evaluation type of a training subject's key actions in the real-world scene.
Note that a training subject is, in essence, an examinee who takes the medical training assessment for the purpose of training the visual evaluation network model. For example, examinees of varying skill — senior clinicians, clinicians with three years' experience, ordinary teaching physicians, students proficient in the operations, and beginner students with no operating experience — can be selected as training subjects, thereby enriching the evaluation types of the training subjects' key actions in the real-world scene.
Accordingly, the evaluation types may include excellent, good, average, pass, and fail. In other embodiments, evaluation types may instead be expressed as scores (0-100); no specific limitation is imposed here. Different evaluation types reflect the fact that the operational effects of different examinees' key actions in the real-world scene will differ: 'excellent' reflects that the key actions of a senior clinician acting as examinee have the best operational effect, while 'fail' reflects that the key actions of a beginner student with no operating experience have the worst.
A training sample thus essentially comprises the operation video and sensor data collected, respectively, by the image capture device and the mixed reality device while the training subject performed simulated operations on surgical instruments in the virtual scene.
Step 430: feed the training sample into the machine learning model to predict the evaluation type of the training subject's key actions in the real-world scene, obtaining prediction data for the training sample.
The prediction data indicates the predicted evaluation type of the training subject's key actions in the real-world scene.
Step 450: compute a loss value from the difference between the evaluation type indicated by the label and the predicted evaluation type.
The loss value can be computed with a loss function or a similar algorithm. In one possible implementation, the loss function includes, but is not limited to: cosine loss, cross-entropy, intra-class distribution, inter-class distribution, and activation classification functions.
If the loss value does not satisfy the model convergence condition, step 470 is executed.
Conversely, if the loss value satisfies the model convergence condition, training is deemed complete and step 490 is executed.
The model convergence condition can be adjusted flexibly according to the actual needs of the application scenario. For example, it may be that the loss value reaches a minimum, which improves model accuracy, or that the number of iterations exceeds a set threshold, which improves training efficiency; no limitation is imposed here.
Step 470: update the parameters of the machine learning model and continue training.
After the parameters of the machine learning model are updated, another training sample is fetched from the training set and fed into the model to again predict the evaluation type of the training subject's key actions in the real-world scene, producing prediction data for that sample — i.e., return to step 430 and then execute step 450.
This loop continues until the loss value satisfies the model convergence condition, completing the training process of the visual evaluation network model.
Step 490: obtain the visual evaluation network model.
The above training process yields a visual evaluation network model capable of evaluating the examinee's key actions in the real-world scene. By invoking it, the examinee's key actions in the real-world scene can be evaluated to obtain the second evaluation data.
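Steps 410-490 can be sketched as the following minimal training loop. A linear softmax classifier stands in for the visual evaluation network model (which the embodiment says may be a CNN), cross-entropy is one of the loss functions the embodiment lists, and the feature matrix `X` stands in for features that would in practice be derived from the operation video and sensor data; all of these stand-ins are assumptions.

```python
import numpy as np

def train_visual_evaluator(X, y, n_classes, lr=0.5, max_iter=2000, tol=1e-3):
    """Sketch of the FIG. 6 loop: predict an evaluation type (step 430),
    compute a cross-entropy loss against the label (step 450), and either
    update parameters (step 470) or stop on a convergence condition
    (step 490). Both convergence conditions named in the text are shown:
    a small-loss condition (tol) and an iteration cap (max_iter)."""
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    loss = float("inf")
    for _ in range(max_iter):                     # iteration-count condition
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)         # step 430: type probabilities
        loss = -np.mean(np.sum(onehot * np.log(p + 1e-12), axis=1))  # step 450
        if loss < tol:                            # loss-based condition
            break
        W -= lr * X.T @ (p - onehot) / len(X)     # step 470: parameter update
    return W, loss
```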
Step 370: based on the first evaluation data and the second evaluation data, perform a comprehensive evaluation of the examinee's current medical training assessment and output the examinee's comprehensive evaluation result.
As described above, the first evaluation data may contain at least one of the following data sub-items: the execution score for the examinee's operation order in the corresponding simulated operation process, the duration score for the examinee's simulated operation process, and the examinee's score on the assessment content; the second evaluation data describes the operational effect corresponding to the examinee's key actions in the real-world scene.
With the first and second evaluation data in hand, the examinee's current medical training assessment can be comprehensively evaluated. In one possible implementation, the comprehensive evaluation computes the examinee's composite score from each data sub-item of the first and second evaluation data and its corresponding weight; specifically, composite score = Σ(sub-item × corresponding weight). For example, composite score = execution score × execution weight + duration score × duration weight + content score × content weight + evaluation type (expressed as a score) × type weight. Note that the weights of the data sub-items sum to 1.
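The weighted sum in step 370 can be sketched directly; the particular weight values in the usage below are illustrative assumptions, since the embodiment only requires that they sum to 1.

```python
def composite_score(sub_items, weights):
    """Composite score of step 370: the sum of each data sub-item times its
    corresponding weight, where the weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(sub_items[name] * w for name, w in weights.items())
```

For instance, with sub-items {execution: 100, duration: 90, content: 80, type: 90} and weights {0.3, 0.2, 0.2, 0.3}, the composite score is 100×0.3 + 90×0.2 + 80×0.2 + 90×0.3 = 91.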
After the comprehensive evaluation of the examinee's current medical training assessment is completed on the basis of the first and second evaluation data, the corresponding comprehensive evaluation result can be output to the examinee.
In one possible implementation, the comprehensive evaluation result includes at least one of: the examinee's composite score, the execution score for the examinee's operation order in the corresponding simulated operation process, the duration score for the examinee's simulated operation process, the examinee's score on the assessment content, and the evaluation type of the examinee's key actions in the real-world scene.
Optionally, the comprehensive evaluation result further includes position information of key points, where a key point indicates a key action of the examinee in the real-world scene, and the key point's position information indicates its position in the real-world scene. In this way, the examinee can promptly learn whether their key actions while performing simulated operations on surgical instruments in the virtual scene were standard, which helps the examinee learn and grow in a repeatable assessment environment.
Optionally, the comprehensive evaluation result further includes learning-effect comparison curves, letting the examinee promptly see whether they have improved across different rounds of medical training assessment, further aiding learning and growth in a repeatable assessment environment. For example, FIG. 7 shows learning-effect comparison curves in a comprehensive evaluation result: curve 701 for the examinee's composite score, and curve 702 for the duration of the examinee's corresponding simulated operation process. The horizontal axis of both curves is the time of the different assessment rounds; the vertical axis of curve 701 is the score, and that of curve 702 is the duration of the simulated operation process. As can be seen, across the rounds of medical training assessment, this examinee's score rises steadily while the duration of the simulated operation process grows ever shorter.
It should be noted that, depending on the output components configured on the electronic device (e.g., display screen, audio components), the way the comprehensive evaluation result is output to the examinee may differ. For example, a desktop computer's audio components may announce the examinee's composite score, or a laptop's display screen may show the examinee's comprehensive evaluation result. This embodiment imposes no limitation.
The above process realizes a new multimodal intelligent assessment scheme. Starting from virtual simulation, it evaluates the simulated operations the examinee performs on surgical instruments in the virtual scene and the answer operations the examinee completes on the assessment content; it further uses mixed reality technology to evaluate the operational effect of the examinee's key actions in the real-world scene. It does not depend on manual assessment, which improves the efficiency and accuracy of evaluation, and it fully accounts for evaluating the operational effect of medical surgical operations, ultimately achieving an all-round, objective evaluation of the examinee.
Referring to FIG. 8, in an exemplary embodiment, step 350 may include the following steps:
Step 351: invoke the visual evaluation network model and, from the frames of the operation video and the pose data corresponding to each frame, identify the key points of the examinee while the examinee performs simulated operations on surgical instruments in the virtual scene.
The pose data describes the examinee's position and posture while performing key actions in the real-world scene; a key point indicates a key action of the examinee in the real-world scene.
That is, the examinee's key actions in the real-world scene are determined through key point identification. Understandably, if the key points identified from frames are not identical, the corresponding key actions in the real-world scene differ. In one possible implementation, there are at least 14 key points for an examinee: head, neck, left shoulder, left elbow, left hand, left hip, left knee, left ankle, right shoulder, right elbow, right hand, right hip, right knee, and right ankle. Note that, because the pose data corresponding to each frame is introduced, the key points identified from a frame reflect the examinee's position and posture while performing the key action in the real-world scene, rather than the examinee's position within the frame.
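The 14 keypoints above can be fixed as an index map, and the observation that differing identified keypoints imply differing key actions suggests a simple per-frame comparison. The function below and its movement tolerance are illustrative assumptions, not the model's actual decoding logic.

```python
# The 14 keypoints named in the embodiment.
KEYPOINTS = [
    "head", "neck",
    "left_shoulder", "left_elbow", "left_hand",
    "left_hip", "left_knee", "left_ankle",
    "right_shoulder", "right_elbow", "right_hand",
    "right_hip", "right_knee", "right_ankle",
]

def keypoints_changed(prev_frame, cur_frame, tol=0.05):
    """Return the keypoints that moved between two frames, each given as a
    dict mapping keypoint name to a real-world (x, y, z) position derived
    from the pose data. The tolerance value is an assumption."""
    moved = []
    for name in KEYPOINTS:
        p, c = prev_frame[name], cur_frame[name]
        if max(abs(a - b) for a, b in zip(p, c)) > tol:
            moved.append(name)
    return moved
```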
Step 353: based on the key points identified from the frames and the ECG data corresponding to each frame, predict the evaluation type of the examinee's key actions in the real-world scene, obtaining the second evaluation data.
The ECG data describes the examinee's mental state while performing key actions in the real-world scene.
It should be understood that the visual evaluation network model's ability to evaluate the examinee's key actions in the real-world scene means, in essence, that the model embodies a mathematical mapping between different evaluation types and different examinees' key actions in the real-world scene — e.g., the mapping between the key actions of a senior clinician acting as examinee and the evaluation type 'excellent'. Based on the mapping embodied by the visual evaluation network model, once the examinee's key actions in the real-world scene are determined, the corresponding evaluation type can be predicted.
In one possible implementation, the evaluation-type prediction is performed by a classifier configured in the visual evaluation network model (e.g., a softmax function), which computes the probabilities that the examinee's key actions in the real-world scene belong to the different evaluation types.
For example, suppose the evaluation types include at least excellent, good, average, pass, and fail.
Then the probabilities that the examinee's key actions in the real-world scene belong to the types excellent, good, average, pass, and fail are computed as P1, P2, P3, P4, and P5, respectively. If P1 is the largest, the evaluation type of the examinee's key actions is excellent; likewise, if P2 is the largest, it is good; and so on — if P5 is the largest, the evaluation type is fail.
Of course, other embodiments additionally provide a confidence value indicating how trustworthy the predicted evaluation type is, together with a set threshold: if the confidence is below the threshold, the prediction is considered untrustworthy and the evaluation-type prediction must be redone. The set threshold can be chosen flexibly per the actual needs of the application scenario, so as to balance the precision and recall of the visual evaluation network model: a relatively high threshold for precision-critical scenarios, and a relatively low one for recall-critical scenarios; no specific limitation here.
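The softmax classifier head, the argmax over P1-P5, and the confidence-threshold check can be sketched together as follows; the threshold value 0.6 is an illustrative assumption.

```python
import math

EVAL_TYPES = ["excellent", "good", "average", "pass", "fail"]

def predict_eval_type(logits, threshold=0.6):
    """Compute softmax probabilities P1..P5 over the evaluation types, pick
    the most probable type, and reject the prediction (return None) when
    its confidence falls below the set threshold."""
    exps = [math.exp(z - max(logits)) for z in logits]  # stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return None, probs   # untrustworthy: re-run the prediction
    return EVAL_TYPES[best], probs
```

For instance, a strongly peaked logit vector yields the type "excellent", while a flat one (every probability 0.2) falls below the threshold and is rejected.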
Notably, this embodiment also introduces the ECG data corresponding to each frame into the evaluation-type prediction. This accurately reflects the examinee's mental state while performing key actions in the real-world scene, and thereby the examinee's changing state of mind when handling complex situations such as emergencies, assisting the evaluation of the operational effect of the key actions and further improving the accuracy of the all-round, objective evaluation of the examinee.
Under the above embodiments, mixed reality technology introduces the pose data and ECG data into the evaluation-type prediction, enabling a more accurate evaluation of the standardization, logic, proficiency, and order of the examinee's key actions in the real-world scene, and of the reasonableness of the examinee's handling of complex situations such as emergencies, which facilitates an accurate, all-round, objective evaluation of the examinee.
FIG. 9 is a schematic diagram of a concrete implementation of a medical training assessment evaluation method in an application scenario. This scenario provides an evaluation framework for implementing the method, comprising: a virtual reality scene content evaluation module 801, a computer vision evaluation module 802, and an evaluation report module 803.
Specifically, the virtual reality scene content evaluation module 801 constructs the virtual scene and detects the examinee's trigger operations within it, so that first evaluation data is obtained in response to those trigger operations and fed into the composite performance evaluation model.
During the examinee's simulated operation process on surgical instruments in the virtual scene, the computer vision evaluation module 802 receives the operation video captured by the image capture device along with the pose data and ECG data collected by the mixed reality device; it then invokes the visual evaluation network model pre-trained on labeled training samples to evaluate the examinee's key actions in the real-world scene, obtaining second evaluation data that is fed into the composite performance evaluation model, so that the model can comprehensively evaluate the examinee's current medical training assessment from the first and second evaluation data.
The evaluation report module 803 outputs the examinee's comprehensive evaluation result produced by the composite performance evaluation model, including but not limited to: the examinee's composite score, the score on the assessment content, key point position information, the evaluation type of key actions in the real-world scene, and learning-effect comparison curves.
In this application scenario, the new simulation-based teaching mode realized through medical training assessment combining virtual simulation and mixed reality collects multi-dimensional data from real human bodies and, through simulation modeling, constructs digital human-body or target-tissue models, achieving low-cost, repeatable, quantifiably assessable digital teaching. It allows examinees to learn and grow in an environment that permits repeated practice, effectively shortens their clinical-practice learning curve, and fully safeguards medical safety during learning, avoiding high-risk problems such as harm to patients. Moreover, compared with the traditional medical education system centered on animal specimens and teaching aids, constructing virtual scenes provides examinees with case-rich, scientifically standardized learning materials, effectively alleviating the shortage of teaching resources.
The following are apparatus embodiments of the present application, which can be used to execute the medical training assessment evaluation method of the present application. For details not disclosed in the apparatus embodiments, refer to the method embodiments of the medical training assessment evaluation method of the present application.
Referring to FIG. 10, an embodiment of the present application provides a medical training assessment evaluation apparatus 900, comprising but not limited to: a virtual scene display module 910, a virtual scene evaluation module 930, a computer vision evaluation module 950, and a comprehensive evaluation module 970.
The virtual scene display module 910 is configured to display a virtual scene constructed for medical training assessment.
The virtual scene evaluation module 930 is configured to obtain first evaluation data in response to the examinee's trigger operation in the virtual scene. Trigger operations include simulated operations the examinee performs on surgical instruments in the virtual scene and answer operations the examinee performs on the assessment content in the virtual scene.
The computer vision evaluation module 950 is configured, for the examinee's simulated operation process on surgical instruments in the virtual scene, to feed the corresponding operation video captured by the image capture device and the corresponding sensor data collected by the mixed reality device into the visual evaluation network model, evaluate the examinee's key actions in the real-world scene, and obtain second evaluation data.
The comprehensive evaluation module 970 is configured to comprehensively evaluate the examinee's current medical training assessment based on the first and second evaluation data, and to output the examinee's comprehensive evaluation result.
It should be noted that when the medical training assessment evaluation apparatus provided by the above embodiment performs its evaluation, the division into the above functional modules is merely illustrative. In practice, the above functions may be assigned to different functional modules as needed; that is, the internal structure of the apparatus may be divided into different functional modules to accomplish all or part of the functions described above.
Moreover, the medical training assessment evaluation apparatus provided by the above embodiment and the embodiments of the medical training assessment evaluation method belong to the same conception; the specific manner in which each module operates has been described in detail in the method embodiments and is not repeated here.
FIG. 11 is a structural diagram of an electronic device according to an exemplary embodiment. It should be noted that this electronic device is merely an example adapted to the present application and cannot be taken to limit the scope of use of the present application in any way; nor can the present application be interpreted as depending on, or requiring, one or more components of the exemplary electronic device 2000 shown in FIG. 11.
The hardware structure of the electronic device 2000 may vary considerably with configuration or performance. As shown in FIG. 11, the electronic device 2000 includes: a power supply 210, interfaces 230, at least one memory 250, and at least one central processing unit (CPU) 270.
Specifically, the power supply 210 provides the operating voltage for the hardware devices on the electronic device 2000.
The interfaces 230 include at least one wired or wireless network interface 231 for interacting with external devices — for example, the interaction between the evaluation device 110 and the image capture device 130 in the implementation environment shown in FIG. 1.
Of course, in other examples adapted to the present application, the interfaces 230 may further include at least one serial-parallel conversion interface 233, at least one input/output interface 235, at least one USB interface 237, and so on, as shown in FIG. 11; this does not constitute a specific limitation.
The memory 250, as a carrier for resource storage, may be read-only memory, random-access memory, a magnetic disk, an optical disc, or the like. The resources stored on it include an operating system 251, applications 253, and data 255, and the storage may be transient or persistent.
The operating system 251 manages and controls the hardware devices and applications 253 on the electronic device 2000, enabling the CPU 270 to compute on and process the massive data 255 in the memory 250; it may be Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
An application 253 is a computer program that performs at least one specific task on top of the operating system 251. It may include at least one module (not shown in FIG. 11), each of which may contain computer programs for the electronic device 2000. For example, the medical training assessment evaluation apparatus can be regarded as an application 253 deployed on the electronic device 2000.
The data 255 may be photos and images stored on disk, or sensor data and the like, stored in the memory 250.
The CPU 270 may include one or more processors and is arranged to communicate with the memory 250 via at least one communication bus, reading the computer programs stored in the memory 250 so as to compute on and process the massive data 255 there. For example, the medical training assessment evaluation method is accomplished by the CPU 270 reading a series of computer programs stored in the memory 250.
Furthermore, the present application can equally be realized by hardware circuits, or by hardware circuits combined with software; its realization is therefore not limited to any specific hardware circuit, software, or combination of the two.
Referring to FIG. 12, an embodiment of the present application provides an electronic device 4000, which may be a desktop computer, a laptop, or another electronic device.
In FIG. 12, the electronic device 4000 includes at least one processor 4001, at least one communication bus 4002, and at least one memory 4003.
The processor 4001 is connected to the memory 4003, for example via the communication bus 4002. Optionally, the electronic device 4000 may further include a transceiver 4004, which can be used for data interaction between this electronic device and other electronic devices, such as sending and/or receiving data. It should be noted that, in practice, the transceiver 4004 is not limited to one, and the structure of the electronic device 4000 does not limit the embodiments of the present application.
The processor 4001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or another programmable logic device, transistor logic device, hardware component, or any combination thereof. It can implement or execute the various exemplary logic blocks, modules, and circuits described in connection with this disclosure. The processor 4001 may also be a combination implementing computing functions, e.g., one or more microprocessors, or a DSP combined with a microprocessor.
The communication bus 4002 may include a path for transferring information between the above components. It may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like, and may be divided into an address bus, a data bus, a control bus, etc. For ease of representation, only one thick line is shown in FIG. 12, but this does not mean there is only one bus or one type of bus.
The memory 4003 may be ROM (Read-Only Memory) or another type of static storage device capable of storing static information and instructions, or RAM (Random Access Memory) or another type of dynamic storage device capable of storing information and instructions; it may also be EEPROM (Electrically Erasable Programmable Read-Only Memory), CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, without being limited thereto.
The memory 4003 stores a computer program, which the processor 4001 reads from the memory 4003 via the communication bus 4002.
When executed by the processor 4001, the computer program implements the medical training assessment evaluation method of the above embodiments.
Furthermore, an embodiment of the present application provides a storage medium storing a computer program that, when executed by a processor, implements the medical training assessment evaluation method of the above embodiments.
An embodiment of the present application provides a computer program product comprising a computer program stored in a storage medium. A processor of a computer device reads the computer program from the storage medium and executes it, causing the computer device to perform the medical training assessment evaluation method of the above embodiments.
Compared with the related art: on the one hand, the new multimodal intelligent assessment scheme combining virtual simulation and mixed reality enables a more accurate, all-round, objective evaluation of the examinee, greatly lowers the requirements placed on examiners, and helps improve the efficiency and accuracy of medical training assessment evaluation; on the other hand, constructing a virtual scene that closely resembles a real assessment venue effectively reduces the venue requirements of the medical training assessment process.
It should be understood that although the steps in the flowcharts of the drawings are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict restriction on the order of their execution, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts may comprise multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps, or with at least part of the sub-steps or stages of other steps.
The above are only some embodiments of the present application. It should be pointed out that those of ordinary skill in the art may make further improvements and refinements without departing from the principles of the present application, and such improvements and refinements shall also be regarded as falling within the scope of protection of the present application.
Claims (10)
- A medical training assessment evaluation method, characterized in that the method comprises: displaying a virtual scene constructed for medical training assessment; in response to a trigger operation by an examinee in the virtual scene, obtaining first evaluation data, the trigger operation comprising a simulated operation performed by the examinee on surgical instruments in the virtual scene and an answer operation performed by the examinee on assessment content in the virtual scene; for the examinee's simulated operation process on surgical instruments in the virtual scene, feeding the corresponding operation video captured by an image capture device and the corresponding sensor data collected by a mixed reality device into a visual evaluation network model to evaluate the examinee's key actions in a real-world scene and obtain second evaluation data; and, based on the first evaluation data and the second evaluation data, comprehensively evaluating the examinee's current medical training assessment and outputting the examinee's comprehensive evaluation result.
- The method of claim 1, characterized in that obtaining first evaluation data in response to a trigger operation by the examinee in the virtual scene comprises: if the trigger operation is the simulated operation, obtaining, in response to the examinee's simulated operation in the virtual scene, the examinee's operation execution data, the operation execution data indicating the order in which the examinee performed operations during the corresponding simulated operation process; and comparing the order indicated by the operation execution data with a standard execution order of the simulated operation process to obtain an execution score for the examinee's operation order in the corresponding simulated operation process, and adding it to the first evaluation data.
- The method of claim 1, characterized in that obtaining first evaluation data in response to a trigger operation by the examinee in the virtual scene comprises: if the trigger operation is the simulated operation, determining, in response to the examinee's simulated operation in the virtual scene, the duration of the examinee's simulated operation process; and obtaining, from the difference between the duration of the examinee's simulated operation process and a set threshold, a duration score for the examinee's corresponding simulated operation process, and adding it to the first evaluation data.
- The method of claim 1, characterized in that obtaining first evaluation data in response to a trigger operation by the examinee in the virtual scene comprises: if the trigger operation is the answer operation, determining, in response to the examinee's answer operation in the virtual scene, the examinee's answer data for the assessment content; and comparing the answer data with a standard answer for the assessment content to obtain the examinee's score on the assessment content, and adding it to the first evaluation data.
- The method of claim 1, characterized in that the sensor data comprises pose data and electrocardiogram data, and that feeding, for the examinee's simulated operation process on surgical instruments in the virtual scene, the corresponding operation video captured by the image capture device and the corresponding sensor data collected by the mixed reality device into the visual evaluation network model to evaluate the examinee's key actions in the real-world scene and obtain second evaluation data comprises: invoking the visual evaluation network model and identifying, from the frames of the operation video and the pose data corresponding to each frame, key points of the examinee while performing simulated operations on surgical instruments in the virtual scene, the key points indicating the examinee's key actions in the real-world scene; and predicting, based on the key points identified from the frames and the electrocardiogram data corresponding to each frame, an evaluation type of the examinee's key actions in the real-world scene to obtain the second evaluation data.
- The method of any one of claims 1 to 5, characterized in that the visual evaluation network model is a trained machine learning model capable of evaluating the examinee's key actions in the real-world scene.
- The method of claim 6, characterized in that the training process of the visual evaluation network model comprises: constructing a training set based on simulated operation processes performed by a training subject on surgical instruments in the virtual scene, the training set comprising labeled training samples, a label indicating an evaluation type of the training subject's key actions in the real-world scene; feeding the training samples into the machine learning model to predict the evaluation type of the training subject's key actions in the real-world scene and obtain prediction data for the training samples, the prediction data indicating the predicted evaluation type of the training subject's key actions in the real-world scene; computing a loss value from the difference between the evaluation type indicated by the label and the predicted evaluation type; and, if the loss value does not satisfy a model convergence condition, updating the parameters of the machine learning model and continuing training; otherwise, obtaining the visual evaluation network model.
- A medical training assessment evaluation apparatus, characterized in that the apparatus comprises: a virtual scene display module, configured to display a virtual scene constructed for medical training assessment; a virtual scene evaluation module, configured to obtain first evaluation data in response to a trigger operation by an examinee in the virtual scene, the trigger operation comprising a simulated operation performed by the examinee on surgical instruments in the virtual scene and an answer operation performed by the examinee on assessment content in the virtual scene; a computer vision evaluation module, configured, for the examinee's simulated operation process on surgical instruments in the virtual scene, to feed the corresponding operation video captured by an image capture device and the corresponding sensor data collected by a mixed reality device into a visual evaluation network model to evaluate the examinee's simulated operation process and obtain second evaluation data; and a comprehensive evaluation module, configured to comprehensively evaluate the examinee's current medical training assessment based on the first evaluation data and the second evaluation data, and to output the examinee's comprehensive evaluation result.
- An electronic device, characterized by comprising: at least one processor, at least one memory, and at least one communication bus, wherein the memory stores a computer program and the processor reads the computer program from the memory via the communication bus; and the computer program, when executed by the processor, implements the medical training assessment evaluation method of any one of claims 1 to 7.
- A storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, implements the medical training assessment evaluation method of any one of claims 1 to 7.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211378499.0 | 2022-11-04 | ||
CN202211378499.0A CN115713256A (zh) | 2022-11-04 | 2022-11-04 | Medical training assessment evaluation method and apparatus, electronic device, and storage medium
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024092955A1 true WO2024092955A1 (zh) | 2024-05-10 |
Family
ID=85232303
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/137057 WO2024092955A1 (zh) | 2022-11-04 | 2022-12-06 | 医学培训考核评价方法、装置、电子设备及存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN115713256A (zh) |
WO (1) | WO2024092955A1 (zh) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116052863B (zh) * | 2023-04-03 | 2023-07-04 | 云南医无界医疗网络科技有限公司 | Intelligent management system based on a medical-community big data model |
CN117437095B (zh) * | 2023-10-08 | 2024-06-04 | 厦门农芯数字科技有限公司 | Skill assessment method, system, device, and storage medium based on virtual pig farming |
CN118035284B (zh) * | 2023-12-28 | 2024-10-11 | 南京竹石信息科技有限公司 | Intelligent evaluation method for rendering four-dimensional content based on medical data content |
CN117745496B (zh) * | 2024-02-19 | 2024-05-31 | 成都运达科技股份有限公司 | Intelligent assessment method, system, and storage medium based on mixed reality technology |
CN118096460B (zh) * | 2024-04-17 | 2024-07-23 | 湖南晟医智能科技有限公司 | Unproctored remote medical assessment supervision and scoring method and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101996507A (zh) * | 2010-11-15 | 2011-03-30 | 罗伟 | Method for constructing a surgical virtual operation teaching and training system |
US20180293802A1 (en) * | 2017-04-07 | 2018-10-11 | Unveil, LLC | Systems and methods for mixed reality medical training |
CN109658772A (zh) * | 2019-02-11 | 2019-04-19 | 三峡大学 | Virtual-reality-based surgical training and assessment method |
CN115035767A (zh) * | 2022-06-27 | 2022-09-09 | 西安交通大学 | Spine surgery teaching and training system based on AR and an anthropomorphic model |
2022
- 2022-11-04: CN — application CN202211378499.0A (published as CN115713256A), status: active, pending
- 2022-12-06: WO — application PCT/CN2022/137057 (published as WO2024092955A1), status unknown
Also Published As
Publication number | Publication date |
---|---|
CN115713256A (zh) | 2023-02-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2024092955A1 (zh) | Medical training assessment evaluation method and apparatus, electronic device, and storage medium | |
Miller et al. | The Auto-eFACE: Machine learning–enhanced program yields automated facial palsy assessment tool | |
US9978288B2 (en) | Communication and skills training using interactive virtual humans | |
Oropesa et al. | Relevance of motion-related assessment metrics in laparoscopic surgery | |
Melero et al. | Upbeat: Augmented Reality‐Guided Dancing for Prosthetic Rehabilitation of Upper Limb Amputees | |
Hah et al. | How clinicians perceive artificial intelligence–assisted technologies in diagnostic decision making: Mixed methods approach | |
Sharma et al. | Sensing technologies and child–computer interaction: Opportunities, challenges and ethical considerations | |
Fung et al. | Determining predictors of sepsis at triage among children under 5 years of age in resource-limited settings: a modified Delphi process | |
US20210312833A1 (en) | Virtual reality platform for training medical personnel to diagnose patients | |
TW201801055A (zh) | Medical diagnosis and treatment education system and method | |
Xu et al. | A novel facial emotion recognition method for stress inference of facial nerve paralysis patients | |
Cao et al. | Intelligent physical education teaching tracking system based on multimedia data analysis and artificial intelligence | |
Loukas et al. | Surgical performance analysis and classification based on video annotation of laparoscopic tasks | |
Zhang et al. | Human-centered intelligent healthcare: explore how to apply AI to assess cognitive health | |
Moon et al. | Rich representations for analyzing learning trajectories: Systematic review on sequential data analytics in game-based learning research | |
CN117037277A (zh) | Assessment method, apparatus, system, and storage medium for AED first-aid training students | |
Mohamadipanah et al. | Sensors and psychomotor metrics: a unique opportunity to close the gap on surgical processes and outcomes | |
Georgiadis et al. | Bolstering stealth assessment in serious games | |
Xiao et al. | Automated assessment of neonatal endotracheal intubation measured by a virtual reality simulation system | |
Fotopoulos et al. | Gamifying rehabilitation: MILORD platform as an upper limb motion rehabilitation service | |
Koryahin et al. | Didactic opportunities of information-communication Technologies in the Control of physical education | |
CN115116087A (zh) | Action assessment method, system, storage medium, and electronic device | |
Hosseini et al. | Teaching Clinical Decision-Making Skills to Undergraduate Nursing Students via Web-based Virtual Patients during the COVID-19 Pandemic: A New Approach to The CyberPatient TM Simulator. | |
Galuret et al. | Gaze behavior is related to objective technical skills assessment during virtual reality simulator-based surgical training: a proof of concept | |
Guo et al. | [Retracted] Scene Construction and Application of Panoramic Virtual Simulation in Interactive Dance Teaching Based on Artificial Intelligence Technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 22964250 Country of ref document: EP Kind code of ref document: A1 |