CN111028949A - Medical image examination training system and method based on Internet of things - Google Patents

Medical image examination training system and method based on Internet of things

Info

Publication number
CN111028949A
Authority
CN
China
Prior art keywords
training
medical image
simulation training
index
image examination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911314349.1A
Other languages
Chinese (zh)
Inventor
徐梅梅
黄海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Vocational College of Medicine
Original Assignee
Jiangsu Vocational College of Medicine
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Vocational College of Medicine
Priority to CN201911314349.1A
Publication of CN111028949A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/50: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00: ICT specially adapted for the handling or processing of medical images
    • G16H30/20: ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Biomedical Technology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention belongs to the technical field of simulators for teaching or training and discloses a medical image examination training system and method based on the Internet of things. A user learns the relevant knowledge of medical image examination by using multimedia technology and watches a teaching video; the user then carries out the corresponding simulation training of medical image examination according to the teaching video, while a plurality of cameras collect the operation data of the user's simulation training in real time; the collected operation data of the user's simulation training are compared with the standard teaching data of the medical examination stored in the database in advance, whether the user's operation is wrong is judged, and an actual operation score is given. The invention combines theory and practice: the medical image examination simulation platform fully mobilizes perception, movement and thinking, greatly improves learning efficiency, and makes the training of medical image examination more intuitive; the simulation training is also safer and causes no serious consequences.

Description

Medical image examination training system and method based on Internet of things
Technical Field
The invention belongs to the technical field of simulators for teaching or training, and particularly relates to a medical image examination training system and method based on the Internet of things.
Background
At present, the state of the art commonly used in the industry is as follows:
existing courses and training in medical image examination offer few practical sessions because of the following limitations, so practical operation ability remains low:
1) Teaching resources are scarce. The main practice venue for medical image examination is the hospital, and the subject is the human body. With people's rising expectations for health care and the increasingly strained doctor-patient relationship, hands-on practice on patients is greatly restricted, and it is difficult to provide practical operation opportunities for the corresponding courses.
2) Knowledge content is relatively abstract and cannot be intuitively felt.
3) With the development of medical instruments, the requirements for operating them are higher, but expensive imaging equipment, harsh usage conditions and high use and maintenance costs make it difficult to obtain hands-on operation and an understanding of the basic principles. Meanwhile, existing systems cannot evaluate the user's training effect objectively and comprehensively enough, which reduces training accuracy; and when calculating the picture data used for the matching degree, existing systems cannot improve the operating efficiency of the medical image examination training system, which reduces the training effect for the user.
In summary, the problems of the prior art are as follows:
(1) existing medical image examination learning and training is limited and, being mainly theoretical, leaves practical operation ability poor, while medical imaging examination places high demands on practical operation ability;
(2) the corresponding theoretical knowledge of medical image examination is abstract, and there is currently no more vivid way to teach it;
(3) existing simulation training is aimed only at CT machines and lacks universality;
(4) existing systems cannot evaluate the user's training effect objectively and comprehensively enough, which reduces training accuracy;
(5) when calculating the picture data used for the matching degree, existing systems cannot improve the operating efficiency of the medical image examination training system, which reduces the training effect for the user.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides a medical image examination training system and method based on the Internet of things.
The invention is realized as follows: a medical image examination training method based on the Internet of things specifically comprises the following steps:
step one, a user learns the relevant knowledge of medical image examination by using a multimedia technology and watches a teaching video;
step two, the user carries out simulation training of medical image examination on a simulation training platform, while the system uses a plurality of cameras to collect the operation data of the user's simulation training in real time; during the simulation training of medical image examination on the simulation training platform, a simulation training data processing module integrated with the platform gives the ranking vector of each training index of the simulation training data, and the subjective weight is calculated by an ordered binary comparison method;
the objective weight is calculated by the coefficient of variation method from an initial decision matrix composed of the values of each training index; the comprehensive weight is obtained by vector similarity theory; the initial decision matrix in real-number form is converted into a decision matrix, and the weight vector is incorporated into it; different preference functions are selected according to the characteristics of each attribute of the decision matrix, all attributes being of benefit type, and the magnitude of the preference function value represents the strength of the dominance relation between schemes; the attribute values fj(a) and fj(b) of attribute j for simulation training data a and simulation training data b are respectively:
Figure BDA0002325419800000021
calculate the value of the preference function:
Figure BDA0002325419800000031
calculating a preference indicator Π (a, b):
Π(a, b) = Σj=1..n wj·Pj(a, b), where wj is the comprehensive weight of index j and Pj is the preference function of index j;
calculating Π(a, b) for any two simulation training data, and calculating the inflow, outflow and net flow indexes from Π(a, b);
outflow:
Φ+(a) = Σb∈A Π(a, b);
inflow:
Φ-(a) = Σb∈A Π(b, a);
net flow:
Φ(a) = Φ+(a) - Φ-(a) = (u(a), g(a));
where u(a) represents the approval value of the scheme and g(a) represents the objection value of the scheme;
after the simulation training data priority index S(a) is calculated, the optimal simulation training data are output;
S(a) = u(a) - g(a);
step three, comparing and analyzing the collected operation data of the user's simulation training with the standard teaching data of the medical examination stored in the database in advance, judging whether the user's operation is wrong, and giving an actual operation score; for this comparative analysis, the deduction points, the corresponding knowledge points and the teaching videos are marked in the system in advance; the operation data of the user's simulation training collected by the cameras are compared with the standard teaching data stored in the database in advance, and points are deducted according to the weight ratio; when a deduction point appears in the user's operation data, the corresponding score is deducted, and the remaining score, the deduction points, the corresponding correct operation and the related knowledge points are output;
step four, storing relevant knowledge and teaching videos of medical image examination by using a database;
step five, outputting the knowledge of the medical image examination, the teaching video, the collected user simulation training images and the practice score by using the display.
Further, the subjective weight is obtained by an ordered binary comparison method, and the specific method is as follows:
step 1, determining the training objects and the expert set: X is the overall set of objects under investigation, denoted X = {x1, x2, ..., xN}; the set of experts involved in determining the index weights is P = {p1, p2, ..., pL};
step 2, ranking the indexes by the set-value iteration method: the expert weights are {λ1, λ2, ..., λL}; the indexes in the index set are sorted by importance, and the index order selected by expert k (1 ≤ k ≤ L) is Xk = (x3, x5, x1, xN, ..., xN-1), where x3 in the first position of Xk means that expert k considers x3 the most important; each index is given a score according to its position in Xk: in Xk, x3 scores N, x5 scores N-1, ..., and xN-1 scores 1;
let μi,k (1 ≤ i ≤ N, 1 ≤ k ≤ L) be the score obtained by index i from expert k, and let
Figure BDA0002325419800000041
be the composite score of index i, 1 ≤ i ≤ N; according to μi, the training indexes are re-ordered from largest to smallest,
Figure BDA0002325419800000042
step 3, comparing the adjacent training indexes to obtain a comparison matrix;
by comparing the importance of the preceding index relative to the following index among adjacent indexes, a judgment interval is given; the endpoints of the interval are the values rk of the relative importance between the two adjacent indexes, each rk value corresponding to a degree of importance;
step 4, converting the interval into a point value through the following formula:
Figure BDA0002325419800000043
where r'ij is the lower bound and r''ij the upper bound in the training matrix given by expert i for index j, j = 1, 2, ..., n-1;
step 5, determining the weight of the training index: because N indexes are adjacently compared, N-1 comparison values are obtained:
where r1 denotes the importance of the first re-ordered index relative to the second, i.e. the ratio of the absolute importance of the first index to that of the second:
Figure BDA0002325419800000051
In the above formulas,
Figure BDA0002325419800000052
Figure BDA0002325419800000053
Figure BDA0002325419800000054
the comprehensive weight of (a) is:
Figure BDA0002325419800000055
the weights of the other indices are:
Figure BDA0002325419800000056
further, the specific method for obtaining the objective weight by the variation coefficient method comprises the following steps:
the training index system has m training indexes, n training objects are subjected to system evaluation and data sampling, and an original data training matrix is expressed as a matrix X:
X = (xij)n×m, where xij is the value of the j-th training index sampled for the i-th training object;
further, the specific method for obtaining the objective weight by the variation coefficient method further comprises the following steps: calculating the mean value and the standard deviation of each index according to the actual value of each classified object index:
wherein the mean and standard deviation of the jth index are respectively:
x̄j = (1/n)·Σi=1..n xij;
σj = [ (1/n)·Σi=1..n (xij - x̄j)² ]^(1/2);
where j = 1, 2, ..., m;
further, the specific method for obtaining the objective weight by the variation coefficient method further includes calculating the variation coefficient of each index:
Vj = σj / x̄j,  j = 1, 2, ..., m;
further, the specific method for obtaining the objective weight by the variation coefficient method further includes determining the weight of each index:
firstly, the index variation coefficient is normalized,
νj = Vj / Σk=1..m Vk;
and then the weight set of the indexes V = {ν1, ν2, ..., νM} is obtained, where
Figure BDA0002325419800000063
Another object of the present invention is to provide a terminal carrying a controller for implementing a medical image examination training method based on the internet of things.
Another object of the present invention is to provide a computer-readable storage medium, which includes instructions that, when executed on a computer, cause the computer to execute the method for training medical image examination based on internet of things.
Another objective of the present invention is to provide a medical image examination training system based on the internet of things, which specifically includes:
a teaching module: connected with the main control module and used for presenting the relevant knowledge and teaching videos of medical image examination by means of multimedia technology;
the main control module: connected with the teaching module, the camera module, the simulation training platform, the scoring module, the database and the display module; a single-chip microcomputer is used to control the normal operation of each module;
a simulation training platform: connected with the main control module and used for carrying out simulation training of medical image examination based on computer three-dimensional reconstruction technology;
a camera module: connected with the main control module and used for collecting operation data during the user's simulation training by means of a plurality of cameras;
a scoring module: connected with the main control module and used for comparing the user's simulation training operation collected by the camera module with the teaching videos stored in the database in advance, judging whether the user's operation is wrong, and giving an actual operation score;
a database: connected with the main control module and used for storing the relevant knowledge and teaching videos of medical image examination;
a display module: connected with the main control module and used for outputting the medical image examination knowledge, teaching videos, collected user simulation training images and practice scores on the display.
Further, the simulation training platform specifically includes: a medical image examination main control console, actual operation instruments, supporting facilities, simulation training software and a display screen;
the simulation training platform adopts Internet of things technology and computer three-dimensional reconstruction technology to present the main control console, actual operation instruments, related supporting facilities and simulation training software of each medical image examination instrument on the display screen, and transmits the simulation training results to the main control module in real time.
In summary, the advantages and positive effects of the invention are:
The invention combines theory and practice: the medical image examination simulation platform fully mobilizes perception, movement and thinking, and greatly improves learning efficiency; at the same time, the simulation operation software and the operation instruments reproduce a real medical image examination instrument, so the training of medical image examination can be carried out more intuitively; the simulation training is safer and causes no serious consequences. The user's operation can also be judged against the standard teaching video, so errors can be pointed out more intuitively, the user can be supervised and corrected in time, and the learning of the corresponding knowledge points is strengthened, which greatly improves the practicability of the training with an obvious effect. The invention scores on the basis of the system, which avoids the subjectivity, errors and omissions of manual scoring. The medical image examination training system and method provided by the invention can be adapted to examinations performed with various medical imaging devices and are therefore universal. An image enhancement algorithm based on wavelet transformation is used to enhance the images, which effectively suppresses the noise in the images, retains most of the edge information, improves the matching degree, and provides accurate scoring for the user. The method for evaluating the user's training effect is objective and comprehensive, which improves accuracy. In the process in which the database stores the picture data used for calculating the matching degree, clustering the pictures improves the operating efficiency of the medical image examination training system and improves the effect of training the user. The camera module of the invention uses a plurality of cameras to collect images of the user's simulation training and adopts an image enhancement algorithm based on wavelet transformation to enhance the collected images, which helps to improve the matching degree between images.
The user performs simulation training of medical image examination on a simulation training platform; the system utilizes a plurality of cameras to collect operation data of user simulation training in real time; in the simulation training of medical image examination on a simulation training platform, a simulation training data processing module integrated with the simulation training platform gives out a sequencing vector of each training index of simulation training data, and the subjective weight is calculated by using an ordered binary comparison method;
calculating the objective weight by the coefficient of variation method from an initial decision matrix composed of the values of each training index; obtaining the comprehensive weight by vector similarity theory; converting the initial decision matrix in real-number form into a decision matrix and incorporating the weight vector into it; and, after the simulation training data priority index S(a) is calculated, outputting the optimal simulation training data.
Drawings
Fig. 1 is a flowchart of a medical image examination training method based on the internet of things according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a medical image examination training system based on the internet of things according to an embodiment of the present invention.
In the figure: 1. a teaching module; 2. a main control module; 3. a simulation training platform; 4. a camera module; 5. a scoring module; 6. a database; 7. and a display module.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The following detailed description of the principles of the invention is provided in connection with the accompanying drawings.
As shown in fig. 1, the medical image examination training method based on the internet of things provided by the embodiment of the present invention specifically includes:
s101: a user learns the relevant knowledge of medical image examination by using a multimedia technology and watches a teaching video;
s102: the user carries out simulation training of corresponding medical image examination according to the watched teaching video; simultaneously, a plurality of cameras are used for collecting operation data of user simulation training in real time;
s103: comparing and analyzing the collected operation data of the user simulation training with standard teaching data of the medical examination stored in a database in advance, judging whether the user operation is wrong or not, and giving an actual operation score;
s104: the system stores the relevant knowledge and teaching videos of the medical image examination and uses the display to output the medical image examination knowledge, the teaching videos, the collected user simulation training images, the practice scores and other information; a minimal sketch of this overall flow is given below.
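For clarity, the following minimal sketch shows how the S101-S104 flow could be orchestrated in software; the module objects and their method names are hypothetical stand-ins introduced for illustration only and are not defined in the original disclosure.

```python
# Hypothetical orchestration of the S101-S104 flow. The dependency objects
# (teaching, platform, cameras, scoring, database, display) and their methods
# are assumed interfaces, not part of the original disclosure.

class TrainingSession:
    def __init__(self, teaching, platform, cameras, scoring, database, display):
        self.teaching = teaching
        self.platform = platform
        self.cameras = cameras
        self.scoring = scoring
        self.database = database
        self.display = display

    def run(self, topic):
        video = self.database.get_teaching_video(topic)              # S101: study material
        self.display.play(video)
        frames = self.cameras.record(self.platform.simulate(topic))  # S102: practice + capture
        reference = self.database.get_standard_data(topic)
        score, mistakes = self.scoring.compare(frames, reference)    # S103: compare and score
        self.display.show(score=score, mistakes=mistakes)            # S104: feedback
        return score
```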
In step S102, the user performs simulation training of medical image examination on the simulation training platform, while the system uses a plurality of cameras to collect the operation data of the user's simulation training in real time. During the simulation training on the platform, a simulation training data processing module integrated with the platform gives the ranking vector of each training index of the simulation training data, and the subjective weight is calculated by an ordered binary comparison method; the objective weight is calculated by the coefficient of variation method from an initial decision matrix composed of the values of each training index; the comprehensive weight is obtained by vector similarity theory; the initial decision matrix in real-number form is converted into a decision matrix, and the weight vector is incorporated into it; different preference functions are selected according to the characteristics of each attribute of the decision matrix, all attributes being of benefit type, and the magnitude of the preference function value represents the strength of the dominance relation between schemes; the attribute values fj(a) and fj(b) of attribute j for simulation training data a and simulation training data b are respectively:
Figure BDA0002325419800000091
calculate the value of the preference function:
Figure BDA0002325419800000092
calculating a preference indicator Π (a, b):
Π(a, b) = Σj=1..n wj·Pj(a, b), where wj is the comprehensive weight of index j and Pj is the preference function of index j;
calculating Π(a, b) for any two simulation training data, and calculating the inflow, outflow and net flow indexes from Π(a, b);
outflow:
Φ+(a) = Σb∈A Π(a, b);
inflow:
Φ-(a) = Σb∈A Π(b, a);
net flow:
Φ(a) = Φ+(a) - Φ-(a) = (u(a), g(a));
where u(a) represents the approval value of the scheme and g(a) represents the objection value of the scheme;
after the simulation training data priority index S(a) is calculated, the optimal simulation training data are output;
S(a) = u(a) - g(a);
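As an illustration, the flow computation above can be sketched as follows. The patent's preference function and normalization are given only as images, so the commonly used form Pj(a, b) = max(0, fj(a) - fj(b)) and averaging over the n - 1 other alternatives are assumptions made here; only the overall structure (weighted preference indicator, outflow, inflow, net flow) follows the text above.

```python
import numpy as np

def rank_training_data(F, w):
    """F: (n_samples, n_indexes) matrix of benefit-type index values.
       w: comprehensive weights, one per index (normalized internally)."""
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    n = F.shape[0]
    Pi = np.zeros((n, n))                      # preference indicator Π(a, b)
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            P = np.maximum(0.0, F[a] - F[b])   # assumed per-index preference function
            Pi[a, b] = np.dot(w, P)
    outflow = Pi.sum(axis=1) / (n - 1)         # Φ+(a), averaged (assumed normalization)
    inflow = Pi.sum(axis=0) / (n - 1)          # Φ-(a)
    net = outflow - inflow                     # Φ(a) = Φ+(a) - Φ-(a); priority index S(a)
    return net, int(np.argmax(net))            # index of the optimal training data

F = np.array([[0.8, 0.6, 0.9],
              [0.7, 0.9, 0.6],
              [0.9, 0.5, 0.7]])
net, best = rank_training_data(F, w=[0.5, 0.3, 0.2])
print(net, best)
```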
the subjective weight is obtained by an ordered binary comparison method, and the specific method comprises the following steps:
step 1, determining the training objects and the expert set: X is the overall set of objects under investigation, denoted X = {x1, x2, ..., xN}; the set of experts involved in determining the index weights is P = {p1, p2, ..., pL};
step 2, ranking the indexes by the set-value iteration method: the expert weights are {λ1, λ2, ..., λL}; the indexes in the index set are sorted by importance, and the index order selected by expert k (1 ≤ k ≤ L) is Xk = (x3, x5, x1, xN, ..., xN-1), where x3 in the first position of Xk means that expert k considers x3 the most important; each index is given a score according to its position in Xk: in Xk, x3 scores N, x5 scores N-1, ..., and xN-1 scores 1;
let μi,k (1 ≤ i ≤ N, 1 ≤ k ≤ L) be the score obtained by index i from expert k, and let
Figure BDA0002325419800000103
be the composite score of index i, 1 ≤ i ≤ N; according to μi, the training indexes are re-ordered from largest to smallest,
Figure BDA0002325419800000104
step 3, comparing the adjacent training indexes to obtain a comparison matrix;
by comparing the importance of the preceding index relative to the following index among adjacent indexes, a judgment interval is given; the endpoints of the interval are the values rk of the relative importance between the two adjacent indexes, each rk value corresponding to a degree of importance;
step 4, converting the interval into a point value through the following formula:
Figure BDA0002325419800000111
where r'ij is the lower bound and r''ij the upper bound in the training matrix given by expert i for index j, j = 1, 2, ..., n-1;
step 5, determining the weight of the training index: because N indexes are adjacently compared, N-1 comparison values are obtained:
where r1 denotes the importance of the first re-ordered index relative to the second, i.e. the ratio of the absolute importance of the first index to that of the second:
Figure BDA0002325419800000112
In the above formulas,
Figure BDA0002325419800000113
Figure BDA0002325419800000114
Figure BDA0002325419800000115
the comprehensive weight of (a) is:
Figure BDA0002325419800000116
the weights of the other indices are:
Figure BDA0002325419800000117
the specific method for obtaining the objective weight by the variation coefficient method comprises the following steps:
the training index system has m training indexes, n training objects are subjected to system evaluation and data sampling, and an original data training matrix is expressed as a matrix X:
X = (xij)n×m, where xij is the value of the j-th training index sampled for the i-th training object;
the specific method for obtaining the objective weight by the variation coefficient method further comprises the following steps: calculating the mean value and the standard deviation of each index according to the actual value of each classified object index:
wherein the mean and standard deviation of the jth index are respectively:
x̄j = (1/n)·Σi=1..n xij;
σj = [ (1/n)·Σi=1..n (xij - x̄j)² ]^(1/2);
where j = 1, 2, ..., m;
the specific method for obtaining the objective weight by the variation coefficient method further includes calculating the variation coefficient of each index:
Vj = σj / x̄j,  j = 1, 2, ..., m;
the specific method for obtaining the objective weight by the variation coefficient method further comprises the following steps of determining the weight of each index:
firstly, the index variation coefficient is normalized,
νj = Vj / Σk=1..m Vk;
and then the weight set of the indexes V = {ν1, ν2, ..., νM} is obtained, where
Figure BDA0002325419800000125
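The coefficient-of-variation weighting described above can be summarized in a few lines; the population form of the standard deviation is an assumption made for this sketch.

```python
import numpy as np

def objective_weights(X):
    """X: (n_objects, m_indexes) matrix of raw training-index samples."""
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)                 # x_bar_j
    std = X.std(axis=0)                   # sigma_j (population form, ddof=0, assumed)
    V = std / mean                        # coefficient of variation of each index
    return V / V.sum()                    # normalized weight set {nu_1, ..., nu_m}

X = np.array([[80, 12, 0.9],
              [75, 15, 0.7],
              [90, 10, 0.8]])
print(objective_weights(X))
```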
In step S103, the comparing and analyzing of the user operation data and the standard teaching data provided in the embodiment of the present invention specifically includes:
(1) the teacher marks deduction points, corresponding deduction values, corresponding knowledge points and teaching videos in the system in advance;
(2) comparing and analyzing the operation data of user simulation training acquired by the camera with standard teaching data stored in a database in advance, and comparing and deducting points according to the weight ratio;
(3) when a deduction point appears in the user operation data, the corresponding score is deducted, and the remaining score, the deduction points, the corresponding correct operation and the related knowledge points are output, as sketched below.
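A minimal sketch of this deduction logic follows; the data format of a deduction point (step name, weight, penalty, knowledge point) is a hypothetical choice made for the example, not something fixed by the disclosure.

```python
def score_operation(observed_steps, deduction_points, full_score=100.0):
    """observed_steps: set of step identifiers detected in the trainee's operation.
       deduction_points: list of dicts with keys 'step', 'weight', 'penalty',
       'knowledge_point', marked in advance by the teacher (assumed format)."""
    remaining = full_score
    feedback = []
    for dp in deduction_points:
        if dp["step"] not in observed_steps:           # required step missing or wrong
            remaining -= dp["weight"] * dp["penalty"]  # deduct according to the weight ratio
            feedback.append({"missed": dp["step"],
                             "review": dp["knowledge_point"]})
    return max(remaining, 0.0), feedback

points = [{"step": "position_patient", "weight": 1.0, "penalty": 10,
           "knowledge_point": "patient positioning"},
          {"step": "set_exposure", "weight": 0.5, "penalty": 10,
           "knowledge_point": "exposure parameters"}]
print(score_operation({"position_patient"}, points))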
As shown in fig. 2, the medical image examination training system based on the internet of things provided in the embodiment of the present invention specifically includes:
the system comprises a teaching module 1, a main control module 2, a simulation training platform 3, a camera module 4, a grading module 5, a database 6 and a display module 7;
teaching module 1: connected with the main control module 2 and used for presenting the relevant knowledge and teaching videos of medical image examination by means of multimedia technology;
and (3) the main control module 2: the system is connected with a teaching module 1, a simulation training platform 3, a camera module 4, a scoring module 5, a database 6 and a display module 7; the single chip microcomputer is used for controlling each module to work normally;
simulation training platform 3: is connected with the main control module 2 and is used for carrying out the simulation training of the medical image examination based on the three-dimensional reconstruction technology of the computer;
the camera module 4: the main control module 2 is connected with the main control module and is used for collecting operation data of a user during simulation training by utilizing a plurality of cameras;
and a scoring module 5: the main control module 2 is connected with the camera module 4 and is used for comparing the user simulation training operation collected by the camera module 4 with the teaching video prestored in the database 6, judging whether the user operation is wrong or not and giving an actual operation score;
the database 6: connected with the main control module 2 and used for storing the relevant knowledge and teaching videos of the medical image examination;
the display module 7: and the main control module 2 is connected with the main control module and used for outputting the knowledge of medical image examination, teaching video, collected user simulation training images and practice scores by using a display.
The simulation training platform 3 provided by the embodiment of the invention specifically includes: a medical image examination main control console, actual operation instruments, supporting facilities, simulation training software and a display screen;
the simulation training platform 3 adopts Internet of things technology and computer three-dimensional reconstruction technology to present the main control console, actual operation instruments, related supporting facilities and simulation training software of each medical image examination instrument on the display screen, and transmits the simulation training results to the main control module 2 in real time.
The camera module 4 collects images of the user's simulation training with a plurality of cameras; in order to calculate the matching degree between images, the collected images are enhanced with an image enhancement algorithm based on wavelet transformation, which specifically comprises the following steps:
let f(x, y) be an image in the space L2(R); its wavelet transform is (Wψf)j,k(x, y), where s = 2^j denotes the scale and k the decomposition direction;
step one, applying the forward wavelet transform to the image f(x, y) to obtain (Wψf)j,k(x, y);
step two, calculating, from the image model, the threshold Tj,k of the wavelet coefficients (Wψf)j,k(x, y);
step three, setting to zero the wavelet coefficients with (Wψf)j,k(x, y) ≤ Tj,k;
step four, stretching the non-zero wavelet coefficients, i.e. computing Gj,k(x, y)·(Wψf)j,k(x, y), where Gj,k(x, y) ≧ 1 is the gain factor at position (x, y) for scale j and direction k;
step five, applying the corresponding inverse wavelet transform to the processed wavelet coefficients (Wψf)j,k(x, y) to obtain the enhanced image; a minimal code sketch of these steps follows.
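The five steps above can be sketched with the PyWavelets package (assumed to be available); a single median-based threshold and a constant gain are simplifying assumptions, whereas the disclosure derives Tj,k from an image model and allows a position-dependent gain Gj,k(x, y).

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def enhance(image, wavelet="db2", level=2, gain=1.5, thresh_scale=1.0):
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)   # step 1: forward transform
    approx, details = coeffs[0], coeffs[1:]
    new_details = []
    for (cH, cV, cD) in details:
        bands = []
        for c in (cH, cV, cD):
            T = thresh_scale * np.median(np.abs(c))     # step 2: threshold (assumed rule)
            c = np.where(np.abs(c) <= T, 0.0, c)        # step 3: zero small coefficients
            bands.append(gain * c)                      # step 4: stretch non-zero coefficients
        new_details.append(tuple(bands))
    return pywt.waverec2([approx] + new_details, wavelet)  # step 5: inverse transform

img = np.random.rand(64, 64)
print(enhance(img).shape)
```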
The method by which the scoring module 5 evaluates the user's training effect specifically comprises the following steps:
firstly, acquiring user simulation training operation image data;
step two, comparing the collected image data with teaching video data stored in advance in a database;
thirdly, the system extracts the operation key points from the pre-stored teaching images and from the images of the practical training operation, establishes comparison groups, calculates the similarity of the two groups of images, and judges whether the matching is successful according to the matching degree; if so, the trainee's operation is judged to be correct; if not, the trainee's operation is judged to be wrong;
comparing the pictures of each group by the system, and calculating the matching degree of the pictures of each group;
and step five, according to the calculated matching degree, the system gives a corresponding score for the user's practical training operation, presents the error points of the user's practical training, and provides a reference for the user, as illustrated by the sketch below.
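A minimal sketch of the matching-degree computation follows; the normalized correlation coefficient used here as the similarity measure is an assumption, since the disclosure does not name a specific measure.

```python
import numpy as np

def matching_degree(captured, reference):
    """Normalized correlation between a captured frame and a reference frame."""
    a = captured.astype(float).ravel()
    b = reference.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def judge(captured, reference, threshold=0.8):
    degree = matching_degree(captured, reference)
    return degree, degree >= threshold        # True: operation judged correct

ref = np.random.rand(32, 32)
print(judge(ref + 0.05 * np.random.rand(32, 32), ref))
```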
In the process of storing the corresponding picture data used for calculating the matching degree in the database 6, the pictures need to be clustered, and the specific process is as follows:
step one, the extracted picture is used as an original image;
secondly, extracting color features and texture features of the original picture;
thirdly, pre-classifying and merging according to the extracted features, and performing multi-objective evolutionary classification;
and step four, obtaining the optimal solution to obtain the classification result; a simple sketch of this clustering stage follows.
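The clustering stage can be sketched with simple colour-histogram and gradient-based texture features grouped by k-means (scikit-learn assumed available); k-means stands in here only as an illustrative substitute for the multi-objective evolutionary classification mentioned above.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumed available

def picture_features(img):
    """img: (H, W, 3) array with values in [0, 1]."""
    color = np.concatenate([np.histogram(img[..., c], bins=8, range=(0, 1), density=True)[0]
                            for c in range(3)])                      # colour features
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    texture = np.array([mag.mean(), mag.std()])                      # crude texture features
    return np.concatenate([color, texture])

def cluster_pictures(images, n_clusters=3):
    X = np.stack([picture_features(im) for im in images])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

imgs = [np.random.rand(32, 32, 3) for _ in range(12)]
print(cluster_pictures(imgs))
```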
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When software is used wholly or partially, the implementation may take the form of a computer program product comprising one or more computer instructions. When the computer instructions are loaded or executed on a computer, the flows or functions according to the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A medical image examination training method based on the Internet of things is characterized by specifically comprising the following steps:
step one, a user learns the relevant knowledge of medical image examination by using a multimedia technology and watches a teaching video;
step two, the user carries out simulation training of medical image examination on a simulation training platform, while the system uses a plurality of cameras to collect the operation data of the user's simulation training in real time; during the simulation training of medical image examination on the simulation training platform, a simulation training data processing module integrated with the platform gives the ranking vector of each training index of the simulation training data, and the subjective weight is calculated by an ordered binary comparison method;
the objective weight is calculated by the coefficient of variation method from an initial decision matrix composed of the values of each training index; the comprehensive weight is obtained by vector similarity theory; the initial decision matrix in real-number form is converted into a decision matrix, and the weight vector is incorporated into it; different preference functions are selected according to the characteristics of each attribute of the decision matrix, all attributes being of benefit type, and the magnitude of the preference function value represents the strength of the dominance relation between schemes; the attribute values fj(a) and fj(b) of attribute j for simulation training data a and simulation training data b are respectively:
Figure FDA0002325419790000011
calculate the value of the preference function:
Figure FDA0002325419790000012
calculating a preference indicator Π (a, b):
Π(a, b) = Σj=1..n wj·Pj(a, b), where wj is the comprehensive weight of index j and Pj is the preference function of index j;
calculating Π(a, b) for any two simulation training data, and calculating the inflow, outflow and net flow indexes from Π(a, b);
outflow:
Φ+(a) = Σb∈A Π(a, b);
inflow:
Φ-(a) = Σb∈A Π(b, a);
net flow:
Φ(a) = Φ+(a) - Φ-(a) = (u(a), g(a));
where u(a) represents the approval value of the scheme and g(a) represents the objection value of the scheme;
after the simulation training data priority index S(a) is calculated, the optimal simulation training data are output;
S(a) = u(a) - g(a);
step three, comparing and analyzing the collected operation data of the user's simulation training with the standard teaching data of the medical examination stored in the database in advance, judging whether the user's operation is wrong, and giving an actual operation score; for this comparative analysis, the deduction points, the corresponding knowledge points and the teaching videos are marked in the system in advance; the operation data of the user's simulation training collected by the cameras are compared with the standard teaching data stored in the database in advance, and points are deducted according to the weight ratio; when a deduction point appears in the user's operation data, the corresponding score is deducted, and the remaining score, the deduction points, the corresponding correct operation and the related knowledge points are output;
step four, storing relevant knowledge and teaching videos of medical image examination by using a database;
step five, outputting the knowledge of the medical image examination, the teaching video, the collected user simulation training images and the practice score by using the display.
2. The Internet of things-based medical image examination training method of claim 1,
the subjective weight is obtained by an ordered binary comparison method, and the specific method comprises the following steps:
step 1, determining the training objects and the expert set: X is the overall set of objects under investigation, denoted X = {x1, x2, ..., xN}; the set of experts involved in determining the index weights is P = {p1, p2, ..., pL};
step 2, ranking the indexes by the set-value iteration method: the expert weights are {λ1, λ2, ..., λL}; the indexes in the index set are sorted by importance, and the index order selected by expert k (1 ≤ k ≤ L) is Xk = (x3, x5, x1, xN, ..., xN-1), where x3 in the first position of Xk means that expert k considers x3 the most important; each index is given a score according to its position in Xk: in Xk, x3 scores N, x5 scores N-1, ..., and xN-1 scores 1;
let μi,k (1 ≤ i ≤ N, 1 ≤ k ≤ L) be the score obtained by index i from expert k, and let
Figure FDA0002325419790000031
be the composite score of index i, 1 ≤ i ≤ N; according to μi, the training indexes are re-ordered from largest to smallest,
Figure FDA0002325419790000032
step 3, comparing the adjacent training indexes to obtain a comparison matrix;
by comparing the importance of the preceding index relative to the following index among adjacent indexes, a judgment interval is given; the endpoints of the interval are the values rk of the relative importance between the two adjacent indexes, each rk value corresponding to a degree of importance;
step 4, converting the interval into a point value through the following formula:
Figure FDA0002325419790000033
where r'ij is the lower bound and r''ij the upper bound in the training matrix given by expert i for index j, j = 1, 2, ..., n-1;
step 5, determining the weight of the training index: because N indexes are adjacently compared, N-1 comparison values are obtained:
where r1 denotes the importance of the first re-ordered index relative to the second, i.e. the ratio of the absolute importance of the first index to that of the second:
Figure FDA0002325419790000034
In the above formulas,
Figure FDA0002325419790000035
Figure FDA0002325419790000036
Figure FDA0002325419790000037
the comprehensive weight of (a) is:
Figure FDA0002325419790000041
the weights of the other indices are:
Figure FDA0002325419790000042
3. the internet-of-things-based medical image examination training method of claim 1, wherein the specific method for obtaining the objective weight by a coefficient of variation method comprises the following steps:
the training index system has m training indexes, n training objects are subjected to system evaluation and data sampling, and an original data training matrix is expressed as a matrix X:
X = (xij)n×m, where xij is the value of the j-th training index sampled for the i-th training object.
4. the internet of things-based medical image examination training method of claim 1, wherein the specific method for obtaining the objective weight by a coefficient of variation method further comprises: calculating the mean value and the standard deviation of each index according to the actual value of each classified object index:
wherein the mean and standard deviation of the jth index are respectively:
x̄j = (1/n)·Σi=1..n xij;
σj = [ (1/n)·Σi=1..n (xij - x̄j)² ]^(1/2);
where j = 1, 2, ..., m.
5. The internet-of-things-based medical image examination training method of claim 1, wherein the objective weight is obtained by a coefficient of variation method, and further comprising calculating the coefficient of variation of each index:
Vj = σj / x̄j,  j = 1, 2, ..., m.
6. the internet of things-based medical image examination training method of claim 1, wherein the objective weighting by a coefficient of variation method further comprises determining the weighting of each index:
firstly, the index variation coefficient is normalized,
νj = Vj / Σk=1..m Vk;
and then the weight set of the indexes V = {ν1, ν2, ..., νM} is obtained, where
Figure FDA0002325419790000052
7. A terminal, characterized in that the terminal is provided with a controller for implementing the medical image examination training method based on the Internet of things according to any one of claims 1 to 6.
8. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the internet of things-based medical image examination training method of any one of claims 1-6.
9. A medical image examination training system based on the Internet of things, characterized in that the medical image examination training system based on the Internet of things specifically comprises:
a teaching module: connected with the main control module and used for presenting the relevant knowledge and teaching videos of medical image examination by means of multimedia technology;
the main control module: connected with the teaching module, the camera module, the simulation training platform, the scoring module, the database and the display module; a single-chip microcomputer is used to control the normal operation of each module;
a simulation training platform: connected with the main control module and used for carrying out simulation training of medical image examination based on computer three-dimensional reconstruction technology;
a camera module: connected with the main control module and used for collecting operation data during the user's simulation training by means of a plurality of cameras;
a scoring module: connected with the main control module and used for comparing the user's simulation training operation collected by the camera module with the teaching videos stored in the database in advance, judging whether the user's operation is wrong, and giving an actual operation score;
a database: connected with the main control module and used for storing the relevant knowledge and teaching videos of medical image examination;
a display module: connected with the main control module and used for outputting the medical image examination knowledge, teaching videos, collected user simulation training images and practice scores on the display.
10. The internet of things-based medical image examination training system of claim 9, wherein the simulation training platform specifically comprises:
the simulation training platform specifically comprises: a medical image examination main control console, actual operation instruments, supporting facilities, simulation training software and a display screen;
the simulation training platform adopts Internet of things technology and computer three-dimensional reconstruction technology to present the main control console, actual operation instruments, related supporting facilities and simulation training software of each medical image examination instrument on the display screen, and transmits the simulation training results to the main control module in real time.
CN201911314349.1A 2019-12-19 2019-12-19 Medical image examination training system and method based on Internet of things Pending CN111028949A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911314349.1A CN111028949A (en) 2019-12-19 2019-12-19 Medical image examination training system and method based on Internet of things

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911314349.1A CN111028949A (en) 2019-12-19 2019-12-19 Medical image examination training system and method based on Internet of things

Publications (1)

Publication Number Publication Date
CN111028949A true CN111028949A (en) 2020-04-17

Family

ID=70210690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911314349.1A Pending CN111028949A (en) 2019-12-19 2019-12-19 Medical image examination training system and method based on Internet of things

Country Status (1)

Country Link
CN (1) CN111028949A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180098814A1 (en) * 2011-03-30 2018-04-12 Surgical Theater LLC Method and system for simulating surgical procedures
CN105046574A (en) * 2015-04-29 2015-11-11 国家电网公司 Black-start scheme evaluation method
CN105788390A (en) * 2016-04-29 2016-07-20 吉林医药学院 Medical anatomy auxiliary teaching system based on augmented reality
CN109035091A (en) * 2018-07-25 2018-12-18 深圳市异度信息产业有限公司 A kind of scoring method, device and equipment for student experimenting
CN109872130A (en) * 2019-02-13 2019-06-11 上海麦予智能科技有限公司 Medical surgery practical training simulated system

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021233086A1 (en) * 2020-05-18 2021-11-25 日本电气株式会社 Information processing method, electronic device, and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200417