CN111553218A - Intelligent medical skill teaching monitoring system based on human body posture recognition - Google Patents

Intelligent medical skill teaching monitoring system based on human body posture recognition

Info

Publication number
CN111553218A
Authority
CN
China
Prior art keywords
video
module
feature vector
result
picture
Prior art date
Legal status
Pending
Application number
CN202010314544.0A
Other languages
Chinese (zh)
Inventor
胡潇允
倪铭徽
Current Assignee
Nanjing University
Nanjing Medical University
Original Assignee
Nanjing Medical University
Priority date
Filing date
Publication date
Application filed by Nanjing Medical University
Priority to CN202010314544.0A
Publication of CN111553218A
Status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 - Administration; Management
    • G06Q10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063 - Operations research, analysis or management
    • G06Q10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393 - Score-carding, benchmarking or key performance indicator [KPI] analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content


Abstract

The invention discloses an intelligent medical skill teaching monitoring system based on human body posture recognition, which consists of a video acquisition module, a video comparison module, a result statistics module, a result pushing module and a background server. The video acquisition module is connected with the video comparison module, the video comparison module with the result statistics module, and the result statistics module with the result pushing module; the background server is respectively connected with all four modules. The video acquisition module comprises a plurality of high-definition cameras installed at the laboratory-bench operation site. The invention makes it convenient for teachers to conduct in-depth, artificial-intelligence-based teaching of medical skills, for students to conduct in-depth autonomous learning of medical skills, and for teachers and students to obtain objective and timely teaching feedback, thereby improving teaching quality and promoting the development of students' medical skills.

Description

Intelligent medical skill teaching monitoring system based on human body posture recognition
Technical Field
The invention relates to the technical field of intelligent teaching, in particular to an intelligent medical skill teaching monitoring system based on human body posture recognition.
Background
As internet and artificial intelligence technologies have matured, applying them to the traditional teaching process to drive intelligent innovation in educational technology has become a research hotspot. The teaching of medical operation skills is an important part of medical teaching, a key link in cultivating specialized medical talent, and the main way for medical students to deepen their understanding and application of theoretical knowledge and to improve their comprehensive abilities. The emphasis of such teaching is on the process of student learning, so process management is particularly important.
The following problems mainly exist in the current teaching of medical operation skills:
(1) Before teaching, there is no assessment of students' baseline operation skills and no guidance or monitoring of the pre-learning phase. (2) During teaching, one teacher must instruct many students at once; it is difficult to monitor and guide every student's learning process, information is insufficient, and the teaching effect is unsatisfactory. (3) After teaching, owing to practical constraints, a student's independent skill practice cannot be effectively supervised or evaluated, so students practice blindly and the learning effect suffers. (4) Teachers judge operation skills subjectively; different teachers score differently, and standards are hard to unify. (5) Teachers and students find it difficult to compare learning outcomes horizontally and vertically, and thus to accurately grasp every student's learning problems. (6) Patients' awareness of their rights is growing, and they are unwilling to let students repeatedly perform medical operations on them.
Disclosure of Invention
The invention aims to provide an intelligent medical skill teaching monitoring system based on human body posture recognition which makes it convenient for teachers to conduct in-depth, artificial-intelligence-based teaching of medical operation skills, for students to conduct in-depth autonomous learning of those skills, and for teachers and students to obtain objective and timely feedback on the teaching effect, so as to precisely improve teaching quality, promote the development of students' medical skills, and solve the problems raised in the background art.
In order to achieve the purpose, the invention provides the following technical scheme: an intelligent medical skill teaching monitoring system based on human body posture recognition is composed of a video acquisition module, a video comparison module, a result statistics module, a result pushing module and a background server, wherein the video acquisition module is connected with the video comparison module, the video comparison module is connected with the result statistics module, the result statistics module is connected with the result pushing module, and the background server is respectively connected with the video acquisition module, the video comparison module, the result statistics module and the result pushing module;
the video acquisition module consists of a plurality of high-definition cameras, the high-definition cameras are installed at the operation site of the experiment table, the number of the high-definition cameras is determined according to the area of the operation site, and the cameras are used for acquiring operation videos of the operation site and transmitting the acquired operation videos to the video comparison module through the HTTP (hypertext transfer protocol) or FTP (file transfer protocol) protocol;
the video comparison module receives an operation video sent by the video acquisition module, and performs video splitting on the operation video and a standard video to obtain an atlas to be compared corresponding to the operation video and an original atlas corresponding to the standard video;
preprocessing the operation video through an optical flow method to obtain a first atlas corresponding to the operation video;
preprocessing the standard video through an optical flow method to obtain a second atlas corresponding to the standard video;
acquiring a picture feature vector corresponding to each target picture in the first atlas through a convolutional neural network to form a first picture feature vector set;
acquiring a picture feature vector corresponding to each target picture in the second atlas through a convolutional neural network to form a second picture feature vector set;
the result counting module acquires the similarity between each picture feature vector in the first picture feature vector set and the corresponding picture feature vector in the second picture feature vector set so as to obtain the average similarity between the operation video and the standard video; grading the average similarity of the operation video and the standard video on the premise of meeting an operation rule; matching in system basic data to obtain name information and class information corresponding to the operation video; recording the grading result, the deduction reason, the name information and the class information operation video into a comparison record of the background management system; feeding back the scoring result, the deduction reason, the name information and the class information operation video to the result pushing module;
the result pushing module receives the scoring result, the name information, the class information and the operation video, superimposes them and records them in an operation record of the background management system, and, after the operation time period ends, pushes the scoring result, the reason for deduction, the name information and the class information to the system administrator, the lecturing teacher and the student;
the background server serves as the hub of the whole system, transferring and distributing data.
Preferably, the system further comprises an information entry module, the information entry module is respectively connected with the result statistics module and the background server, and the information entry module consists of a video quality detection entry unit, a personnel basic data entry unit, a scoring rule entry unit and an operation rule entry unit.
Preferably, the video quality detection entry unit is used for detecting and entering standard videos in real time, the personnel basic data entry unit is used for entering name information, class information, and basic data of lecturing teachers and students, the scoring rule entry unit is used for setting the scores corresponding to average similarities, and the operation rule entry unit is used for setting the operation rules, wherein the operation rules comprise the operation site area covered by the high-definition camera, a preset maximum operation duration, and the matching of the face video shot in real time by the high-definition camera against a preset face video.
Preferably, the preprocessing the operation video by an optical flow method to obtain a first atlas corresponding to the operation video includes: acquiring speed vector characteristics corresponding to each pixel point of each frame of picture in the operation video; and if the speed vector characteristics of at least one frame of picture in the operation video do not keep continuously changing, forming a first atlas in the operation video by the corresponding picture.
Preferably, the obtaining, by a convolutional neural network, a target picture feature vector corresponding to each target picture in the first atlas to form a first picture feature vector set includes: preprocessing each target picture in the first atlas to obtain a preprocessed picture corresponding to each target picture and a picture pixel matrix corresponding to each preprocessed picture, wherein preprocessing a target picture means sequentially performing graying, edge detection and binarization on it; inputting the picture pixel matrix corresponding to each preprocessed picture into an input layer in the convolutional neural network model to obtain a feature map corresponding to each preprocessed picture; inputting each feature map into a pooling layer in the convolutional neural network model to obtain a one-dimensional vector corresponding to each feature map; and inputting the one-dimensional vectors corresponding to the feature maps into a fully-connected layer in the convolutional neural network model to obtain the target picture feature vectors corresponding to the feature maps, so as to form the first picture feature vector set.
Preferably, the obtaining the similarity between each picture feature vector in the first picture feature vector set and the corresponding picture feature vector in the second picture feature vector set includes: and after Euclidean distances between the image feature vectors in the first image feature vector set and the corresponding image feature vectors in the second image feature vector set are obtained, an average Euclidean distance value is obtained, and the average Euclidean distance value is used as the similarity between the first image feature vector set and the second image feature vector set.
Compared with the prior art, the invention has the following beneficial effects: by comparing the captured operation video with a pre-recorded standard operation video, the obtained similarity is linked directly to the score, realizing automatic scoring together with the reason for any deduction.
Drawings
FIG. 1 is a functional block diagram of the system components of the present invention;
FIG. 2 is a functional block diagram of the first atlas of the present invention;
FIG. 3 is a functional block diagram of the second atlas of the present invention.
In the figure: 100-a video acquisition module, 101-a high-definition camera, 200-a video comparison module, 201-a first atlas, 2011-a first picture feature vector set, 202-a second atlas, 2021-a second picture feature vector set, 300-a result statistics module, 400-a result pushing module, 500-a background server, 600-an information entry module, 601-a video quality detection entry unit, 602-a personnel basic data entry unit, 603-a scoring rule entry unit and 604-an operation rule entry unit.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1 to 3, the present invention provides a technical solution: an intelligent medical skill teaching monitoring system based on human body posture recognition is composed of a video acquisition module 100, a video comparison module 200, a result statistics module 300, a result pushing module 400 and a background server 500, wherein the video acquisition module 100 is connected with the video comparison module 200, the video comparison module 200 is connected with the result statistics module 300, the result statistics module 300 is connected with the result pushing module 400, and the background server 500 is respectively connected with the video acquisition module 100, the video comparison module 200, the result statistics module 300 and the result pushing module 400;
The video acquisition module 100 is composed of a plurality of high-definition cameras 101, the high-definition cameras 101 are installed at the laboratory-bench operation site, the number of the high-definition cameras 101 is determined according to the area of the operation site, and the cameras are used for acquiring operation videos of the laboratory-bench operation site and transmitting the acquired operation videos to the video comparison module 200 through the HTTP (hypertext transfer protocol) or FTP (file transfer protocol) protocol;
the video comparison module 200 receives the operation video sent by the video acquisition module 100, and performs video splitting on both the operation video and the standard video to obtain an atlas to be compared corresponding to the operation video and an original atlas corresponding to the standard video;
Since a video is, in essence, a certain number of pictures played per unit time (for example, 24 to 30 consecutive pictures per second), the video to be compared and the original video can first be split in order to compare their degree of similarity, yielding a set of pictures to be compared corresponding to the video to be compared and a set of original pictures corresponding to the original video. Both videos are split with common video splitting tools, each second of video being split into 24 frames of pictures.
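By way of illustration, this splitting step can be realized with a common tool such as OpenCV. The following is a minimal sketch, assuming OpenCV is available in Python; the file names and the 24-fps target are illustrative choices, not values fixed by the embodiment.

```python
# Minimal frame-splitting sketch; file names and 24-fps target are illustrative.
import cv2

def split_video(path, target_fps=24):
    """Split a video into a list of frames, sampled at roughly target_fps."""
    cap = cv2.VideoCapture(path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(1, round(native_fps / target_fps))  # keep every `step`-th frame
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames

operation_frames = split_video("operation.mp4")  # video to be compared
standard_frames = split_video("standard.mp4")    # original (standard) video
```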
Preprocessing the operation video by an optical flow method to obtain a first atlas 201 corresponding to the operation video;
further, the method for preprocessing the operation video by the optical flow method to obtain the first atlas 201 corresponding to the operation video includes the following steps:
1. acquiring a speed vector characteristic corresponding to each pixel point of each frame of picture in an operation video;
2. if the speed vector characteristics of at least one frame of picture in the operation video do not keep continuously changing, the corresponding pictures are combined into a first atlas 201 in the operation video.
When the human eye observes a moving object, the object forms a series of continuously changing images on the retina, and this continuously changing information "flows" across the retina (that is, the image plane) like a stream of light, hence the term optical flow. Optical flow expresses changes in the image and contains information about the motion of the object, which can be used to determine that motion. Optical flow has three elements: first, a motion velocity field, which is a necessary condition for optical flow to form; second, parts with optical characteristics, such as gray-scale pixel points, which can carry motion information; and third, the imaging projection from the scene onto the image plane, which makes the flow observable.
The definition of optical flow is based on points. Specifically, assuming that (u, v) is the optical flow at image point (x, y), (x, y, u, v) is referred to as an optical flow point, and the collection of all optical flow points is called the optical flow field. When an object with optical properties moves in three-dimensional space, a corresponding image motion field, or image velocity field, is formed at the image plane. In the ideal case, the optical flow field corresponds to the motion field.
Each pixel in the image is assigned a velocity vector, forming a motion vector field, and the image can be analyzed dynamically according to the velocity vector characteristics of each pixel point. If there is no moving object in the image, the optical flow vector varies continuously over the entire image area. When a moving object is present, the target and the background move relative to each other; the velocity vectors formed by the moving object differ from those of the background, so the position of the moving object can be calculated. Preprocessing by the optical flow method in this way yields the first atlas 201 corresponding to the video to be compared, as sketched below.
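A hedged sketch of this preprocessing follows. The dense per-pixel velocity vectors are computed here with Farneback optical flow, an assumed choice since the embodiment does not name a specific optical-flow algorithm, and a frame enters the first atlas when its velocity field jumps relative to the previous one, i.e. when the motion does not keep continuously changing; the threshold value is illustrative.

```python
# Sketch only: Farneback flow and the 2.0 threshold are assumptions,
# not choices fixed by the embodiment.
import cv2
import numpy as np

def select_keyframes(frames, threshold=2.0):
    """Keep frames whose per-pixel velocity field changes discontinuously."""
    keyframes, prev_flow = [], None
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense velocity vector (u, v) for every pixel of this frame pair.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        if prev_flow is not None:
            # Mean change of the velocity field between consecutive frames.
            jump = np.mean(np.abs(flow - prev_flow))
            if jump > threshold:  # motion not continuously changing: keyframe
                keyframes.append(frame)
        prev_gray, prev_flow = gray, flow
    return keyframes

# `operation_frames` comes from the splitting sketch above.
first_atlas = select_keyframes(operation_frames)
```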
Preprocessing the standard video by an optical flow method to obtain a second atlas 202 corresponding to the standard video;
Further, preprocessing the standard video by the optical flow method follows the same procedure as preprocessing the operation video; preprocessing by the optical flow method yields the second atlas 202 corresponding to the standard video, and the procedure is not described again. The first total number of pictures in the first atlas 201 does not exceed the second total number of pictures in the second atlas 202; generally, the two totals are equal.
Acquiring a target picture feature vector corresponding to each target picture in the first atlas 201 through a convolutional neural network to form a first picture feature vector set 2011;
Further, obtaining the target picture feature vector corresponding to each target picture in the first atlas 201 through the convolutional neural network to form the first picture feature vector set 2011 includes the following steps:
1. Preprocessing each target picture in the first atlas 201 to obtain a preprocessed picture corresponding to each target picture and a picture pixel matrix corresponding to each preprocessed picture;
2. Preprocessing a target picture means sequentially performing graying, edge detection and binarization on it;
3. Inputting the picture pixel matrix corresponding to each preprocessed picture into the input layer in the convolutional neural network model to obtain a feature map corresponding to each preprocessed picture;
4. Inputting each feature map into the pooling layer in the convolutional neural network model to obtain a one-dimensional vector corresponding to each feature map;
5. Inputting the one-dimensional vectors corresponding to the feature maps into the fully-connected layer in the convolutional neural network model to obtain the target picture feature vectors corresponding to the feature maps, so as to form the first picture feature vector set 2011; a sketch of steps 1-5 follows below.
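The following minimal sketch walks through steps 1-5, assuming OpenCV for the preprocessing and PyTorch for the network; the layer sizes, the choice of Canny as the edge detector, the 64x64 input resolution and the 128-dimensional output are all illustrative assumptions, since the embodiment does not fix a concrete architecture.

```python
# Sketch of steps 1-5; architecture and parameters are illustrative.
import cv2
import torch
import torch.nn as nn

def preprocess(img, size=64):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)                   # step 2: graying
    edges = cv2.Canny(gray, 100, 200)                              # step 2: edge detection
    _, binary = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY)  # step 2: binarization
    binary = cv2.resize(binary, (size, size))
    return torch.from_numpy(binary / 255.0).float().unsqueeze(0)   # step 1: pixel matrix

class FeatureNet(nn.Module):
    def __init__(self, feature_dim=128):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)  # step 3: input layer -> feature maps
        self.pool = nn.MaxPool2d(2)                            # step 4: pooling layer
        self.fc = nn.Linear(8 * 32 * 32, feature_dim)          # step 5: fully-connected layer

    def forward(self, x):
        x = self.pool(torch.relu(self.conv(x)))
        x = x.flatten(start_dim=1)   # step 4: one-dimensional vector per picture
        return self.fc(x)            # step 5: target picture feature vector

# `first_atlas` comes from the optical-flow sketch above.
net = FeatureNet().eval()
with torch.no_grad():
    first_vectors = torch.stack(
        [net(preprocess(f).unsqueeze(0)).squeeze(0) for f in first_atlas])
```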
Acquiring a target map feature vector corresponding to each target map in the second map set 202 through a convolutional neural network to form a second map feature vector set 2021;
Further, the process of obtaining, through the convolutional neural network, the target picture feature vector corresponding to each target picture in the second atlas 202 is the same as that used for the first atlas 201. Each target picture in the second atlas obtains a corresponding target picture feature vector, thereby forming the second picture feature vector set 2021.
The result statistics module 300 obtains the similarity between each picture feature vector in the first picture feature vector set 2011 and the corresponding picture feature vector in the second picture feature vector set 2021 to obtain the average similarity between the operation video and the standard video;
Further, obtaining the similarity between each picture feature vector in the first picture feature vector set 2011 and the corresponding picture feature vector in the second picture feature vector set 2021 proceeds as follows: the Euclidean distance between each pair of corresponding picture feature vectors is obtained, and the average Euclidean distance value is taken as the similarity between the first picture feature vector set 2011 and the second picture feature vector set 2021, giving the average similarity between the video to be compared and the original video. In this embodiment, since the operation video imitates the standard video, the large body motions are generally similar; to judge the similarity of the two videos through the motions more finely, the first picture feature vector set 2011 and the second picture feature vector set 2021 are obtained separately, and the similarity between each picture feature vector in the first set and the corresponding picture feature vector in the second set is then averaged. For example, suppose the first picture feature vector set 2011 contains 10 picture feature vectors, denoted a1-a10, and the second picture feature vector set 2021 also contains 10, denoted b1-b10. The Euclidean distance between a1 and b1 is taken as the first similarity, the distance between a2 and b2 as the second similarity, and so on up to the tenth similarity between a10 and b10; the average of the first through tenth similarities is then taken as the average similarity between the video to be compared and the original video.
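The comparison itself reduces to a few array operations. The following is a minimal sketch, assuming NumPy, in which `first_vectors` and `second_vectors` stand for the per-frame feature vectors produced for the two videos (for example, by the network sketched earlier).

```python
import numpy as np

def average_similarity(first_vectors, second_vectors):
    """Average Euclidean distance between paired frame feature vectors."""
    a = np.asarray(first_vectors, dtype=float)   # a1..a10 from the operation video
    b = np.asarray(second_vectors, dtype=float)  # b1..b10 from the standard video
    n = min(len(a), len(b))                      # compare only paired frames
    distances = np.linalg.norm(a[:n] - b[:n], axis=1)  # per-pair Euclidean distance
    return distances.mean()  # the average is used as the similarity measure
```

Note that under this measure a smaller average distance indicates a closer match between the two videos.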
The result statistics module 300 scores the operation video according to its average similarity with the standard video, provided the operation rules are met; matches against the system basic data to obtain the name information and class information corresponding to the operation video; records the scoring result, the reason for deduction, the name information, the class information and the operation video in a comparison record of the background management system; and feeds the scoring result, the reason for deduction, the name information, the class information and the operation video back to the result pushing module 400;
the result pushing module 400 receives the scoring result, the name information, the class information and the operation video, records the scoring result, the name information, the class information and the operation video in the operation record of the background management system after superposition, and pushes the scoring result, the mark deduction reason, the name information and the class information to a system administrator, an arbitrary teacher and a student, wherein the system administrator can search the record in the system at any time, the arbitrary teacher serves as a first learner and needs to record the scoring result, and the student serves as a second learner and can promote the search and repair of missing;
the background server 500: the central hub plays a role in transferring and distributing data and is the central hub of the whole system.
The system further comprises an information entry module 600, the information entry module 600 is respectively connected with the result statistics module 300 and the background server 500, and the information entry module 600 is composed of a video quality detection entry unit 601, a personnel basic data entry unit 602, a scoring rule entry unit 603 and an operation rule entry unit 604.
The video quality detection entry unit 601 is used for detecting and entering a standard video in real time, wherein the standard video meets the requirement of completing all actions within a preset time length.
The personnel basic data entry unit 602 is used for entering name information, class information, and basic data of lecturing teachers and students, so that the system can promptly send examinees' scoring results to system administrators, lecturing teachers and parents through the internet of things.
The scoring rule entry unit 603 is used to set the score corresponding to the average similarity: for example, similarities of 0.6, 0.7, 0.8 and 0.9 may correspond to scores of 60, 70, 80 and 90. In practice the similarity may be accurate to one part in a hundred, one part in a thousand, and so on, and the grading scale may likewise be a percentile system, a per-mille system, and so on. The unit also handles entry of deduction rules: for example, if the first picture feature vector set 2011 contains 9 picture feature vectors, denoted a1-a9, while the second picture feature vector set 2021 contains 10, denoted b1-b10, it is known from these feature vectors that the student has skipped a step; after the step deduction is applied (the deduction standard for each step can also be set through the scoring rule entry unit 603), the corresponding score is obtained by multiplying by the similarity;
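By way of illustration only, such a rule might be realized as below; the percentile mapping and the per-step penalty are example values that an administrator would enter through the scoring rule entry unit 603, not figures fixed by the embodiment.

```python
# Illustrative scoring rule: similarity-to-score mapping plus a
# missing-step deduction; penalty and scale are example values.
def score_operation(similarity, expected_steps, performed_steps, step_penalty=5):
    base = round(similarity * 100)  # e.g. 0.8 -> 80 on a percentile scale
    missing = max(0, expected_steps - performed_steps)
    return max(0, base - missing * step_penalty), missing

# Example: similarity 0.8 with one of ten steps missing -> score 75, 1 step deducted.
final_score, missing_steps = score_operation(0.8, expected_steps=10, performed_steps=9)
```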
the operation rule recording unit 604 is configured to set operation rules, where the operation rules include an operation field area recorded by the high definition camera 101, and if the examiner is beyond a recording range of the high definition camera 101, the score is invalid. The method also comprises the step of presetting the maximum operation time, wherein the practical training time cannot exceed the maximum operation time, and if the practical training time exceeds the maximum operation time, the scoring is invalid.
The operation rules further include matching the face video shot in real time against a preset face video: the trainee sends a face video to the system in advance through the internet of things, and during practical training the face video shot in real time by the high-definition camera is compared with the preset face video using the same similarity algorithm. The similarity is used to judge the degree of facial match: a match of not less than 0.8 is treated as the same person, which prevents one person from standing in for another. Of course, values other than 0.8 can be set through the scoring rule entry unit 603.
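A sketch of this identity check is given below; `average_similarity` is the comparison function sketched earlier, applied here to face feature vectors, and the mapping of the raw distance into a similarity in [0, 1] is an assumption, since the embodiment does not specify one.

```python
def same_person(live_face_vectors, preset_face_vectors, threshold=0.8):
    """Treat the live and preset face videos as the same person above threshold."""
    # Assumed mapping: distance 0 -> similarity 1, growing distance -> 0.
    distance = average_similarity(live_face_vectors, preset_face_vectors)
    return 1.0 / (1.0 + distance) >= threshold
```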
Compared with the prior art, in which a training teacher observes and grades each student one-to-one throughout the process, the automatic scoring system of the invention improves classroom efficiency, makes it easier for teachers and students to pursue deep learning of operation skills, and improves both teaching quality and students' practical operation ability.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. An intelligent medical skill teaching monitoring system based on human body posture recognition, characterized in that: the system comprises a video acquisition module (100), a video comparison module (200), a result statistics module (300), a result pushing module (400) and a background server (500), wherein the video acquisition module (100) is connected with the video comparison module (200), the video comparison module (200) is connected with the result statistics module (300), the result statistics module (300) is connected with the result pushing module (400), and the background server (500) is respectively connected with the video acquisition module (100), the video comparison module (200), the result statistics module (300) and the result pushing module (400);
the video acquisition module (100) is composed of a plurality of high-definition cameras (101), the high-definition cameras (101) are installed at the operation site of the experiment table, the number of the high-definition cameras (101) is determined according to the area of the operation site, and the cameras are used for acquiring operation videos of the operation site and transmitting the acquired operation videos to the video comparison module (200) through the HTTP (hypertext transfer protocol) or FTP (file transfer protocol) protocol;
the video comparison module (200) receives an operation video sent by the video acquisition module (100), and performs video splitting on the operation video and a standard video to obtain an atlas to be compared corresponding to the operation video and an original atlas corresponding to the standard video;
preprocessing the operation video through an optical flow method to obtain a first atlas (201) corresponding to the operation video;
preprocessing the standard video through an optical flow method to obtain a second atlas (202) corresponding to the standard video;
acquiring a target picture feature vector corresponding to each target picture in the first atlas (201) through a convolutional neural network to form a first picture feature vector set (2011);
acquiring a target picture feature vector corresponding to each target picture in the second atlas (202) through a convolutional neural network to form a second picture feature vector set (2021);
the result statistics module (300) obtains the similarity between each picture feature vector in the first picture feature vector set (2011) and the corresponding picture feature vector in the second picture feature vector set (2021), so as to obtain the average similarity between the operation video and the standard video; scores the operation video according to its average similarity with the standard video, provided the operation rules are met; matches against the system basic data to obtain the name information and class information corresponding to the operation video; records the scoring result, the reason for deduction, the name information, the class information and the operation video in a comparison record of the background management system; and feeds the scoring result, the reason for deduction, the name information, the class information and the operation video back to the result pushing module (400);
the result pushing module (400) receives the scoring result, the name information, the class information and the operation video, superimposes them and records them in an operation record of a background management system, and, after the operation time period ends, pushes the scoring result, the reason for deduction, the name information and the class information to a system administrator, the lecturing teacher and the student;
the background server (500) serves as the hub of the whole system, transferring and distributing data.
2. The intelligent medical skill teaching monitoring system based on human body posture recognition according to claim 1, characterized in that: the system further comprises an information entry module (600), the information entry module (600) is respectively connected with the result statistics module (300) and the background server (500), and the information entry module (600) is composed of a video quality detection entry unit (601), a personnel basic data entry unit (602), a scoring rule entry unit (603) and an operation rule entry unit (604).
3. The intelligent medical skill teaching monitoring system based on human body posture recognition according to claim 2, characterized in that: the video quality detection entry unit (601) is used for detecting and entering standard videos in real time, the personnel basic data entry unit (602) is used for entering name information, class information, and basic data of lecturing teachers and students, the scoring rule entry unit (603) is used for setting the scores corresponding to average similarities, and the operation rule entry unit (604) is used for setting the operation rules, wherein the operation rules comprise the operation site area recorded by the high-definition camera (101), a preset maximum operation duration, and the matching of the face video recorded in real time by the high-definition camera against a preset face video.
4. The intelligent medical skill teaching monitoring system based on human body posture recognition according to claim 1, characterized in that: the pre-processing the operation video through an optical flow method to obtain a first atlas (201) corresponding to the operation video, comprising: acquiring speed vector characteristics corresponding to each pixel point of each frame of picture in the operation video; and if the speed vector characteristics of at least one frame of picture in the operation video do not keep continuously changing, forming a first atlas (201) in the operation video by corresponding pictures.
5. The intelligent medical skill teaching monitoring system based on human body posture recognition according to claim 1, characterized in that: the obtaining, by a convolutional neural network, a target picture feature vector corresponding to each target picture in the first atlas (201) to form a first picture feature vector set (2011) includes: preprocessing each target picture in the first atlas (201) to obtain a preprocessed picture corresponding to each target picture and a picture pixel matrix corresponding to each preprocessed picture, wherein preprocessing a target picture means sequentially performing graying, edge detection and binarization on it; inputting the picture pixel matrix corresponding to each preprocessed picture into an input layer in the convolutional neural network model to obtain a feature map corresponding to each preprocessed picture; inputting each feature map into a pooling layer in the convolutional neural network model to obtain a one-dimensional vector corresponding to each feature map; and inputting the one-dimensional vectors corresponding to the feature maps into a fully-connected layer in the convolutional neural network model to obtain the target picture feature vectors corresponding to the feature maps, so as to form the first picture feature vector set (2011).
6. The intelligent medical skill teaching monitoring system based on human body posture recognition according to claim 1, characterized in that: the obtaining of the similarity between each picture feature vector in the first picture feature vector set (2011) and the corresponding picture feature vector in the second picture feature vector set (2021) includes: and after Euclidean distances between the image feature vectors in the first image feature vector set (2011) and the corresponding image feature vectors in the second image feature vector set (2021) are obtained, an average Euclidean distance value is obtained, and the average Euclidean distance value is used as the similarity between the first image feature vector set (2011) and the second image feature vector set (2021).
CN202010314544.0A 2020-04-20 2020-04-20 Intelligent medical skill teaching monitoring system based on human body posture recognition Pending CN111553218A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010314544.0A CN111553218A (en) 2020-04-20 2020-04-20 Intelligent medical skill teaching monitoring system based on human body posture recognition


Publications (1)

Publication Number Publication Date
CN111553218A 2020-08-18

Family

ID=72000294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010314544.0A Pending CN111553218A (en) 2020-04-20 2020-04-20 Intelligent medical skill teaching monitoring system based on human body posture recognition

Country Status (1)

Country Link
CN (1) CN111553218A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909060A (en) * 2017-12-05 2018-04-13 前海健匠智能科技(深圳)有限公司 Gymnasium body-building action identification method and device based on deep learning
CN108154304A (en) * 2017-12-26 2018-06-12 重庆大争科技有限公司 There is the server of Teaching Quality Assessment
CN108985443A (en) * 2018-07-04 2018-12-11 北京旷视科技有限公司 Action identification method and its neural network generation method, device and electronic equipment
CN110674837A (en) * 2019-08-15 2020-01-10 深圳壹账通智能科技有限公司 Video similarity obtaining method and device, computer equipment and storage medium
CN110443226A (en) * 2019-08-16 2019-11-12 重庆大学 A kind of student's method for evaluating state and system based on gesture recognition

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113159725A (en) * 2021-04-29 2021-07-23 山东数字人科技股份有限公司 Anatomy teaching supervision method, apparatus, system, equipment and storage medium
CN114887271A (en) * 2022-05-12 2022-08-12 福州大学 Fire escape skill teaching monitoring system based on human body posture recognition
CN116757524A (en) * 2023-05-08 2023-09-15 广东保伦电子股份有限公司 Teacher teaching quality evaluation method and device
CN116757524B (en) * 2023-05-08 2024-02-06 广东保伦电子股份有限公司 Teacher teaching quality evaluation method and device
CN117575862A (en) * 2023-12-11 2024-02-20 广州番禺职业技术学院 Knowledge graph-based student personalized practical training guiding method and system
CN117575862B (en) * 2023-12-11 2024-05-24 广州番禺职业技术学院 Knowledge graph-based student personalized practical training guiding method and system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200818)