CN112151155A - Ultrasonic image intelligent training method and system based on artificial intelligence and application system - Google Patents


Info

Publication number
CN112151155A
Authority
CN
China
Prior art keywords: training, data, image, ultrasonic, module
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011004430.2A
Other languages
Chinese (zh)
Inventor
龚任
谢晓燕
张晓儿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Shishang Medical Technology Co.,Ltd.
Original Assignee
Kestrel Intelligent Technology Shanghai Co ltd
Application filed by Kestrel Intelligent Technology Shanghai Co ltd
Priority to CN202011004430.2A
Publication of CN112151155A
Current legal status: Pending

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 40/00: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H 40/20: ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the management or administration of healthcare resources or facilities, e.g. managing hospital staff or surgery rooms
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00: Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/10: Services
    • G06Q 50/20: Education
    • G06Q 50/205: Education administration or guidance
    • G06Q 50/2057: Career enhancement or continuing education service
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 5/00: Electrically-operated educational appliances
    • G09B 5/08: Electrically-operated educational appliances providing for individual presentation of information to a plurality of student stations
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 7/00: Electrically-operated teaching apparatus or devices working with questions and answers

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Primary Health Care (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Computing Systems (AREA)
  • Human Resources & Organizations (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Epidemiology (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention discloses an artificial-intelligence-based intelligent training method for ultrasound images, together with a training system and an application system. The training system consists of a trainee training subsystem, an instructor management subsystem, an artificial intelligence subsystem and a system administrator subsystem. It provides trainees with a flexible learning environment that is not restricted by location or time, enables self-directed ultrasound training and self-assessment under the assistance and supervision of the artificial intelligence system, greatly reduces the training workload of ultrasound departments, substantially improves training efficiency, and safeguards training quality in a more scientific way. The training system can be deployed in several modes, including a networked mode, a stand-alone mode, and an integrated or embedded mode.

Description

Ultrasonic image intelligent training method and system based on artificial intelligence and application system
Technical Field
The invention relates to the technical field of medical image processing and intelligent training, in particular to an ultrasonic image intelligent training method and system based on artificial intelligence and an application system.
Background
At present, ultrasound imaging has become the most common non-invasive examination in clinical practice because it is non-invasive, radiation-free, acquires images in real time and has no known side effects. However, for a variety of reasons, ultrasound medical resources are in severe shortage; in particular, sonographers with professional qualifications and clinical experience are scarce, and diagnostic skill varies widely across medical institutions of different levels. Because a sonographer must examine a patient and reach a reasonably accurate ultrasound diagnosis within a short time, the work places high demands on medical knowledge, clinical experience and standardized procedures such as scanning technique. Given today's complex medical environment and the increase in medical disputes, providing more preclinical training in basic operating skills to junior resident physicians and interns, as well as further training to medical professionals in subordinate institutions, has become a key task of ultrasound skill training.
Because ultrasound skill training requires trainees to operate the equipment themselves and to practice repeatedly, and because instructor guidance and supervision are needed to achieve good results, existing training usually takes the form of periodic, centralized sessions at fixed sites. Since trainees lack opportunities to practice after such sessions and lack instructor supervision, the training effect is unsatisfactory.
Therefore, how to provide an efficient, fully functional intelligent training system for ultrasound images is a problem that urgently needs to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the invention provides an artificial-intelligence-based intelligent training method for ultrasound images, a training system and an application system. With the method, trainees can carry out various types of ultrasound image training and learning, such as basic knowledge learning, workflow training, question bank training and hands-on practice. The learning process is more flexible and free, which solves the problems of conventional ultrasound skill training in which trainees lack practice opportunities, lack instructor supervision and obtain unsatisfactory training results.
To achieve the above object, the invention adopts the following technical solutions.
In a first aspect, the invention provides an artificial-intelligence-based intelligent training method for ultrasound images, comprising the following steps:
assisting the instructor in collecting ultrasound video images, reading and editing them, and creating training data for trainee training;
performing artificial-intelligence-assisted analysis on the ultrasound video images in the training data to generate machine answers;
assisting the instructor in verifying the machine answers and in labeling and storing the erroneous data among them;
assisting trainees in basic knowledge learning, workflow training, image-reading practice and assessment based on the training data, the machine answers and the instructor's annotation data, and assisting trainees in hands-on ultrasound scanning practice; and
assisting the instructor in reviewing trainees' learning progress and in compiling statistics and analyses of the trainees' training and assessment data.
Further, performing artificial-intelligence-assisted analysis on the ultrasound video images in the training data to generate machine answers specifically comprises:
desensitizing and preprocessing real image data from clinical ultrasound examinations;
performing multi-parameter feature labeling and target delineation on the desensitized and preprocessed real image data to generate labeled images;
performing data augmentation on the labeled images to obtain sample data;
performing deep learning and training on the sample data to obtain an initial algorithm model for ultrasound image assisted analysis;
retraining and optimizing the initial algorithm model with real image data from clinical ultrasound examinations to obtain an ultrasound image assisted analysis neural network model;
preprocessing a target image, inputting it into the ultrasound image assisted analysis neural network model for deep network computation, and outputting feature classification information of the target object or suspected lesion; and
resolving an auxiliary diagnosis result for the target object or suspected lesion from the feature classification information to obtain a machine answer.
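The pipeline above (desensitize and preprocess, annotate, augment, train, fine-tune on clinical data, then infer and resolve a machine answer) can be summarized in a minimal, illustrative sketch. The Python/PyTorch code below is not the patented implementation: the network architecture, the class name UltrasoundFrameDataset and the two-class label set are assumptions made only for illustration.

```python
# Minimal, illustrative sketch of the machine-answer pipeline described above.
# The dataset class, network depth and label names are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

LABELS = ["normal", "suspected_lesion"]          # assumed feature classes

class UltrasoundFrameDataset(Dataset):
    """Desensitized, preprocessed and annotated ultrasound frames."""
    def __init__(self, frames, labels):
        self.frames, self.labels = frames, labels  # tensors prepared offline
    def __len__(self):
        return len(self.frames)
    def __getitem__(self, i):
        return self.frames[i], self.labels[i]

def build_model(num_classes=len(LABELS)) -> nn.Module:
    # Small CNN standing in for the "initial algorithm model".
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, num_classes),
    )

def train(model, dataset, epochs=5, lr=1e-3):
    loader = DataLoader(dataset, batch_size=16, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

def machine_answer(model, frame):
    """Classify one preprocessed frame (1xHxW tensor) and resolve the result
    into a simple auxiliary-diagnosis record, i.e. a 'machine answer'."""
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(frame.unsqueeze(0)), dim=1)[0]
    idx = int(probs.argmax())
    return {"class": LABELS[idx], "confidence": float(probs[idx])}
```

The retraining step on real clinical data would simply reuse train() on the clinically collected dataset, typically with a smaller learning rate.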
In a second aspect, the present invention further provides an ultrasound image intelligent training system based on artificial intelligence, the system comprising:
the trainee training subsystem is used for providing trainees with application, training, assessment and training record retrieval functions;
the instructor management subsystem is used for providing instructors with question bank management, trainee account management, a data acquisition and labeling tool, and an intelligent dynamic video acquisition and analysis tool;
the artificial intelligence subsystem is used for deploying various artificial intelligence neural networks and performing intelligent computation; and
the system administrator subsystem is used for coordinating, controlling and managing the entire training system.
Further, the trainee training subsystem comprises:
the training/assessment application module is used for managing trainees' schedules and the approval workflow for planning, applying for or confirming training and assessment;
the trainee training and assessment module is used by trainees for theoretical knowledge learning, training and assessment, as well as image-reading practice at each training stage; and
the training and assessment record query module is used by trainees to query training and assessment records and to retrieve training-related data.
Further, the instructor management subsystem comprises:
the question bank management module is used by the instructor to create, edit and manage the specific training data and assessment data under each training subject;
the trainee account management module is used by the instructor to create, edit, view, reset passwords for, statistically analyze and evaluate the training quality of trainee accounts;
the data acquisition and labeling tool module is used by the instructor to label and delineate the image data in the training data and assessment data of the question bank management module; the labeled data is also used to optimize the artificial intelligence algorithms and neural networks in the artificial intelligence subsystem; and
the intelligent dynamic video acquisition and analysis tool module is used to assist the instructor in reading ultrasound videos, dynamically recognizing target objects, capturing videos and images, and performing intelligent diagnosis-assisting analysis.
Further, the artificial intelligence subsystem comprises:
the ultrasound image dynamic recognition and prompting module is used to analyze and learn from ultrasound video and image data by artificial-intelligence deep learning, build an algorithm model and neural network, deploy the neural network on an artificial intelligence engine to dynamically process and recognize input ultrasound image data, mark the position and area of a suspected lesion, and output labeled images;
the recognition prompt tone control module is used to sound an alert when a target object is found and to dynamically mark the recognized image frames during video playback;
the intelligent dynamic image acquisition and recognition module is used to mark every recognized dynamic image frame and record video automatically: recording starts a number of frames (for example, 50 frames) before the first recognized frame and stops when no recognized frame is found within a number of frames (for example, 50 frames) after the most recent recognized frame, and this process repeats (see the sketch after this list);
the ultrasound image intelligent assisted analysis module uses artificial-intelligence deep learning to build an image recognition algorithm model and neural network from large volumes of ultrasound images of confirmed lesions and normal tissue of a specific body part, deploys the neural network on an artificial intelligence engine, processes and recognizes input ultrasound images, and finally produces a recognition conclusion as an auxiliary diagnostic reference; and
the ultrasound dynamic image intelligent assisted analysis module combines the ultrasound image dynamic recognition and prompting module with the ultrasound image intelligent assisted analysis module to perform dynamic image recognition, frame marking, classification and naming, and weighted multi-frame intelligent assisted analysis on the recognized target image data.
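As referenced above, the auto-recording rule of the intelligent dynamic image acquisition and recognition module can be sketched as a simple state machine over per-frame detection flags. The 50-frame pre/post values follow the example given in the text; the function name and frame representation are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch of the auto-recording rule: keep a rolling buffer of
# PRE frames, open a clip when a frame is recognized, and close the clip
# once POST consecutive frames pass without any recognition.
from collections import deque
from typing import Iterable, List, Tuple

PRE = 50   # frames kept before the first recognized frame (example value)
POST = 50  # frames without recognition that end a clip (example value)

def auto_record(frames: Iterable[Tuple[object, bool]]) -> List[List[object]]:
    """frames yields (frame, recognized) pairs; returns a list of clips."""
    clips, current = [], None
    history = deque(maxlen=PRE)      # rolling pre-buffer of recent frames
    gap = 0                          # frames since the last recognition
    for frame, recognized in frames:
        if current is None:
            if recognized:
                current = list(history) + [frame]   # include the pre-buffer
                gap = 0
            else:
                history.append(frame)
        else:
            current.append(frame)
            gap = 0 if recognized else gap + 1
            if gap >= POST:          # no recognition for POST frames: close clip
                clips.append(current)
                current, gap = None, 0
                history.clear()
    if current:
        clips.append(current)        # flush a clip still open at end of stream
    return clips
```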
further, the system administrator subsystem includes:
the system maintenance module is used by the system administrator to maintain and control the entire training system and every client, mainly covering system and interface configuration and maintenance, client configuration, and system data storage and backup;
the account management module is used by the system administrator to create accounts, set permissions, edit, view, statistically analyze and reset passwords for all accounts in the system (including instructor accounts and trainee accounts);
the training requirement module is used to set and evaluate indicators such as the work tasks, plans and training quality of training instructors;
the question bank content review module is used to review the training content submitted by instructors; and
the database management module is used to manage all relevant data in the training system, including account information (user data, user training and usage history), the training and assessment content of each subject, and the collected and labeled data.
The training system built from the above subsystems is implemented as clients plus a management server, and can be deployed in several modes, including a networked mode, a stand-alone mode, and an integrated or embedded mode.
In a third aspect, an ultrasound image intelligent training application system based on artificial intelligence comprises: at least two clients and at least one management server;
the at least two clients are a student client and a teacher client respectively;
the management server is used to provide information connectivity and data support, so that the student client can implement training subject selection, basic knowledge learning, workflow training, question bank training and hands-on practice, and the teacher client can implement question bank management, review of students' learning status, and related functions.
In the invention, each client and the management server are equipped with GPU/CPU hardware and a software system, i.e. an artificial intelligence computation engine capable of artificial-intelligence edge computing and network deployment; the software system is typically an embedded system or a Windows system.
The whole training process is divided into three stages, namely stage one (primary) training, stage two (intermediate) training and stage three (advanced) training, with the training content differing according to the stage.
Further, the management server includes:
the acquisition module is used to receive training-related requests initiated by the student client or the teacher client, including training subject selection, basic knowledge learning, workflow training, question bank training, hands-on practice, question bank management, and review of students' learning status;
the database is used to store the image training data corresponding to each training subject;
the data retrieval module is used to retrieve, from the database, the image training data of the training subject selected by the student client, to extract basic knowledge training data or workflow training data from the retrieved image training data according to a basic knowledge learning request or a workflow training request initiated by the student client, and to send those data to the student client;
the hands-on practice training module is used to receive the ultrasound image data uploaded by the teacher client, extract the standard data corresponding to the currently scanned body part from the image training data retrieved by the data retrieval module, recognize and analyze the ultrasound image data to obtain labeled images, feed the standard data and labeled images back to the teacher client, and generate prompt information; it is also used to receive a review request from the student client and feed the standard data and labeled images back to the student client;
the question bank generation module is used to receive the ultrasound videos or ultrasound pictures uploaded by the teacher client, generate a training question bank with individual ultrasound videos or pictures as training questions, generate an examination question bank with combinations of several ultrasound videos or pictures as examination questions, recognize and analyze the target frames and pictures in both banks to form machine answers, generate the corresponding standard answer bank, and store the training question bank, examination question bank and standard answer bank in the database;
the question bank training module is used to deliver the requested training or examination questions from the database to the student client, receive the manual answers uploaded by the student client, compare and analyze the manual answers against the corresponding machine answers to obtain a comparison result, and generate an assessment result from that comparison (a sketch of this comparison follows this list);
the learning data management module is used to store the learning data produced by the student client during basic knowledge learning, workflow training, question bank training and hands-on practice, and to feed that data back to the teacher client upon a learning status review request initiated by the teacher client; and
the answer correction module is used to extract, from the standard answer bank, the ultrasound video or picture corresponding to a machine answer to be corrected according to a question bank management request sent by the teacher client, feed it back to the teacher client, and receive and store the labeled image formed after the teacher client delineates the target object and labels parameter features on the corresponding target frame or picture.
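As noted for the question bank training module above, comparing a trainee's manual answer with the stored machine answer can be sketched as follows. The answer fields (conclusion, findings), the scoring weights and the pass threshold are assumptions for illustration only; the patent does not prescribe a particular scoring scheme.

```python
# Illustrative comparison of a manual answer with the machine answer.
# Field names, weights and the pass threshold are assumed, not specified.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Answer:
    conclusion: str                                       # e.g. "suspected_lesion" / "normal"
    findings: List[str] = field(default_factory=list)     # described image features

def compare_answers(manual: Answer, machine: Answer) -> dict:
    conclusion_match = manual.conclusion == machine.conclusion
    matched = set(manual.findings) & set(machine.findings)
    finding_recall = (len(matched) / len(machine.findings)) if machine.findings else 1.0
    # Assumed weighting: conclusion 60 %, findings 40 %.
    score = 0.6 * (1.0 if conclusion_match else 0.0) + 0.4 * finding_recall
    return {
        "conclusion_match": conclusion_match,
        "matched_findings": sorted(matched),
        "missed_findings": sorted(set(machine.findings) - matched),
        "score": round(score, 2),
        "passed": score >= 0.6,          # assumed pass threshold
    }

# Example: the trainee found the lesion but missed one described feature.
result = compare_answers(
    Answer("suspected_lesion", ["hypoechoic", "irregular_margin"]),
    Answer("suspected_lesion", ["hypoechoic", "irregular_margin", "posterior_shadowing"]),
)
```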
Furthermore, the recognition and analysis of ultrasound image data by the hands-on practice training module to obtain labeled images is implemented with an artificial-intelligence deep learning algorithm: an algorithm model and neural network are built by analyzing and learning from large volumes of ultrasound video and image data, deployed on the GPU and CPU, and used to dynamically process and recognize the input ultrasound image data, mark the position and area of the target object in the image, and output labeled images. The process specifically comprises the following steps:
data preprocessing: scaling, graying and normalizing every frame of the ultrasound image data;
building the learning model: constructing a deep-learning-based dynamic recognition neural network for ultrasound target objects or suspected lesions, training it with real image data from clinical ultrasound examinations, and optimizing the trained network to obtain a deep learning network model;
recognizing the target object or lesion: inputting the preprocessed ultrasound image data into the deep learning network model and outputting feature classification information of the target object or suspected lesion;
analyzing the target object or lesion: computing the actual position or edge of the target object or suspected lesion from its feature classification information; and
labeling and outputting the result: marking the actual position or edge of the target object or suspected lesion and outputting the labeled image.
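A minimal per-frame sketch of the steps above (preprocess, infer, locate, draw) is given below, assuming OpenCV and the classification model from the earlier training sketch; the input size, probability threshold, assumed class order and drawing style are illustrative assumptions rather than the patented implementation.

```python
# Illustrative per-frame pipeline: scale, gray, normalize, infer, and draw.
# Input size, probability threshold and drawing style are assumptions.
import cv2
import numpy as np
import torch

def preprocess(frame_bgr: np.ndarray, size: int = 224) -> torch.Tensor:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)       # graying
    gray = cv2.resize(gray, (size, size))                    # scaling
    norm = gray.astype(np.float32) / 255.0                   # normalization
    return torch.from_numpy(norm).unsqueeze(0)               # shape (1, H, W)

def label_frame(model: torch.nn.Module, frame_bgr: np.ndarray,
                threshold: float = 0.5) -> np.ndarray:
    """Run the recognition model on one frame and return a labeled copy."""
    x = preprocess(frame_bgr).unsqueeze(0)                   # shape (1, 1, H, W)
    model.eval()
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)[0]
    p_lesion = float(probs[1])                               # assumed class order
    out = frame_bgr.copy()
    if p_lesion >= threshold:
        # Placeholder localization: in practice the position/edge would come
        # from a detection or segmentation head, not from a fixed box.
        h, w = out.shape[:2]
        cv2.rectangle(out, (w // 4, h // 4), (3 * w // 4, 3 * h // 4), (0, 0, 255), 2)
        cv2.putText(out, f"suspected lesion {p_lesion:.2f}", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 0, 255), 2)
    return out
```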
Furthermore, the training question bank comprises a video training question bank and a picture training question bank: the video training question bank stores the ultrasound videos uploaded by the teacher client in real time, and the picture training question bank stores the ultrasound pictures uploaded by the teacher client in real time.
When a student practices with the question bank, the student first reads, analyzes and judges the target ultrasound video or picture manually to produce a manual answer; at the same time, intelligent assisted analysis of the target image produces a machine answer, which serves as the reference answer, and the manual answer is compared against the machine answer to evaluate the training effect.
An instructor can import a segment of ultrasound video or several ultrasound pictures as a training question, or combine several ultrasound videos and pictures into an examination question; the instructor can also selectively run intelligent assisted analysis on some of the images to quickly form machine answers, then confirm them and correct any erroneous machine answers.
Examination questions can be generated manually or automatically: in the automatic mode the machine generates examination questions from the training question bank by random selection and combination (a minimal sketch follows), while in the manual mode a training physician or instructor creates the examination questions and enters them into the examination question bank.
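A minimal sketch of the automatic mode, assuming each question is a simple record and the paper size is configurable (the parameter names, example bank entries and default of five questions are assumptions):

```python
# Illustrative automatic exam generation: randomly select and combine
# questions from the training question bank. Defaults are assumptions.
import random
from typing import Dict, List, Optional

def generate_exam(training_bank: List[Dict], num_questions: int = 5,
                  seed: Optional[int] = None) -> List[Dict]:
    """Return a randomly selected and ordered set of questions."""
    if num_questions > len(training_bank):
        raise ValueError("training question bank is too small for the requested paper")
    rng = random.Random(seed)
    return rng.sample(training_bank, num_questions)   # random selection and ordering

# Example bank entries mixing video and picture questions.
bank = [
    {"id": 1, "type": "video", "subject": "thyroid"},
    {"id": 2, "type": "picture", "subject": "breast"},
    {"id": 3, "type": "video", "subject": "liver"},
    {"id": 4, "type": "picture", "subject": "carotid"},
    {"id": 5, "type": "video", "subject": "breast"},
    {"id": 6, "type": "picture", "subject": "thyroid"},
]
exam = generate_exam(bank, num_questions=4, seed=42)
```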
Further, the student client includes:
the training subject selection module is used to initiate a training subject selection request and select a training subject;
the first upload module is used to upload to the management server the ultrasound image data, or the manual answers produced by the student during question bank training;
the learning module is used to initiate basic knowledge learning, workflow training, question bank training or hands-on practice requests, to receive and display the basic knowledge training data, workflow training data, standard data and labeled images of the scanned body part, and the corresponding training or examination questions returned by the management server, and to carry out ultrasound image training and learning;
the review module is used to receive and review the machine answers, comparison results and assessment results fed back by the management server, so as to obtain the learning outcome; and
the prompt module is used to receive the prompt information sent by the management server and emit a prompt sound; specifically, a speaker or buzzer can sound to remind the student that a target object has been found, and the sound can be muted at any time or its interval adjusted.
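A minimal sketch of such a prompt controller, with an adjustable minimum interval and a mute switch; the class name and timing scheme are assumptions, and the print call merely stands in for driving a speaker or buzzer:

```python
# Illustrative prompt controller: mute switch plus adjustable minimum interval
# between prompt sounds. Class name and timing scheme are assumptions.
import time

class PromptController:
    def __init__(self, min_interval_s: float = 2.0):
        self.min_interval_s = min_interval_s   # adjustable prompt interval
        self.muted = False
        self._last_prompt = 0.0

    def set_interval(self, seconds: float) -> None:
        self.min_interval_s = seconds

    def mute(self, on: bool = True) -> None:
        self.muted = on

    def on_target_found(self) -> bool:
        """Return True (and emit a beep) if a prompt should sound now."""
        now = time.monotonic()
        if self.muted or now - self._last_prompt < self.min_interval_s:
            return False
        self._last_prompt = now
        print("\a", end="")            # stand-in for driving a speaker/buzzer
        return True
```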
Further, the teacher client includes:
the second upload module is used to upload ultrasound videos or ultrasound pictures of typical cases to the management server;
the question bank editing module is used to initiate question bank management requests and to create, delete or modify data in the question bank; and
the learning data viewing module is used to initiate a request to view a student's learning status, and to receive and view the student client's completed learning data fed back by the management server.
Specifically, a question bank management request covers creating, deleting and modifying the question bank, and correcting answers. The answer correction module is mainly used to perform multi-parameter feature labeling and target object delineation on images that produced erroneous machine answers, forming labeled images; these labeled images are stored as important annotation data and can later be used to retrain and optimize the neural network, continuously improving its recognition accuracy.
Compared with the prior art, the invention discloses an artificial-intelligence-based intelligent training method for ultrasound images, a training system and an application system. The training system implemented with the intelligent training method consists of a trainee training subsystem, an instructor management subsystem, an artificial intelligence subsystem and a system administrator subsystem. It provides trainees with a flexible learning environment that is not restricted by location or time, achieves self-directed ultrasound training and self-assessment under the assistance and supervision of the artificial intelligence system, greatly reduces the training workload of ultrasound departments, substantially improves training efficiency, and safeguards training quality in a more scientific way. Moreover, the training system can be deployed in several modes, including a networked mode, a stand-alone mode, and an integrated or embedded mode.
Drawings
To explain the embodiments of the invention or the technical solutions of the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of an ultrasound image intelligent training system based on artificial intelligence according to the present invention;
FIG. 2 is a functional block diagram of a trainee training subsystem according to an embodiment of the present invention;
FIG. 3 is a functional architecture diagram of a mentor management subsystem according to an embodiment of the present invention;
FIG. 4 is a functional architecture diagram of an administrator subsystem according to an embodiment of the present invention;
FIG. 5 is a functional architecture diagram of an artificial intelligence subsystem in an embodiment of the invention;
FIG. 6 is a schematic diagram of an overall structure of an ultrasound image intelligent training application system based on artificial intelligence according to the present invention;
FIG. 7 is a diagram illustrating the specific architecture of the management server, the student client, and the teacher client according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of an image displayed in a four-quadrant display mode according to an embodiment of the present invention;
fig. 9 is a schematic view of another image displayed in a four-quadrant display mode according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In a first aspect, the embodiment of the invention discloses an artificial-intelligence-based intelligent training method for ultrasound images, comprising the following steps:
Step 1: assisting the instructor in collecting ultrasound video images, reading and editing them, and creating training data for trainee training;
Step 2: performing artificial-intelligence-assisted analysis on the ultrasound video images in the training data to generate machine answers;
Step 3: assisting the instructor in verifying the machine answers and in labeling and storing the erroneous data among them;
Step 4: assisting trainees in basic knowledge learning, workflow training, image-reading practice and assessment based on the training data, the machine answers and the instructor's annotation data, and assisting trainees in hands-on ultrasound scanning practice; and
Step 5: assisting the instructor in reviewing trainees' learning progress and in compiling statistics and analyses of the trainees' training and assessment data.
In a second aspect, referring to fig. 1, the present invention further provides an artificial intelligence based ultrasound image intelligent training system, comprising:
the trainee training subsystem 11 is used for providing trainees with application, training, assessment and training record retrieval functions;
the instructor management subsystem 12 is used for providing instructors with question bank management, trainee account management, a data acquisition and labeling tool, and an intelligent dynamic video acquisition and analysis tool;
the artificial intelligence subsystem 13 is used for deploying various artificial intelligence neural networks and performing intelligent computation; and
the system administrator subsystem 14 is used for coordinating, controlling and managing the entire training system.
Referring to fig. 2 for the functions of the trainee training subsystem 11, the trainee training subsystem 11 includes:
the training/assessment application module, used to manage trainees' schedules and the approval workflow for planning, applying for or confirming training and assessment; it is mainly used for trainee account registration, with registration information including name, title, department, contact information, reason for application and the like;
the training module, comprising three stages of training: the stage one training content includes basic ultrasound theory training, standardized physician basic training, image-reading practice and operation training, while the stage two and stage three training content includes image-reading practice and operation training;
the assessment module, comprising three stages of assessment, each including an operation assessment, specifically assessments of operation standardization, detection rate, diagnostic accuracy and report standardization; and
the training and assessment record query, used by trainees to query training and assessment records and retrieve training-related data.
The image-reading training content specifically comprises:
static picture reading practice, in which a trainee interprets the image of a practice question; if a suspected lesion is found, the lesion must be measured and an ultrasound findings description given, and the findings description and the final ultrasound impression can be presented as multiple-choice questions; the trainee can view the answers immediately after submitting the exercise.
dynamic video reading practice, in which a trainee reads one or more segments of dynamic multi-frame ultrasound video, compares them, captures pictures, analyzes multi-frame images, and plays back and cross-compares multi-directional videos to reach a diagnostic conclusion.
The workflow of dynamic video reading practice specifically comprises:
video playback, whose display can adopt a single-quadrant, dual-quadrant or four-quadrant display mode;
dynamic image reading, i.e. replaying and interpreting each segment of dynamic video presented in the question bank;
dynamic image comparison: while viewing a dynamic video, if a suspected lesion is found, the trainee uses the videos from several scanning directions (e.g. transverse, longitudinal and oblique) provided in the video question bank and compares them in the dual-quadrant or four-quadrant display mode to judge whether different scanning sections of the same location show the lesion, thereby confirming the suspected lesion;
image capture: for a suspected lesion found during playback of the videos from different scanning directions, several typical ultrasound frames are captured for image analysis;
image analysis: a typical picture is selected from the pictures captured in each scanning direction, measured and described, and kept as the retained image;
diagnostic conclusion: the final interpretation conclusion is presented as a multiple-choice question to facilitate answer comparison; and
answer comparison: the trainee can view and compare the answers immediately after submitting the exercise.
The trainee training content mainly comprises:
basic ultrasound theory training, which provides basic training in the principles and knowledge of medical ultrasound imaging, including: the basic principles of ultrasound imaging, the anatomy of specific body parts and the corresponding ultrasound imaging cases, common lesion cases of those parts, and the setup and controls of common ultrasound equipment;
standardized basic training for ultrasound examination, which trains trainees in the basic standardized procedures of ultrasound examinations, including (without limitation): patient positioning, scanning range, scanning method, manipulation standards, detection and image retention standards, measurement and marking standards, ultrasound findings description, and report standards;
image-reading training, which comprises static picture reading and dynamic video reading; static picture reading trains the trainee's ability to interpret and diagnose a single ultrasound picture, while dynamic video reading trains the trainee's ability to find and interpret suspicious lesions in a played-back segment of dynamic ultrasound video and to retain images of, quantitatively analyze and diagnose the lesions; the trainee can view and compare the answers immediately after submitting an exercise; and
operation training, which includes virtual reality training and magnetic navigation positioning system training, with the focus on training the trainee's hand-eye coordination so as to enhance lesion detection ability in ultrasound examinations.
Training stages: training is divided into three stages, namely stage one (primary), stage two (intermediate) and stage three (advanced) training, with the corresponding training content differing by stage.
The trainee assessment content mainly comprises:
operation standardization assessment, in which, for each specific subject, the trainee's standardized performance of the ultrasound examination is assessed and scored, for example the preparation before the examination (patient preparation, instrument settings, etc.), patient positioning, scanning method and examination content;
detection rate assessment, in which the trainee interprets a certain number of video images or operates the magnetic navigation virtual reality simulation system on a certain number of cases, and the proportion of lesions the trainee finds and detects is evaluated (see the sketch after this list);
diagnostic accuracy assessment, in which the trainee's ability to analyze and diagnose the detected suspected lesions is evaluated and scored according to diagnostic accuracy; and
report standardization assessment, which evaluates the standardization of the trainee's ultrasound report writing and image documentation.
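A minimal sketch of how the detection rate and diagnostic accuracy metrics referenced above might be computed; the field names and the structure of a case record are assumptions for illustration only:

```python
# Illustrative computation of detection rate and diagnostic accuracy over
# a set of assessed cases. Field names are assumptions.
from typing import Dict, List

def detection_rate(cases: List[Dict]) -> float:
    """Fraction of lesion-containing cases in which the trainee detected the lesion."""
    with_lesion = [c for c in cases if c["has_lesion"]]
    if not with_lesion:
        return 1.0
    detected = sum(1 for c in with_lesion if c["trainee_detected"])
    return detected / len(with_lesion)

def diagnostic_accuracy(cases: List[Dict]) -> float:
    """Fraction of detected lesions that the trainee diagnosed correctly."""
    detected = [c for c in cases if c["has_lesion"] and c["trainee_detected"]]
    if not detected:
        return 0.0
    correct = sum(1 for c in detected if c["trainee_diagnosis"] == c["true_diagnosis"])
    return correct / len(detected)

cases = [
    {"has_lesion": True, "trainee_detected": True,
     "trainee_diagnosis": "benign", "true_diagnosis": "benign"},
    {"has_lesion": True, "trainee_detected": False,
     "trainee_diagnosis": None, "true_diagnosis": "malignant"},
    {"has_lesion": False, "trainee_detected": False,
     "trainee_diagnosis": None, "true_diagnosis": None},
]
print(detection_rate(cases), diagnostic_accuracy(cases))   # 0.5 1.0
```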
Assessment stages: corresponding to the three training stages, assessment is likewise divided into three stages, namely stage one (primary), stage two (intermediate) and stage three (advanced) assessment, with the corresponding assessment content differing by stage.
The operation training content specifically comprises:
a digital phantom, formed by combining an ultrasound phantom of a specific body part, clinically acquired 3D ultrasound image data of that part, and the corresponding computer-generated 3D anatomical simulation data;
magnetic navigation positioning, in which a magnetic navigation positioning system is fitted to the ultrasound probe to achieve accurate spatial positioning of the probe; and
virtual reality scanning simulation, in which an ultrasound probe equipped with magnetic navigation scans the digital phantom, the ultrasound virtual image of each scanning section is dynamically shown on a large screen, and the 3D/2D anatomical simulation images are displayed synchronously for comparison, deepening trainees' understanding and interpretation of ultrasound imaging, training their hand-eye coordination and enhancing lesion detection ability in ultrasound examinations.
Specifically, referring to FIG. 3 for the primary functions of the instructor management subsystem 12, the instructor management subsystem 12 includes:
the question bank management module, used by the instructor to create, edit and manage the specific training data and assessment data under each training subject; taking the Motechomed question bank as an example, it covers question bank generation and editing (basic knowledge, standardized training, exercise question bank and examination question bank), question bank content management (exercise question bank content and examination question bank content), question bank answer generation (manual answers and machine answers) and question bank lookup;
the account management module, used by the instructor for account review, viewing exercise and examination records, managing trainees' training permissions (stage one, stage two and stage three), and compiling statistics on training exercises and examination results;
the data acquisition and labeling tool module (i.e. the AI data labeling tool), used to import images, delineate and label them, and save the data; and
the intelligent dynamic video acquisition and analysis tool module, used to assist the instructor in importing or clinically acquiring data, capturing and displaying video, dynamic recognition and prompting on ultrasound images, intelligent acquisition of video and image data (images that trigger a prompt are saved automatically, and this function can be switched on or off as needed), management of the acquired videos and images, intelligent diagnosis-assisting analysis, and data saving (by default the data can be saved as a clinical case to a designated directory, and can also be backed up as a practice question, in which case the practice question must be named).
Specifically, referring to fig. 4, for the main functions of the artificial intelligence subsystem 13, the artificial intelligence subsystem 13 includes:
the ultrasound image dynamic recognition and prompting module, used to analyze and learn from ultrasound video and image data by artificial-intelligence deep learning, build an algorithm model and neural network, deploy the neural network on an artificial intelligence engine to dynamically process and recognize input ultrasound image data, mark the position and area of a suspected lesion, and output labeled images;
the recognition prompt tone control module, used to sound an alert when a target object is found and to dynamically mark the recognized image frames during video playback;
the intelligent dynamic image acquisition and recognition module, used to mark every recognized dynamic image frame and record video automatically: recording starts a number of frames (for example, 50 frames) before the first recognized frame and stops when no recognized frame is found within a number of frames (for example, 50 frames) after the most recent recognized frame, and this process repeats;
the ultrasound image intelligent assisted analysis module, which uses artificial-intelligence deep learning to build an image recognition algorithm model and neural network from large volumes of ultrasound images of confirmed lesions and normal tissue of a specific body part, deploys the neural network on an artificial intelligence engine, processes and recognizes input ultrasound images, and finally produces a recognition conclusion as an auxiliary diagnostic reference;
the ultrasound dynamic image intelligent assisted analysis module, which combines the ultrasound image dynamic recognition and prompting module with the ultrasound image intelligent assisted analysis module to perform dynamic image recognition, frame marking, classification and naming, and weighted multi-frame intelligent assisted analysis on the recognized target image data (a sketch of one weighted aggregation scheme follows this list); and
the diagnosis report generation module, which generates the diagnosis report.
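The patent does not specify how the multi-frame weighting referenced above is carried out; the sketch below shows one plausible scheme in which per-frame classification confidences are combined with per-frame weights (e.g. detection confidence or image quality) to produce a clip-level result. All weights, field names and class names are assumptions.

```python
# Illustrative weighted aggregation of per-frame classification scores into a
# clip-level assisted-analysis result. The weighting scheme is an assumption.
from typing import Dict, List

def aggregate_clip(frame_results: List[Dict],
                   classes=("normal", "suspected_lesion")) -> Dict:
    """frame_results: [{"probs": {class: prob, ...}, "weight": float}, ...]"""
    totals = {c: 0.0 for c in classes}
    weight_sum = sum(r["weight"] for r in frame_results) or 1.0
    for r in frame_results:
        for c in classes:
            totals[c] += r["weight"] * r["probs"].get(c, 0.0)
    scores = {c: totals[c] / weight_sum for c in classes}
    best = max(scores, key=scores.get)
    return {"class": best, "score": round(scores[best], 3), "per_class": scores}

# Example: three recognized frames, weighted by detection confidence.
clip = aggregate_clip([
    {"probs": {"normal": 0.30, "suspected_lesion": 0.70}, "weight": 0.9},
    {"probs": {"normal": 0.45, "suspected_lesion": 0.55}, "weight": 0.6},
    {"probs": {"normal": 0.20, "suspected_lesion": 0.80}, "weight": 1.0},
])
```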
Specifically, referring to fig. 5, for the primary functions of system administrator subsystem 14, system administrator subsystem 14 includes:
the system maintenance module, used by the system administrator to maintain and control the entire training system and every client, mainly covering system and interface configuration and maintenance, client configuration, and system data storage and backup;
the account management module, used by the system administrator to create accounts (with name, title, department, contact information and so on), set the account type (administrator account, ordinary instructor, trainee physician, or screening technician in training), view accounts, compile statistics on exercises and assessments, set other account permissions (including uploading questions to the bank, withdrawing, modifying and viewing the question bank, and assessment review), and delete, freeze or modify accounts and reset passwords;
the training requirement module, used to set and evaluate indicators such as the work tasks, plans and training quality of training instructors;
the question bank content review module, used to review the training content submitted by instructors, implement question bank maintenance and decide whether questions submitted by an instructor enter the bank, for example a dedicated question bank covering a static picture bank, a dynamic video bank, a virtual phantom bank, and question modification and deletion; and
the database management module, used to manage all relevant data in the training system, including account information (user data, user training and usage history), the training and assessment content of each subject, and the collected and labeled data.
In addition, the system administrator subsystem also provides functions such as sub-topic management and exiting the system.
Referring to fig. 6, an embodiment of the present invention discloses an ultrasound image intelligent training application system based on artificial intelligence, which includes: at least two clients and at least one management server 21;
the at least two clients are a student client 22 and a teacher client 23 respectively;
the management server 21 is used to provide information connectivity and data support, so that the student client 22 can implement training subject selection, basic knowledge learning, workflow training, question bank training and hands-on practice, and the teacher client 23 can implement question bank management, review of students' learning status, and related functions.
In a specific embodiment, referring to fig. 7, the management server 21 specifically includes:
an acquisition module 211, configured to receive training-related requests initiated by the student client 22 or the teacher client 23, including training subject selection, basic knowledge learning, workflow training, question bank training, hands-on practice, question bank management, and review of students' learning status;
a database 212, configured to store the image training data corresponding to each training subject;
a data retrieval module 213, configured to retrieve, from the database 212, the image training data of the training subject selected by the student client 22, to extract basic knowledge training data or workflow training data from the retrieved image training data according to a basic knowledge learning request or a workflow training request initiated by the student client 22, and to send those data to the student client 22;
a hands-on practice training module 214, configured to receive the ultrasound image data uploaded by the teacher client 23, extract the standard data corresponding to the currently scanned body part from the image training data retrieved by the data retrieval module 213, recognize and analyze the ultrasound image data to obtain labeled images, feed the standard data and labeled images back to the teacher client 23, generate prompt information, send the standard data and labeled images to the student client 22 upon its request, and send the prompt information to the student client 22;
a question bank generation module 215, configured to receive the ultrasound videos or ultrasound pictures uploaded by the teacher client 23, generate a training question bank with them as training questions, generate an examination question bank with combinations of several ultrasound videos or pictures as examination questions, recognize and analyze the target frames and pictures in both banks to form machine answers, generate the corresponding standard answer bank, and store the training question bank, examination question bank and standard answer bank in the database 212;
a question bank training module 216, configured to deliver the requested training or examination questions from the database 212 to the student client 22, receive the manual answers uploaded by the student client 22, compare and analyze them against the corresponding machine answers to obtain a comparison result, and generate an assessment result from that comparison; and
a learning data management module 217, configured to store the learning data produced by the student client 22 during basic knowledge learning, workflow training, question bank training and hands-on practice, and to feed that data back to the teacher client 23 upon a learning status review request initiated by the teacher client 23.
Specifically, a question bank management request covers creating, deleting and modifying the question bank, and correcting answers. When the question bank management request is an answer correction, the management server 21 further includes:
an answer correction module 218, configured to extract, from the standard answer bank, the ultrasound video or picture corresponding to the machine answer to be corrected according to an answer correction request sent by the teacher client 23, feed it back to the teacher client 23, and receive and store the labeled image formed after the teacher client 23 delineates the target object and labels parameter features on the corresponding target frame or picture.
Referring to fig. 7, the student client 22 in the present embodiment includes:
a training subject selection module 221, configured to initiate a training subject selection request and select a training subject; the ultrasound subjects may include breast, liver, spleen, biliary tract, pancreas, gastrointestinal tract, urinary system, gynecology, obstetrics, musculoskeletal and interventional ultrasound, as well as subjects such as peripheral blood vessels, carotid artery and thyroid, so the corresponding training subject can be selected according to training needs, and the general functions of all subjects are essentially the same;
a first upload module 222, configured to upload to the management server 21 the ultrasound image data, or the manual answers produced by the student during question bank training;
a learning module 223, configured to initiate basic knowledge learning, workflow training, question bank training or hands-on practice requests, to receive and display the basic knowledge training data, workflow training data, standard data and labeled images of the scanned body part, and the corresponding training or examination questions returned by the management server, and to carry out ultrasound image training and learning;
a review module 224, configured to receive and review the machine answers, comparison results and assessment results fed back by the management server 21 to obtain the learning outcome; and
a prompt module 225, configured to receive the prompt information sent by the management server 21 and emit a prompt sound; specifically, a speaker or buzzer can sound to remind the student that a target object has been found, and the sound can be muted at any time or its interval adjusted.
Referring to fig. 7, the teacher client 23 in this embodiment includes:
a second upload module 231, configured to upload ultrasound videos or ultrasound pictures of typical cases to the management server;
a question bank editing module 232, configured to initiate question bank management requests and to create, delete or modify data in the question bank; and
a learning data viewing module 233, configured to initiate a request to view a student's learning status, and to receive and view the student client's completed learning data fed back by the management server.
In this embodiment, some functions of the system require an external video device. The video signal output by the external video device (e.g. an ultrasound machine or a video workstation) is fed to the system interface either by wire (e.g. HDMI, DVI, S-video, RGB, network port, USB or DICOM interface) or wirelessly (e.g. 5G, 4G or Wi-Fi), converted into digital video data by an image acquisition card or image acquisition module, and passed to the system main program (and the management server) for processing; the main program covers system and interface control, CPU and GPU computation, storage and display control. The management server first preprocesses the input image data and then passes it to the ultrasound hands-on practice training module for image display processing, intelligent dynamic recognition, and delineation and localization. The hands-on practice training module also provides the functions commonly used in ultrasound examinations, such as freezing, saving, recording, editing, measuring, marking and annotating images, describing ultrasound findings and describing the diagnosis, to facilitate trainee practice.
The preprocessed image data is displayed synchronously on an external image display in either a single-quadrant or a four-quadrant mode. The single-quadrant mode displays the input image full screen, i.e., it mirrors the ultrasonic image of the external ultrasound device. The four-quadrant mode is shown in figs. 8 and 9; the four quadrants may be defined as the input ultrasonic dynamic image, the reference ultrasonic image of the probe section at the scanning position, the anatomical image of the probe section at the scanning position, and the image of the probe placement at the scanning position, or they may be defined as four different ultrasonic images. The quadrants can be freely reordered and their positions switched; the content of any quadrant can be edited and displayed magnified or full screen (single quadrant), and the four-quadrant and single-quadrant modes can be switched back and forth. The purpose of the four-quadrant mode is to help the user understand the meaning of the ultrasonic image by comparing the ultrasonic scanning section with its corresponding anatomical map, and to help the student accurately control the scanning angle and handling of the probe through a standard manipulation diagram.
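The four-quadrant layout itself can be composed in software before being sent to the display. The sketch below is a minimal illustration, assuming the four source images are already available as BGR NumPy arrays; the function name and cell size are assumptions made for the example.

```python
import cv2
import numpy as np

def compose_quadrants(live, reference, anatomy, placement, cell=(480, 640)):
    """Arrange four images in a 2x2 grid: live ultrasound (top-left),
    reference section image (top-right), anatomical map (bottom-left),
    probe placement diagram (bottom-right)."""
    h, w = cell
    tiles = [cv2.resize(img, (w, h)) for img in (live, reference, anatomy, placement)]
    top = np.hstack(tiles[:2])
    bottom = np.hstack(tiles[2:])
    return np.vstack([top, bottom])   # single composite image for the external display
```

Reordering the `tiles` list reorders the quadrants, and returning a single resized tile instead of the composite corresponds to the single-quadrant (full-screen) mode.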
Dynamic intelligent recognition and reminding in the practical intelligent ultrasound training is realized with an artificial intelligence deep learning method. An algorithm model and a neural network are constructed by analyzing and learning from large volumes of ultrasonic image data; the neural network is deployed on the GPU and CPU to dynamically process and recognize the input ultrasonic images, and the positions and regions of the recognized target objects are marked and provided to the main program for dynamic target delineation and real-time display. The specific process comprises: data preprocessing, algorithm model construction, inference, target object or lesion analysis, and output of the recognition result with delineation. These steps are described below (a minimal code sketch of the per-frame pipeline follows step S5):
S1: Data preprocessing: the ultrasonic video image data is preprocessed, including scaling, graying, and normalizing each frame;
S2: Algorithm model construction: a deep-learning-based neural network for dynamic recognition of ultrasonic image targets or suspected lesions is constructed, trained with real image data from clinical ultrasound examination practice, and the trained model is optimized to obtain a deep learning network model. Specifically, to meet the requirement of fast recognition of dynamic images, a large amount of clinically verified ultrasonic image data for a specific subject is selected and subjected to lesion target delineation and feature labeling to form positive labeled data, and a suitable initial algorithm model is constructed with an artificial intelligence deep learning method; the positive labeled data, together with a certain proportion of negative data generated from normal tissue images, is then fed into the initial algorithm model, which is trained repeatedly to obtain a deep learning network model suited to fast recognition of dynamic image targets;
S3: Image recognition inference: the preprocessed ultrasonic image data is input to the deep learning dynamic recognition network model for inference, and the feature classification information of the target object or suspected lesion is computed;
S4: Target object or lesion analysis: the actual position or edge of the target object or suspected lesion is computed from the inference result;
S5: Result output and delineation: the actual position or edge is marked, and the marked image is output and displayed.
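To make steps S1-S5 concrete, the following sketch shows one possible per-frame loop; it assumes a generic detection model exposing a `predict` method that returns bounding boxes and confidence scores in network input coordinates, which is an assumption for illustration and not the specific network of this disclosure.

```python
import cv2
import numpy as np

INPUT_SIZE = (512, 512)   # assumed network input size (width, height)

def preprocess(frame):
    """S1: scale, gray and normalize a single video frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, INPUT_SIZE)
    return resized.astype(np.float32) / 255.0

def recognize_and_delineate(frame, model, threshold=0.5):
    """S3-S5: run inference, keep confident targets, and delineate them
    on the original frame for real-time display."""
    tensor = preprocess(frame)[None, None, ...]       # NCHW batch of one
    boxes, scores = model.predict(tensor)             # assumed model interface
    sx = frame.shape[1] / INPUT_SIZE[0]
    sy = frame.shape[0] / INPUT_SIZE[1]
    for (x1, y1, x2, y2), score in zip(boxes, scores):
        if score < threshold:
            continue                                  # discard weak detections
        p1 = (int(x1 * sx), int(y1 * sy))
        p2 = (int(x2 * sx), int(y2 * sy))
        cv2.rectangle(frame, p1, p2, (0, 255, 0), 2)  # S5: mark the target region
    return frame
```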
In the image reading training process, a student manually reads, analyzes, and interprets a target ultrasonic video or image to generate a manual answer, while the target image is intelligently analyzed to generate a machine answer. The machine answer serves as the training reference answer, and the manual answer is compared with the machine answer to evaluate the training effect (a sketch of the comparison and scoring step follows the numbered list below). The process specifically comprises the following steps:
1) a user reads a segment of video and stores, or temporarily stores, one or several frames containing a suspected lesion as target images for image reading practice;
2) the user manually interprets, measures, analyzes and diagnoses the target image to generate a manual answer;
3) the user intelligently analyzes the target images with an intelligent auxiliary analysis method to obtain an analysis result, and this intelligent analysis result serves as the machine answer, i.e., the reference answer for the image reading training;
the generation process of the machine answer specifically includes:
a1: desensitizing and preprocessing real image data in the clinical practice of ultrasonic examination;
a2: carrying out multi-parameter feature labeling and target object sketching on the processed data to generate a labeled image;
a3: performing data enhancement processing on the marked image to obtain sample data;
a4: carrying out deep learning algorithm calculation and training on the sample data to obtain an ultrasonic image auxiliary analysis initial algorithm model;
a5: retraining and optimizing the image-aided analysis initial algorithm model by using real image data in the ultrasonic examination clinical practice to obtain an ultrasonic image-aided analysis neural network model;
steps a4 and a5 specifically comprise: performing deep learning algorithm model computation and training on the sample data with an artificial intelligence deep learning method, thereby obtaining an initial algorithm model suitable for ultrasonic image target recognition and analysis; then inputting the positive labeled data, combined with a certain proportion of negative data generated from normal tissue image data, into the initial algorithm model, and training the model repeatedly to obtain a deep learning network model suitable for recognizing and analyzing ultrasonic image targets or target lesions;
a6: preprocessing the target image, inputting the preprocessed target image into the ultrasonic image auxiliary analysis neural network for deep network computation, and computing the feature classification information of the target object or suspected lesion;
a7: and resolving an auxiliary diagnosis conclusion of the target object or the suspected lesion according to the characteristic classification information to obtain a machine answer.
4) comparing the manual answer with the machine answer, analyzing the differences, and storing the comparison result;
5) repeating the training process, finally counting and scoring the training accuracy, evaluating the training effect, and storing all the information in a personal training file.
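As an illustration of steps 4) and 5), the sketch below compares a manual answer with the machine reference answer field by field and accumulates the training accuracy over repeated exercises; the answer fields and the scoring rule are invented for the example and are not prescribed by the method.

```python
from dataclasses import dataclass, field

@dataclass
class Answer:
    lesion_present: bool     # whether a lesion was judged to be present
    category: str            # e.g. a grading label (illustrative field)
    size_mm: float           # measured lesion size

@dataclass
class TrainingRecord:
    results: list = field(default_factory=list)

    def compare(self, manual: Answer, machine: Answer, size_tol_mm: float = 2.0) -> bool:
        """Step 4): compare one manual answer against the machine reference answer."""
        match = (manual.lesion_present == machine.lesion_present
                 and manual.category == machine.category
                 and abs(manual.size_mm - machine.size_mm) <= size_tol_mm)
        self.results.append(match)
        return match

    def accuracy(self) -> float:
        """Step 5): overall training accuracy across all completed exercises."""
        return sum(self.results) / len(self.results) if self.results else 0.0
```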
The process of intelligent auxiliary analysis of ultrasonic dynamic images with an artificial intelligence deep learning method comprises the following steps (a sketch of the weighted multi-frame aggregation follows the list):
data preprocessing: preprocessing target ultrasonic dynamic image data, comprising: carrying out scaling, graying and normalization processing on each frame of image;
target recognition and classification: the ultrasonic image dynamic recognition and prompting module performs target object recognition on the preprocessed target data, marks each recognized frame, classifies all recognized frames, and names each class a found target object N;
and target interpretation: the ultrasonic image intelligent auxiliary analysis module performs intelligent auxiliary analysis on one or several single-frame pictures of the found target object N, weights the per-frame analysis results, and obtains the analysis result of the recognized target object after comprehensive calculation.
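A minimal sketch of the weighted multi-frame interpretation step follows; it assumes each recognized frame of a found target object yields a per-class probability vector together with a detection confidence used as its weight, and the simple weighted average is one illustrative choice of weighting.

```python
import numpy as np

def aggregate_frames(frame_probs, frame_confidences):
    """Combine per-frame classification results for one found target object.
    frame_probs: (n_frames, n_classes) class probabilities per frame.
    frame_confidences: (n_frames,) detection confidences used as weights (assumed > 0)."""
    probs = np.asarray(frame_probs, dtype=float)
    weights = np.asarray(frame_confidences, dtype=float)
    weights = weights / weights.sum()                  # normalize the weights
    combined = (weights[:, None] * probs).sum(axis=0)  # weighted average over frames
    return int(np.argmax(combined)), combined          # final class and combined scores
```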
The training instructor can import a segment of ultrasonic video or several ultrasonic pictures to form a training question, or combine multiple videos and pictures into an examination question. The instructor can selectively run intelligent auxiliary analysis on some of the images to quickly form machine answers, then confirm the machine answers and correct the erroneous ones (a sketch of a possible question bank representation follows the list below). The process specifically comprises the following steps:
1) dividing training questions into video training questions and image training questions;
2) the training instructor (physician) imports an ultrasonic video of a typical case into the video training question bank, or several ultrasonic pictures of the typical case into the image training question bank;
3) running intelligent auxiliary analysis to quickly form machine answers, then checking the machine answers and correcting the erroneous ones to form standard answers for the training questions; images that required manual labeling and delineation are stored as important optimization data;
4) combining a plurality of image videos and images to form examination questions in a manual or automatic mode;
5) performing the same intelligent auxiliary analysis on the target images in the examination question videos and on each examination question picture to form standard answers for the examination questions.
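One possible in-memory representation of such question bank entries is sketched below; the class and field names are assumptions made for illustration, not the data model of the disclosed system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TrainingQuestion:
    question_id: str
    kind: str                                # "video" or "image"
    media_paths: List[str]                   # ultrasound video or picture files
    machine_answer: Optional[dict] = None    # produced by intelligent auxiliary analysis
    standard_answer: Optional[dict] = None   # machine answer verified/corrected by the instructor

@dataclass
class ExamQuestion:
    exam_id: str
    items: List[TrainingQuestion] = field(default_factory=list)  # combined videos and pictures
```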
If the instructor finds that some machine answers are wrong, the images that produced the wrong machine answers can be feature-labeled and target-delineated, and stored as important labeled data for future retraining and optimization of the neural network (a schematic fine-tuning loop follows the list below). The method specifically comprises:
1) if the intelligent auxiliary analysis of a target image is found to contain an obvious error or misjudgment, the training physician or instructor performs target object delineation and multi-parameter feature labeling on the original image data to form a labeled image, which is stored as important optimization labeling data;
2) the important optimization labeling data can be used for retraining and re-optimizing the neural network so as to continuously improve the identification accuracy of the network.
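A schematic fine-tuning loop for step 2) is sketched below; it assumes a PyTorch classification-style model and a dataset built from the instructor-corrected labeled images, and the loss function, optimizer and hyperparameters are illustrative assumptions only.

```python
import torch
from torch.utils.data import DataLoader

def retrain_with_corrections(model, corrected_dataset, epochs=5, lr=1e-4, batch_size=8):
    """Fine-tune an existing recognition network on instructor-corrected labels."""
    loader = DataLoader(corrected_dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            outputs = model(images)            # forward pass on corrected samples
            loss = criterion(outputs, labels)  # penalize the previous misjudgments
            loss.backward()
            optimizer.step()
    return model
```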
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An ultrasonic image intelligent training method based on artificial intelligence is characterized by comprising the following steps:
assisting an instructor in collecting ultrasonic video images, reading and editing the ultrasonic video images, and creating training data for student training;
carrying out artificial intelligence auxiliary analysis on the ultrasonic video image in the training data to generate a machine answer;
assisting the instructor in verifying the machine answers, and in labeling and storing erroneous data among the machine answers;
assisting students in performing basic knowledge learning, process training, image reading practice, and assessment according to the training data, the machine answers, and the instructor's labeled data, and assisting the students in performing ultrasonic scanning practice training;
assisting the instructor in reviewing the students' learning status and in performing statistics and analysis on the students' training and assessment data.
2. An ultrasonic image intelligent training system based on artificial intelligence is characterized by comprising:
the trainee training subsystem is used for providing functions of application, training, assessment and training record retrieval for trainees;
the teacher management subsystem is used for providing functions of question bank management, student account management, data acquisition and labeling tools and dynamic video intelligent acquisition and analysis tools for a teacher;
the artificial intelligence subsystem is used for deploying various artificial intelligence neural networks and carrying out intelligent operation; and
and the system administrator subsystem is used for coordinating, controlling and managing the whole training system.
3. The system for intelligent training of ultrasound image based on artificial intelligence as claimed in claim 2, wherein said trainee training subsystem comprises:
the training/assessment application module is used for managing the schedule and the approval process of trainees for planning, applying or confirming training and assessment;
the trainee training and examining module is used for trainees to learn, train and examine theoretical knowledge and to read and practice training in each stage; and
and the training and examination record query module is used for trainees to query training and examination records and retrieve training-related data.
4. The system for intelligent training of ultrasound image based on artificial intelligence as claimed in claim 2, wherein said mentor management subsystem comprises:
the question bank management module is used for the instructor to establish, edit and manage specific training data and examination data under each training subject;
the student account management module is used by the instructor to create, edit, view, reset passwords for, statistically analyze, and evaluate the training quality of student accounts;
the data acquisition and labeling tool module is used by the instructor to label and annotate the image data in the training data and examination data in the question bank management module, and the labeled data is also used for optimizing the artificial intelligence algorithms and neural networks in the artificial intelligence subsystem; and
and the dynamic video intelligent acquisition and analysis tool module is used for assisting the instructor in reading ultrasonic videos, dynamically identifying target objects, capturing videos and images, and performing intelligent diagnosis-assisting analysis.
5. The system of claim 2, wherein the artificial intelligence subsystem comprises:
the ultrasonic image dynamic identification and prompting module is used for analyzing and learning ultrasonic image data through an artificial intelligence deep learning method, constructing an algorithm model and a neural network, deploying the constructed neural network on an artificial intelligence engine to dynamically process and identify the input ultrasonic image data, marking the position and region of a suspected lesion, and outputting a marked image;
the identification prompt tone control module is used for emitting a sound to remind the user that a target object has been found, and for dynamically marking the identified image frames during video playback;
the intelligent dynamic image acquisition and identification module is used for marking each identified dynamic image frame and automatically carrying out video recording;
the ultrasonic image intelligent auxiliary analysis module is used for learning ultrasonic image big data of confirmed lesions and normal tissues of a specific part by an artificial intelligent deep learning method, constructing an image recognition algorithm model and a neural network, deploying the neural network on an artificial intelligent engine, processing and recognizing an input ultrasonic image, and finally obtaining a recognition conclusion as a diagnosis auxiliary reference; and
and the ultrasonic dynamic image intelligent auxiliary analysis module is used for cooperating with the ultrasonic image dynamic identification and prompting module and the ultrasonic image intelligent auxiliary analysis module to perform, on the identified target image data, dynamic image identification, identification frame marking, classification and naming, and weighted intelligent auxiliary analysis of multiple frames.
6. The system for intelligent training of ultrasound image based on artificial intelligence as claimed in claim 2, wherein said system administrator subsystem comprises:
the system maintenance module is used for maintaining and controlling the whole training system, maintaining the configuration and the setting of the system and the interface, configuring the corresponding client and storing and backing up system data;
the account management module is used for creating, setting permissions for, editing, viewing, statistically analyzing, and resetting the passwords of all accounts in the system;
the training requirement making module is used for setting and evaluating work tasks, plans and training quality indexes of a training instructor or a teacher;
the question bank content auditing module is used for auditing the training content submitted by the instructor; and
and the database management module is used for managing related data including account information, training and assessment contents under various subjects and collected and labeled data in the training system.
7. An ultrasound image intelligent training application system based on artificial intelligence, comprising: at least two clients and at least one management server;
the at least two clients are a student client and a teacher client respectively;
the management server is used for providing information connection and data support for the student client to realize the functions of training subject selection, basic knowledge learning, process training, question bank training and practice training, and the teacher client to realize the functions of question bank management and student learning condition checking.
8. The system for ultrasound image intelligent training application based on artificial intelligence as claimed in claim 7, wherein the management server comprises:
the acquisition module is used for acquiring a training related request initiated by the student client or the teacher client;
the database is used for storing image training data corresponding to each training subject;
the data calling module is used for calling image training data under corresponding training subjects from the database according to the training subjects selected by the student client, acquiring basic knowledge training data or process training data from the called image training data according to a basic knowledge learning request or a process training request initiated by the student client, and sending the basic knowledge training data or the process training data to the student client;
the practical operation training module is used for receiving the ultrasonic image data uploaded by the teacher client, extracting standard data corresponding to the current scanning part from the image training data called by the data calling module, carrying out identification analysis on the ultrasonic image data to obtain a marked image, feeding the standard data corresponding to the scanning part and the marked image back to the teacher client, and generating prompt information; the system is also used for receiving a consulting request of the student client and feeding the standard data and the marked image back to the student client;
the question bank generating module is used for receiving the ultrasonic images or the ultrasonic pictures uploaded by the teacher client, generating a training question bank by taking the ultrasonic images or the ultrasonic pictures as training questions, generating an examination question bank by taking a plurality of ultrasonic images or ultrasonic picture combinations as examination questions, respectively identifying and analyzing target images and ultrasonic pictures in the ultrasonic images in the training question bank and the examination question bank to form machine answers, generating a corresponding standard answer bank, and storing the training question bank, the examination question bank and the standard answer bank into the database; and
the question bank training module is used for calling corresponding training questions or assessment questions from the database to the student client according to the request of the student client, receiving the manual answers uploaded by the student client, comparing and analyzing the manual answers with the corresponding machine answers to obtain a comparison result, and generating an assessment result according to the comparison result;
the learning data management module is used for storing relevant learning data of basic knowledge learning, process training, question bank training and practical exercise training completed by the student client, and feeding back the relevant learning data completed by the student client to the teacher client according to a student learning condition checking request initiated by the teacher client; and
the answer correction module is used for extracting an ultrasonic image or an ultrasonic picture corresponding to a machine answer to be corrected from the standard answer library according to a question library management request sent by the teacher client, feeding the ultrasonic image or the ultrasonic picture back to the teacher client, and receiving and storing a labeled image formed after target object sketching and parameter feature labeling are carried out on a target image or the ultrasonic picture in the ultrasonic image corresponding to the machine answer to be corrected and returned by the teacher client;
the training related requests comprise training subject selection, basic knowledge learning, process training, question bank training, practice training, question bank management and student learning condition checking requests.
9. The system for ultrasound image intelligent training application based on artificial intelligence of claim 8, wherein the student client comprises:
the training subject selection module is used for initiating a training subject selection request and selecting a training subject;
the first uploading module is used for uploading to the management server the ultrasonic image data or the manual answers generated by the students' analysis during question bank training;
the learning module is used for initiating basic knowledge learning, process training, question bank training or practical exercise training requests, receiving and displaying basic knowledge training data, process training data, standard data and labeled images corresponding to scanning positions and corresponding training questions or examination questions returned by the management server, and performing ultrasonic image training learning;
the consulting module is used for receiving and consulting the machine answers, the comparison results and the evaluation results fed back by the management server to obtain learning results;
and the prompt module is used for receiving the prompt information sent by the management server and sending a prompt tone.
10. The system for ultrasound image intelligent training application based on artificial intelligence of claim 8, wherein the teacher client comprises:
the second uploading module is used for uploading the ultrasonic images or the ultrasonic pictures of the typical cases to the management server;
the question bank editing module is used for initiating a question bank management request, and creating, deleting and modifying data in the question bank or correcting answers;
and the learning data viewing module is used for initiating a learning condition viewing request of the student and receiving and viewing the related learning data finished by the student client end fed back by the management server.
CN202011004430.2A 2020-09-22 2020-09-22 Ultrasonic image intelligent training method and system based on artificial intelligence and application system Pending CN112151155A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011004430.2A CN112151155A (en) 2020-09-22 2020-09-22 Ultrasonic image intelligent training method and system based on artificial intelligence and application system

Publications (1)

Publication Number Publication Date
CN112151155A true CN112151155A (en) 2020-12-29

Family

ID=73897504

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011004430.2A Pending CN112151155A (en) 2020-09-22 2020-09-22 Ultrasonic image intelligent training method and system based on artificial intelligence and application system

Country Status (1)

Country Link
CN (1) CN112151155A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113053194A (en) * 2021-02-28 2021-06-29 华中科技大学同济医学院附属协和医院 Doctor training system and method based on artificial intelligence and VR technology
CN113053194B (en) * 2021-02-28 2023-02-28 华中科技大学同济医学院附属协和医院 Physician training system and method based on artificial intelligence and VR technology
CN113112882A (en) * 2021-04-08 2021-07-13 郭山鹰 Ultrasonic image examination system
CN113487927A (en) * 2021-06-25 2021-10-08 褚信医学科技(苏州)有限公司 Imaging teaching assessment method based on intelligent voice recognition technology
CN113920396A (en) * 2021-10-08 2022-01-11 中国人民解放军军事科学院军事医学研究院 Method, system and equipment for quantitatively evaluating visual cognitive ability of special post personnel
CN117594196A (en) * 2023-11-22 2024-02-23 广州盛安医学检验有限公司 Pathological image scanning analysis system and method
CN117594196B (en) * 2023-11-22 2024-06-07 广州盛安医学检验有限公司 Pathological image scanning analysis system and method

Similar Documents

Publication Publication Date Title
CN112151155A (en) Ultrasonic image intelligent training method and system based on artificial intelligence and application system
Pinchi et al. Dental identification by comparison of antemortem and postmortem dental radiographs: influence of operator qualifications and cognitive bias
US20130065211A1 (en) Ultrasound Simulation Training System
US11328812B2 (en) Medical image processing apparatus, medical image processing method, and storage medium
US10140888B2 (en) Training and testing system for advanced image processing
US20140004488A1 (en) Training, skill assessment and monitoring users of an ultrasound system
Galusko et al. Hand-held ultrasound scanners in medical education: a systematic review
CN108230311A (en) A kind of breast cancer detection method and device
JP2005510326A (en) Image report creation method and system
Stember et al. Integrating eye tracking and speech recognition accurately annotates MR brain images for deep learning: proof of principle
Munoz et al. Development of a software that supports multimodal learning analytics: A case study on oral presentations
US20130100269A1 (en) System and Method for Assessing an Individual's Physical and Psychosocial Abilities
JP2852866B2 (en) Computer-aided image diagnosis learning support method
CN117558439A (en) Question interaction method and device based on large language model and related equipment
Sheehan et al. Simulation for competency assessment in vascular and cardiac ultrasound
CN113485555B (en) Medical image film reading method, electronic equipment and storage medium
CN114246611B (en) System and method for an adaptive interface for an ultrasound imaging system
US20050280651A1 (en) Apparatus and method for enhanced video images
CN109147927B (en) Man-machine interaction method, device, equipment and medium
CN113313359A (en) Evaluation method and device for image labeling diagnosis quality
Azari et al. Evaluation of simulated clinical breast exam motion patterns using marker-less video tracking
TWI774982B (en) Medical resource integration system, computer device and medical resource integration method
Simeoli et al. A comparison between digital and traditional tools to assess autism: effects on engagement and performance.
Qiao et al. A deep learning-based intelligent analysis platform for fetal ultrasound four-chamber views
CN110269641B (en) Ultrasonic imaging auxiliary guiding method, system, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220216

Address after: 215300 Room 601, building 008, No. 2001 Yingbin West Road, Bacheng Town, Kunshan City, Suzhou City, Jiangsu Province

Applicant after: Suzhou Shishang Medical Technology Co.,Ltd.

Address before: 200000 7K, 8519 Nanfeng Road, Fengxian District, Shanghai

Applicant before: Kestrel Intelligent Technology (Shanghai) Co.,Ltd.