CN111310851B - Artificial intelligence ultrasonic auxiliary system and application thereof - Google Patents

Artificial intelligence ultrasonic auxiliary system and application thereof

Info

Publication number
CN111310851B
CN111310851B (application CN202010137967.XA)
Authority
CN
China
Prior art keywords
section
image
module
ultrasonic
doctor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010137967.XA
Other languages
Chinese (zh)
Other versions
CN111310851A (en)
Inventor
陈欣
罗红
张波
李科君
夏姣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Wangwang Technology Co ltd
West China Second University Hospital of Sichuan University
Original Assignee
Chengdu Wangwang Technology Co ltd
West China Second University Hospital of Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Wangwang Technology Co ltd and West China Second University Hospital of Sichuan University
Priority to CN202010137967.XA
Publication of CN111310851A
Application granted
Publication of CN111310851B
Legal status: Active

Classifications

    • G06F18/2411 Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06T7/0012 Biomedical image inspection
    • G16H30/20 ICT specially adapted for the handling or processing of medical images, e.g. DICOM, HL7 or PACS
    • G06T2207/10132 Ultrasound image
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention relates to the field of ultrasonic medical imaging and provides an artificial intelligence ultrasonic auxiliary system and applications thereof. The system mainly comprises an ultrasonic machine, an image information system, a telemedicine application, an image collection module, an image transmission module, an image framing module, an automatic classification module, an automatic section selection module, a teaching assistance module, a reading prompt module, a quality control analysis module and the like. After the artificial-intelligence-based ultrasound auxiliary system is embedded in the ultrasonic machine, the image information system and the telemedicine system, junior doctors can get up to speed quickly, making up for the shortage of primary-level medical resources.

Description

Artificial intelligence ultrasonic auxiliary system and application thereof
Technical Field
The invention relates to the field of ultrasonic medical imaging, in particular to an artificial intelligence ultrasonic auxiliary system and application thereof.
Background
In the current field of ultrasound medicine, there is a tremendous supply and demand asymmetry.
From the doctors' perspective, a fast and accurate system is needed to help them determine quickly in daily work whether a patient is a positive or a negative case, to acquire automatically the standard sections that meet clinical requirements for negative cases, and to provide reading prompts for positive cases so that doctors can clearly recognize the relevant signs and reach an accurate diagnosis.
The current state of the products and services offered on the market is as follows:
At present, neither ultrasonic machines nor the image information systems connected to them provide artificial intelligence functions. During image acquisition the sonographer screens and judges cases by personal experience, selects standard sections for negative cases by personal experience and scanning technique, and identifies abnormalities in positive cases by naked-eye observation.
The main pain points of the current situation are as follows:
Long examination time: whether selecting a standard section for a negative case or diagnosing a positive case, the result depends heavily on the doctor's technique and judgment; a less experienced doctor takes a long time to select a section, and working efficiency is low.
Poor standardization: the section selection criteria for negative cases are entirely limited by the doctor's personal level and subjective judgment; under time pressure and other interfering factors, the sections a doctor selects do not necessarily meet clinical standards.
Prone to missed diagnosis and misdiagnosis: in the diagnostic workflow for positive cases, a small abnormality in the image that is not noticed in time can easily lead to misdiagnosis and even to medical accidents.
Lagging quality control: at present, ultrasonic quality control relies on after-the-fact manual spot checks, which cannot fully cover the images collected and retained by sonographers and cannot discover problems in time.
Lack of teaching assistance: current ultrasonic machines and their image systems have no teaching assistance functions and cannot support doctors' learning while they work.
Telemedicine, on the other hand, refers to remote diagnosis, treatment and consultation for patients in remote areas, on islands or on ships with poor medical conditions, based on computer, remote sensing, telemetry and remote control technologies and drawing fully on the medical expertise and equipment of large hospitals or specialized medical centers. Where ultrasound examination is involved, current telemedicine works in one of two ways: either the remote-end doctor manually acquires sections under the guidance of the consulting-end doctor and transmits them for remote consultation, or the consulting-end doctor acquires images by remotely controlling the ultrasonic probe through a robotic arm. Both modes have the following main pain points:
Limited reading: the reading accuracy of the consulting doctor depends on whether the ultrasound images acquired by the remote-end doctor meet clinical standards, and is therefore subject to the remote-end doctor's personal experience and level; because many of the acquired images do not meet the standards, the reading accuracy of the expert doctor is seriously affected.
Long duration and doctor fatigue: acquisition of standard ultrasound sections is limited by the remote-end doctor's technique and experience, so the consulting-end doctor must spend considerable time and energy on remote guidance; worried that the remote-end doctor's scanning technique will affect the diagnosis, the consulting-end doctor has to watch the real-time ultrasound video and the remote-end doctor's scanning technique at the same time, telling the remote-end doctor where to apply force, add pressure and adjust, which consumes a great deal of the consulting-end doctor's time and energy.
Prone to missed diagnosis and misdiagnosis: limited by time, personnel interference and image quality, the consulting-end doctor cannot observe the acquired images comprehensively and carefully, so some lesion areas are easily overlooked, creating risks of missed diagnosis and misdiagnosis.
Unreliable precision and safety of the robotic arm: remote operation through a robotic arm can hardly reach the image quality of on-site acquisition because of precision problems, and the remotely controlled arm lacks force feedback, so there is a danger of harming the patient by compressing organs.
Disclosure of Invention
Based on the above, the invention provides an artificial intelligence ultrasound auxiliary system and its application in ultrasonic machines, image information systems and telemedicine systems.
In order to achieve the above technical effects, the technical scheme of the application is as follows:
An artificial intelligence ultrasonic auxiliary system comprises an image collection module, an image transmission module, an image framing module, an automatic classification module, an automatic section selection module, a teaching assistance module, a reading prompt module and a quality control analysis module;
the image collection module: acquires an ultrasound image of the examined person during the sonographer's examination;
the image transmission module: transmits the ultrasound image acquired by the image collection module to the image framing module over a wired or wireless network;
the image framing module: automatically frames the received ultrasound image, dividing the ultrasound video into frame-by-frame ultrasound section images;
the automatic classification module: classifies the framed ultrasound section images into suspected cases and negative cases through feature matching.
Preferably, the automatic classification module performs the following steps on the framed ultrasound image (a code sketch of steps S1-S6 is given below):
S1, preprocess the input image, applying gamma transformation and mean filtering to the framed ultrasound image;
S2, establish coordinate axes on the image and randomly generate n equal-sized candidate regions;
S3, compute and normalize the pixel feature tensor of each candidate region;
S4, compare the pixel feature tensor of each candidate region with the standard training pixel feature tensor and obtain a similarity value for each region using an SVM algorithm;
S5, remove the 2 regions with the lowest similarity each time;
S6, repeat S4-S5 until a single final region remains;
S7, feed the region obtained in S6 into an interconnected fusion convolutional network to obtain images of suspected cases and of negative cases to be diagnosed.
Further, in the automatic section selection module:
ultrasound images whose matching rate with the standard sections required by the Prenatal Ultrasound Examination Guidelines (2012) of the Society of Ultrasound Physicians, Chinese Medical Doctor Association exceeds 90% are extracted as standard sections, images whose matching rate with those requirements is between 70% and 90% are treated as passing sections, and images whose matching rate is below 70% are treated as failing sections.
Preferably, ultrasound images with a matching rate above 90% are extracted first and provided to the sonographer, then passing sections with a matching rate between 70% and 90% are provided; failing sections are not provided to the sonographer.
Further, the teaching assistance module prompts the doctor during reading with text, symbols and sound, improving the doctor's skills.
Further, the system comprises a reading prompt module, which automatically boxes and marks abnormal lesion areas and performs automatic measurements, making it easier for the consulting-end doctor to identify abnormal lesion locations in the image and assisting diagnosis.
Further, for suspected cases, the reading prompt module helps the consulting-end doctor screen and determine whether the case is positive or negative.
Further, the system comprises a quality control analysis module, which performs intelligent comprehensive analysis of the images acquired by doctors and generates a quality control report for display.
Further, the image transmission module supports wireless and/or wired transmission, the wireless transmission comprising one or a combination of WiFi, 3G/4G/5G/6G, Bluetooth and microwave transmission, and the wired transmission comprising one or more of Ethernet, optical fiber, VGA, DVI, HDMI and DP.
During operation of the automatic section selection module, the range of ultrasound sections suitable for automatic selection includes the sections involved in level III prenatal ultrasound examination: the axial section at the level of the thalami, the axial section at the level of the lateral ventricles, the axial section at the level of the cerebellum, the nasolabial coronal section, the four-chamber view, the left ventricular outflow tract view, the right ventricular outflow tract view, the fetal heart rate trace (Doppler or M-mode), the upper abdominal axial section (abdominal circumference measurement plane), the axial section at the umbilical cord insertion into the abdominal wall, the axial section of the bladder at the level of the umbilical arteries, the axial section of both kidneys, the sagittal section of the spine, the humeral long-axis sections (left and right), the ulnar long-axis sections (left and right), the femoral long-axis sections (left and right), the tibiofibular long-axis sections (left and right) and the sagittal section of the cervical canal; and the sections included in routine adult physical examination: the portal sagittal section, the portal vein trunk section, the second porta hepatis section, the gallbladder section, the extrahepatic bile duct section, the left kidney long-axis section, the right kidney long-axis section, the spleen long-axis section (including the splenic hilum) and the pancreas section.
For the axial section at the level of the thalami, the anatomical structures used as features during feature extraction and feature matching include the third ventricle, the cavum septi pellucidi, the posterior horn of the lateral ventricle, the choroid plexus, the lateral (Sylvian) fissure, the caudate nucleus and the anterior horn of the lateral ventricle; for the axial section at the level of the cerebellum, the feature structures include the posterior cranial fossa, the cerebellar hemispheres, the cerebellar peduncles, the thalamus and the cerebellar vermis; for the four-chamber view, the feature structures include the left ventricle, right ventricle, left atrium and right atrium; for the upper abdominal axial section, the feature structures include the inferior vena cava, the abdominal aorta, the umbilical vein and the stomach bubble; for the axial section of the bladder at the level of the umbilical arteries, the feature structures include the umbilical arteries and the bladder; for the axial section of both kidneys, the feature structures include the left kidney, the right kidney and the spleen; for the femoral long-axis section, the feature structures include the vertebral arch and the vertebral body; for the axial section at the level of the lateral ventricles, the feature structures include the third ventricle, the posterior horn of the lateral ventricle, the choroid plexus, the thalamus, the caudate nucleus, the cavum septi pellucidi, the anterior horn of the lateral ventricle and the lateral (Sylvian) fissure; for the nasolabial coronal section, the feature structures include the philtrum, the nose, the upper lip, the lower lip and the mandible; for the left ventricular outflow tract view, the feature structures include the left ventricle, the right ventricle, the left atrium, the right atrium and the ascending aorta; for the right ventricular outflow tract view, the feature structures include the right ventricle, the ascending aorta and the main pulmonary artery; for the fetal heart rate trace, the feature structures include the posterior wall of the left ventricle, the interventricular septum, the left ventricle and the right ventricle.
The application also protects the use of the artificial intelligence ultrasonic auxiliary system in an ultrasonic machine: the ultrasonic machine comprises the artificial intelligence ultrasonic auxiliary system, which communicates with the existing functional components of the ultrasonic machine to achieve image collection, image transmission, image framing, automatic classification, automatic section selection, teaching assistance, reading prompts and quality control analysis.
The application also protects the use of the artificial intelligence ultrasonic auxiliary system in an image information system: the image information system comprises the artificial intelligence ultrasonic auxiliary system, which communicates with the existing functional components of the ultrasound image system to achieve image collection, image transmission, image framing, automatic classification, automatic section selection, teaching assistance, reading prompts and quality control analysis.
The application also protects the use of the artificial intelligence ultrasonic auxiliary system in a telemedicine system: the telemedicine system comprises the artificial intelligence ultrasonic auxiliary system, which communicates with the existing functional components of the telemedicine system to achieve image collection, image transmission, image framing, automatic classification, automatic section selection, teaching assistance, reading prompts and quality control analysis. Here, a telemedicine system refers to medical consultation activities such as remote interactive guidance, examination, diagnosis and treatment carried out between medical institutions by combining medical technology with communication, computer and network technologies.
After the above scheme is adopted, whether the artificial intelligence ultrasonic auxiliary system is used independently, built into an ultrasonic machine, combined with an external ultrasonic machine, or applied in a telemedicine system, the following beneficial effects can be achieved:
1. Making up for the shortage of primary-level medical resources: primary hospitals across the country lack doctors, and experienced sonographers in particular are in severely short supply; after the artificial-intelligence-based ultrasound auxiliary system is embedded in ultrasonic machines, image information systems and telemedicine systems, junior doctors can get up to speed quickly, making up for the shortage of primary-level medical resources.
2. Improved working efficiency: standard sections are selected quickly for negative cases and suspected cases are judged quickly and accurately, which improves working efficiency and reduces working intensity, while also greatly shortening the time a remote-end doctor needs to acquire standard sections.
3. Unified examination standards: automatic selection follows the unified standards embedded in the artificial intelligence model, avoiding subjective differences in individual judgment and improving accuracy.
4. Prevention of missed diagnosis and misdiagnosis: during examination and review of positive and suspicious cases, the artificial intelligence avoids missed diagnosis or misdiagnosis caused by external factors interfering with the doctor's judgment.
5. Strengthened quality control management: through intelligent automatic selection, the images provided to doctors are those screened by the artificial intelligence with a matching rate above 80%, ensuring the quality control of ultrasound images.
6. Improved doctor skills: the reading prompt module provides doctors with reading and observation prompts, and an online assistance module can also be provided to improve doctors' skills.
7. Quality control analysis reports: quality control analysis of the retained ultrasound images by the artificial intelligence module is real-time, covers all images and applies a unified standard.
8. Enhanced compatibility: as a functional module, the invention can be embedded in ultrasonic machines and their associated image information systems, provides an open interface, and is compatible with ultrasonic machines and associated ultrasound image information systems of various brands in the industry, as well as with telemedicine systems and related equipment of various brands.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention, and a person skilled in the art could obtain other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an ultrasound machine with an artificial intelligence ultrasound assistance system built into it in an embodiment of the invention.
FIG. 2 is a schematic diagram of an ultrasound image information system incorporating an artificial intelligence ultrasound assistance system in accordance with an embodiment of the present invention.
FIG. 3 is a schematic diagram of an artificial intelligence ultrasound assistance system.
Fig. 4 is a schematic structural diagram of a teaching assistance module 80 according to an embodiment of the invention.
Reference numerals in the drawings: 10: image collection module; 20: image transmission module; 30: image framing module; 40: automatic classification module; 50: automatic section selection module; 51: standard section; 52: passing section; 53: failing section; 60: reading prompt module; 80: teaching assistance module; 90: quality control analysis module.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Example 1
Referring to FIGS. 1-2, FIG. 1 shows the artificial intelligence ultrasound auxiliary system embedded in an ultrasonic machine. The embedded system communicates with the existing functional components of the ultrasonic machine (such as the probe, the ultrasound transmitting/receiving device, signal processing and image display), so that with the assistance of the artificial intelligence module the ultrasonic machine can automatically classify cases, automatically select standard sections for negative cases, automatically provide reading prompts for suspected cases, and so on. The ultrasonic machines of the present invention include, but are not limited to, the types of ultrasonic machines disclosed in the prior art.
Example 2
FIG. 2 is a schematic diagram of the artificial intelligence ultrasound auxiliary system embedded in the various image information systems connected to an ultrasonic machine. The system communicates with the existing functional components of the image information system (such as the image processing system and the report management system), so that with the assistance of the artificial intelligence module the ultrasound image system can automatically classify cases, automatically select standard sections for negative cases, automatically provide reading prompts for suspected cases, and so on. The image information systems in the present invention include the various image information systems already disclosed in the prior art.
This embodiment mainly comprises two stages, a model training step and a model recognition step; model training mainly comprises three steps: data acquisition, feature labeling and model training (a minimal training sketch follows).
Data acquisition step: acquire image sample data for the current need;
Feature labeling step: label the features of the lesion area and other relevant areas;
Model training step: select labeled sample data for model training, and tune and improve the model parameters during training to obtain the best-performing feature classification model.
Example 3
Referring to FIGS. 3-4, the artificial intelligence ultrasound auxiliary system of this embodiment is applied in embodiments 1 and 2 and comprises the following functional modules.
The image collection module: acquires, in real time from the ultrasonic machine, the ultrasound images collected by the doctor. Specifically, its main function is to collect the ultrasound images acquired during the examination: when the sonographer holds the ultrasound probe to perform an examination, the probe acquires ultrasound images of the organ under examination from the area being examined, and the image collection module obtains these images from the ultrasonic machine in real time;
The image transmission module: transmits the ultrasound image acquired by the image collection module to the image framing module over a wired or wireless network;
The image framing module: automatically frames the received ultrasound image, dividing the ultrasound video into frame-by-frame ultrasound section images. Specifically, after receiving the transmitted image, the image framing module frames the ultrasound video and divides it into frame-by-frame two-dimensional ultrasound images;
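A minimal OpenCV sketch of the framing step, assuming the transmitted ultrasound video can be opened with cv2.VideoCapture (a file path or stream URL); the output is the frame-by-frame sequence of two-dimensional images handed to the automatic classification module.

```python
import cv2


def frame_ultrasound_video(source):
    """Split an ultrasound video into single-frame two-dimensional images."""
    capture = cv2.VideoCapture(source)
    frames = []
    while True:
        ok, frame = capture.read()
        if not ok:  # end of the video stream
            break
        # B-mode ultrasound content is effectively grayscale
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    capture.release()
    return frames
```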
The automatic classification module: classifies the framed ultrasound section images through feature matching into suspected cases and negative cases to be diagnosed. The specific steps are as follows (an IoU sketch is given after this list):
S1, preprocess the input image, applying gamma transformation and mean filtering to the framed ultrasound image;
S2, establish coordinate axes on the image and randomly generate n equal-sized candidate regions;
S3, compute and normalize the pixel feature tensor of each candidate region;
S4, compare the pixel feature tensor of each candidate region with the standard training pixel feature tensor (a convolution kernel performs dilated convolution on the image) and obtain a similarity value for each region using an SVM algorithm;
S5, remove the 2 regions with the lowest similarity each time;
S6, repeat S4-S5 until a single final region remains;
S7, feed the region obtained in S6 into an interconnected fusion convolutional network to obtain the images of suspected cases and negative cases: after encoder downsampling, skip-connection feature fusion and decoder upsampling back to the original resolution, the IoU of the mask is computed; an IoU greater than 0.9 indicates a suspected case and an IoU less than 0.9 indicates a negative case to be diagnosed (IoU, short for Intersection over Union, is the ratio of the overlap between the predicted mask and the real mask to their union).
If the doctor confirms a negative case to be diagnosed, the process moves to the automatic section selection step; if the case is suspected, the process moves to the reading prompt step, where the lesion area is segmented and indicated to the doctor;
the automatic film selecting module comprises: the automatic slice selecting module is used for extracting an ultrasonic image with the matching rate of more than 90% with the standard tangent plane as the standard tangent plane, wherein the ultrasonic image with the matching rate of 70-90% with the standard tangent plane is used as the passing tangent plane, and the matching rate is lower than 70% as the failing tangent plane.
Specifically, the automatic slice selecting module is used for extracting an ultrasonic image with a standard slice matching rate of more than 90% corresponding to the requirements of the society of ultrasound doctors and China doctors 'society' prenatal ultrasonic examination guide (2012) as a standard slice, with a standard slice matching rate of between 70 and 90% corresponding to the requirements of the society of ultrasound doctors and China doctors 'society' prenatal ultrasonic examination guide (2012) as a passing slice, with a standard slice matching rate of less than 70% corresponding to the requirements of the society of ultrasound doctors and China doctors 'society' prenatal ultrasonic examination guide (2012), ultrasound images with a standard section matching rate of more than 90% corresponding to the requirements of the society of sonographers and the society of Chinese doctors, namely, prenatal ultrasound examination guidelines (2012), are preferentially extracted and provided to the sonographer, then a passing section with a standard section matching rate of between 70 and 90% corresponding to the requirements of the society of sonographers and the society of Chinese doctors, namely, prenatal ultrasound examination guidelines (2012), is provided to the sonographer, and a failing section with a standard section matching rate of less than 70% corresponding to the requirements of the society of sonographer and the society of Chinese doctors, namely, the society of Chinese doctors and the society of ultrasound examination guidelines (2012), is not provided to the doctor, and prompt is given to the doctor to enable the doctor to continue to acquire again.
And extracting an ultrasonic image with the matching rate of more than 90% with the standard tangent plane as the standard tangent plane, wherein the ultrasonic image with the matching rate of 70-90% with the standard tangent plane is used as the passing tangent plane, and the matching rate is lower than 70% as the failing tangent plane.
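A sketch of the 90%/70% routing rule described above; the matching rate is assumed to be given as a fraction in [0, 1], and the category names and display ordering are illustrative.

```python
def route_section(matching_rate: float) -> str:
    """Map a standard-section matching rate to the three categories used by the module."""
    if matching_rate > 0.90:
        return "standard_section"   # shown to the sonographer first
    if matching_rate >= 0.70:
        return "passing_section"    # shown after the standard sections
    return "failing_section"        # not shown; the doctor is prompted to re-acquire


def sections_for_doctor(rated_frames):
    """rated_frames: iterable of (frame_id, matching_rate); returns the display order."""
    standard = [(f, r) for f, r in rated_frames if r > 0.90]
    passing = [(f, r) for f, r in rated_frames if 0.70 <= r <= 0.90]
    # best matches first within each category; failing sections are never returned
    return sorted(standard, key=lambda t: -t[1]) + sorted(passing, key=lambda t: -t[1])
```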
For negative cases, two further operations are performed. 1. Image preprocessing: denoise the image with Gaussian filtering and remove noise with Gabor filtering to obtain the useful information. 2. Extraction of temporal features from the image sequence: encode each frame and feed it into the deep convolutional neural network discrimination submodule.
The deep convolutional neural network discrimination submodule comprises (a PyTorch sketch follows this list):
First layer: 32 convolution kernels of size 3×3, followed by pooling;
Second layer: 64 convolution kernels of size 3×3, followed by pooling, with a residual module added;
Third layer: 128 convolution kernels of size 3×3, followed by pooling, with a residual module added;
Fourth layer: 256 convolution kernels of size 3×3, followed by pooling, with a residual module added;
Fifth layer: 512 convolution kernels of size 3×3, followed by pooling and a fully connected layer.
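A possible PyTorch rendering of the five-layer discrimination submodule; the exact residual block design, the pooling choice, the input channel count and the two-class fully connected head are assumptions, since the text only specifies 3×3 kernels, the channel counts 32/64/128/256/512, pooling after each layer and residual modules on the middle layers.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut (channel count unchanged)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class DiscriminationSubmodule(nn.Module):
    """3x3 conv stages with 32/64/128/256/512 channels, pooling after each stage,
    residual modules on stages 2-4 and a fully connected head after stage 5."""
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()

        def stage(cin, cout, residual):
            layers = [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                      nn.MaxPool2d(2)]
            if residual:
                layers.append(ResidualBlock(cout))
            return nn.Sequential(*layers)

        self.features = nn.Sequential(
            stage(in_channels, 32, residual=False),   # first layer
            stage(32, 64, residual=True),             # second layer
            stage(64, 128, residual=True),            # third layer
            stage(128, 256, residual=True),           # fourth layer
            stage(256, 512, residual=False),          # fifth layer
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(512, num_classes))

    def forward(self, x):
        return self.head(self.features(x))


# usage sketch: logits = DiscriminationSubmodule()(torch.randn(1, 1, 224, 224))
```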
The output feature tensors of the two modules are fused to obtain the final discrimination basis for comparison and matching against the input ultrasound images; the ultrasound images with a matching rate above 90% are extracted first and provided to the sonographer, then the passing sections with a matching rate between 70% and 90% are provided, and the failing sections are not provided to the doctor.
During the examination, the doctor judges the section selected by the automatic section selection module. If it is approved, that frame of the image is used as a standard section or passing section for the next operation; if it is not approved, the automatic section selection module continues selecting until the doctor is satisfied. In addition, during automatic selection, if the matching rate of the images acquired by the doctor is found to be below 70%, the doctor is prompted and can continue acquiring images.
The teaching assistance module: prompts the doctor with text, symbols and sound about what to observe in the image, improving the doctor's skills. Specifically, for negative cases, after the automatic section selection module has helped the doctor obtain a standard section, that frame of the ultrasound image is transmitted to the teaching assistance module, and a previously trained teaching assistance model based on a convolutional neural network identifies the anatomical structures of the organs contained in the frame.
The doctor is then prompted with text, symbols and sound about what to observe in the image; for example, in a four-chamber view sonogram the doctor is prompted to pay attention to the cardiac position, the direction of the apex and so on, so that the doctor keeps learning and improving during daily work.
During automatic selection, the range of ultrasound sections suitable for automatic selection includes the sections involved in level III prenatal ultrasound examination: the axial section at the level of the thalami, the axial section at the level of the lateral ventricles, the axial section at the level of the cerebellum, the nasolabial coronal section, the four-chamber view, the left ventricular outflow tract view, the right ventricular outflow tract view, the fetal heart rate trace (Doppler or M-mode), the upper abdominal axial section (abdominal circumference measurement plane), the axial section at the umbilical cord insertion into the abdominal wall, the axial section of the bladder at the level of the umbilical arteries, the axial section of both kidneys, the sagittal section of the spine, the humeral long-axis sections (left and right), the ulnar long-axis sections (left and right), the femoral long-axis sections (left and right), the tibiofibular long-axis sections (left and right) and the sagittal section of the cervical canal; and the sections included in routine adult physical examination: the portal sagittal section, the portal vein trunk section, the second porta hepatis section, the gallbladder section, the extrahepatic bile duct section, the left kidney long-axis section, the right kidney long-axis section, the spleen long-axis section (including the splenic hilum) and the pancreas section.
For the axial section at the level of the thalami, the anatomical structures used as features during feature extraction and feature matching include the third ventricle, the cavum septi pellucidi, the posterior horn of the lateral ventricle, the choroid plexus, the lateral (Sylvian) fissure, the caudate nucleus and the anterior horn of the lateral ventricle; for the axial section at the level of the cerebellum, the feature structures include the posterior cranial fossa, the cerebellar hemispheres, the cerebellar peduncles, the thalamus and the cerebellar vermis; for the four-chamber view, the feature structures include the left ventricle, right ventricle, left atrium and right atrium; for the upper abdominal axial section, the feature structures include the inferior vena cava, the abdominal aorta, the umbilical vein and the stomach bubble; for the axial section of the bladder at the level of the umbilical arteries, the feature structures include the umbilical arteries and the bladder; for the axial section of both kidneys, the feature structures include the left kidney, the right kidney and the spleen; for the femoral long-axis section, the feature structures include the vertebral arch and the vertebral body; for the axial section at the level of the lateral ventricles, the feature structures include the third ventricle, the posterior horn of the lateral ventricle, the choroid plexus, the thalamus, the caudate nucleus, the cavum septi pellucidi, the anterior horn of the lateral ventricle and the lateral (Sylvian) fissure; for the nasolabial coronal section, the feature structures include the philtrum, the nose, the upper lip, the lower lip and the mandible; for the left ventricular outflow tract view, the feature structures include the left ventricle, the right ventricle, the left atrium, the right atrium and the ascending aorta; for the right ventricular outflow tract view, the feature structures include the right ventricle, the ascending aorta and the main pulmonary artery; for the fetal heart rate trace, the feature structures include the posterior wall of the left ventricle, the interventricular septum, the left ventricle and the right ventricle.
Example 4
Referring to FIG. 3, the artificial intelligence ultrasound auxiliary system of this embodiment can be applied in embodiments 1, 2 and 5 and comprises the following functional modules, which handle the suspected cases produced by the automatic classification module of embodiment 3.
The reading prompt module: automatically boxes and marks abnormal lesion areas and performs automatic measurements, making it easier for the consulting-end doctor to identify abnormal anatomical structures in the image and assisting accurate diagnosis. Specifically, for suspected cases, two operations are performed. 1. Automatic boxing: image features from high scale to low scale are collected and concatenated, the image convolution network derives a suspected lesion area from these features, and a convolutional regression network then outputs precise coordinates of the suspected lesion area for box marking. 2. Prompting: the anatomical name of the suspected lesion area is added, necessary explanations are added by comparison with related content in the feature library, and the suspected lesion area is overlaid appropriately; if the anatomical features of the organs contained in the suspected lesion area match the dysplasia library or the abnormal lesion feature library with a very high matching rate, a prompt is given to the doctor in the form of text, symbols or sound, helping the doctor reach a final accurate diagnosis of positive or negative cases and reducing the risk of missed diagnosis and misdiagnosis.
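A simple OpenCV sketch covering only the overlay part of the reading prompt: drawing the box, the anatomical label and an optional measurement returned by the detection networks onto the frame. The detection networks themselves are not sketched, the frame is assumed to be a BGR image, and the box is assumed to be given in pixel coordinates (x, y, width, height).

```python
import cv2


def overlay_lesion_prompt(frame, box, label, measurement_mm=None):
    """Draw a suspected-lesion box with its anatomical name and an optional measurement."""
    x, y, w, h = box
    annotated = frame.copy()
    cv2.rectangle(annotated, (x, y), (x + w, y + h), color=(0, 0, 255), thickness=2)
    text = label if measurement_mm is None else f"{label}: {measurement_mm:.1f} mm"
    cv2.putText(annotated, text, (x, max(y - 8, 12)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1, cv2.LINE_AA)
    return annotated
```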
The quality control analysis module: performs intelligent comprehensive analysis of the images acquired by the remote-end doctor and generates reports for display. Specifically, the ultrasound images collected and retained by a doctor, whether selected manually by the doctor or acquired automatically by the automatic section selection module, are transmitted through the image transmission module to the quality control analysis module. A trained quality control analysis model based on a convolutional neural network compares the clarity, the section standardization and the doctor's image annotations of each frame against reference features to obtain an evaluation score, which is then displayed in various formats as required to produce analysis report 1, analysis report 2, ..., analysis report N. Examples include department cross-sectional reports: comparison of each doctor's overall ultrasound image quality control score in the current month, and comparison of each doctor's quality control score for a particular section (for example, the section of both kidneys) or for a particular anatomical structure within a section; department longitudinal reports: comparison of the department's overall quality control scores by month and by day, and of a particular section's scores by month and by day; individual cross-sectional reports: comparison of each doctor's quality control scores for each section by month and by day, and for each examined patient by month and by day; and individual longitudinal reports: comparison of an individual doctor's overall quality control scores by month and by day, and of each of that doctor's ultrasound images by month and by day.
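An illustrative pandas sketch of how per-frame quality control scores could be rolled up into the department-level and individual report forms listed above; the column names (doctor, section, date, qc_score) and the monthly granularity are assumptions.

```python
import pandas as pd


def quality_control_reports(scores: pd.DataFrame) -> dict:
    """scores columns (assumed): doctor, section, date, qc_score."""
    scores = scores.copy()
    scores["month"] = pd.to_datetime(scores["date"]).dt.to_period("M")
    return {
        # department cross-sectional: each doctor's overall score per month
        "department_cross": scores.groupby(["month", "doctor"])["qc_score"].mean().unstack(),
        # department longitudinal: the whole department's score month by month
        "department_long": scores.groupby("month")["qc_score"].mean(),
        # individual cross-sectional: each doctor's score per section per month
        "individual_cross": scores.groupby(["doctor", "month", "section"])["qc_score"].mean(),
        # individual longitudinal: each doctor's overall score month by month
        "individual_long": scores.groupby(["doctor", "month"])["qc_score"].mean(),
    }
```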
Example 5
Referring to FIGS. 3-4, this embodiment of the invention provides an artificial intelligence ultrasound auxiliary system applied in a telemedicine system. It should be understood by those skilled in the art that the artificial intelligence auxiliary system is embedded in the telemedicine system and communicates with all of its existing functional components, so that with the assistance of the artificial intelligence module the functions of automatic case classification, automatic selection of standard sections for negative cases, automatic reading prompts for suspected cases and the like are realized.
This embodiment mainly comprises two stages, a model training step and a model recognition step; model training mainly comprises three steps: data acquisition, feature labeling and model training.
1) Data acquisition step: acquire image sample data for the current need.
2) Feature labeling step: label the features of the lesion area and other relevant areas.
3) Model training step: select labeled sample data for model training, and tune and improve the model parameters during training to obtain the best-performing feature classification model.
Those skilled in the art will appreciate that all or part of the above system embodiments may be implemented by a computer program stored on a non-transitory computer-readable storage medium; when executed, the program may comprise the steps of the method and module embodiments described above. Any reference to memory, storage, a database or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of functional units and modules is illustrated; in practical applications, the above functions may be assigned to different functional units and modules as needed, i.e. the internal structure of the system may be divided into different functional units or modules to perform all or part of the functions described above.
The above embodiments are intended only to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are included in the protection scope of the invention.

Claims (12)

1. An artificial intelligence ultrasound auxiliary system, characterized in that it comprises an image collection module, an image transmission module, an image framing module, an automatic classification module and an automatic section selection module connected in sequence by signal transmission, wherein:
the image collection module: acquires an ultrasound image of the examined person during the examination;
the image transmission module: transmits the ultrasound image acquired by the image collection module to the image framing module over a wired or wireless network;
the image framing module: receives the transmitted ultrasound image and divides it into frame-by-frame ultrasound sections;
the automatic classification module: classifies the framed ultrasound images through feature matching into negative cases to be diagnosed and suspected cases, with the final determination of negative or positive cases made by the doctor; the working steps of the automatic classification module are: 1) preprocess the input image with gamma transformation and mean filtering; 2) establish coordinate axes on the image and randomly generate n equal-sized candidate regions; 3) compute and normalize the pixel feature tensor of each candidate region; 4) compare the pixel feature tensor of each candidate region with the standard training pixel feature tensor and obtain a similarity value for each region by an SVM method; 5) remove the 2 regions with the lowest similarity each time; 6) repeat 4)-5) until a single final region remains; 7) feed the region obtained in step 6) into an interconnected fusion convolutional network; after encoder downsampling, skip-connection feature fusion and decoder upsampling back to the original resolution, compute the IoU of the mask, an IoU greater than 0.9 indicating a suspected case and an IoU less than 0.9 indicating a negative case to be diagnosed, where IoU is the ratio of the overlap between the predicted mask and the real mask to their union;
the automatic section selection module: for the ultrasound images of negative cases to be diagnosed, after confirmation by the doctor, extracts images whose matching rate with the standard section exceeds 90% as standard sections, treats images with a matching rate between 70% and 90% as passing sections, and treats images with a matching rate below 70% as failing sections; the images with a matching rate above 90% are extracted first and provided to the sonographer, then the passing sections with a matching rate between 70% and 90% are provided, and the failing sections are not provided to the doctor; if no ultrasound image meeting a matching rate above 70% is found in the comparison, a prompt is returned to the doctor stating that no matching result was obtained and that images need to be acquired again.
2. The artificial intelligence ultrasound auxiliary system of claim 1, characterized in that it further comprises a reading prompt module: for suspected cases, the reading prompt module automatically boxes and marks abnormal lesion areas and performs automatic measurements, making it easier for the consulting-end doctor to identify abnormal anatomical structures in the image.
3. The artificial intelligence ultrasound auxiliary system of claim 1, characterized in that it further comprises a teaching assistance module applicable to four situations: negative cases to be diagnosed, suspected cases, negative cases and positive cases; the four types of ultrasound images are transmitted to the teaching assistance module, a trained teaching assistance model based on a convolutional neural network identifies the anatomical structure of each organ contained in the frame, and the doctor is prompted with text, symbols and sound about what to observe in the image.
4. The artificial intelligence ultrasound auxiliary system of claim 1, characterized in that it further comprises a quality control analysis module, which performs intelligent comprehensive analysis of the images acquired by the remote-end doctor and generates reports for display.
5. The artificial intelligence ultrasound auxiliary system of claim 1, characterized in that the image transmission module supports wireless and/or wired transmission, the wireless transmission comprising one or more of WiFi, 3G/4G/5G/6G, Bluetooth and microwave transmission, and the wired transmission comprising one or more of Ethernet, optical fiber, VGA, DVI, HDMI and DP.
6. The artificial intelligence ultrasound auxiliary system of claim 1, characterized in that the image framing module serves as the basis of the automatic classification step: the subsequent automatic classification, automatic section selection, teaching assistance, reading prompts and quality control analysis all rely on automatic framing, which divides the video stream of the ultrasonic machine into single-frame pictures so that the system works on the basis of single-frame pictures.
7. The artificial intelligence ultrasound assistance system of claim 5, wherein: the automatic classification module performs feature extraction and comparison on the framed images: frames with no lesion features are listed as negative cases to be diagnosed, and frames with lesion features are listed as suspected cases; the doctor then makes the final confirmation, confirming the negative cases and screening the suspected cases into positive or negative cases.
8. The artificial intelligence ultrasound assistance system of claim 1, wherein: for negative cases, automatic film selection is performed in two steps: 1) image preprocessing: image denoising, Gaussian filtering and Gabor filtering to remove noise and retain the useful information; 2) extraction of temporal features from the sequential images: each frame is encoded and fed into a deep convolutional neural network discrimination submodule, and the output feature tensors of the two submodules are fused to obtain the final discrimination basis used for contrast matching of the input ultrasonic images.
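A minimal sketch of preprocessing step 1) using OpenCV's Gaussian blur and Gabor kernels; the kernel sizes, sigma and the four orientations are illustrative assumptions, as the claim only names Gaussian and Gabor filtering.

```python
import cv2
import numpy as np


def preprocess_frame(gray):
    """Denoise with a Gaussian blur, then apply a small Gabor filter bank."""
    denoised = cv2.GaussianBlur(gray, (5, 5), 1.0)
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):  # 4 orientations
        # getGaborKernel(ksize, sigma, theta, lambd, gamma, psi)
        kernel = cv2.getGaborKernel((15, 15), 3.0, theta, 8.0, 0.5, 0)
        responses.append(cv2.filter2D(denoised, cv2.CV_32F, kernel))
    return denoised, np.stack(responses)          # one response map per orientation
```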
9. The artificial intelligence ultrasound assistance system of claim 1, wherein: the automatic film selection module extracts ultrasonic images whose matching rate with the standard sections defined in the Prenatal Ultrasound Examination Guidelines (2012) of the Chinese Medical Doctor Association exceeds 90% as standard sections, takes images with a matching rate between 70% and 90% against those guideline standard sections as passing sections, and takes images with a matching rate below 70% as failing sections; the standard sections are extracted preferentially and provided to the sonographer, the passing sections are provided next, and the failing sections are not provided to the doctor.
10. The artificial intelligence ultrasound assistance system of claim 1, wherein: after the automatic film selection module selects a standard section or a passing section, the selected section is presented to the doctor; if the doctor approves, the frame is kept as the standard section or passing section and the next operation proceeds; if the doctor does not approve, the automatic film selection module continues selecting until the doctor is satisfied with the standard section or passing section.
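A compact sketch of the approve-or-continue loop in claim 10; `doctor_approves` stands in for the actual user-interface interaction and is an assumption, not part of the patent.

```python
def confirm_with_doctor(candidate_frames, doctor_approves):
    """Present candidate standard/passing sections one by one until the doctor approves one.

    `candidate_frames` is an iterable of frames ordered by the selection module;
    `doctor_approves` is a callable returning True or False for a presented frame.
    """
    for frame in candidate_frames:
        if doctor_approves(frame):
            return frame        # approved: retained as the standard/passing section
    return None                 # nothing approved: selection continues or images are re-acquired
```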
11. The artificial intelligence ultrasound assistance system of claim 1, wherein: when the automatic film selection module selects films automatically, the range of ultrasonic image sections to which automatic selection applies comprises the sections involved in the level-III prenatal ultrasound examination: transverse section at the level of the thalamus, transverse section at the level of the lateral ventricles, transverse section at the level of the cerebellum, nasolabial coronal section, four-chamber heart section, left ventricular outflow tract section, right ventricular outflow tract section, fetal heart rate chart (Doppler or M-mode), upper abdominal transverse section (abdominal circumference measurement section), transverse section of the umbilical cord insertion into the abdominal wall, bladder transverse section at the level of the umbilical arteries, transverse section of both kidneys, sagittal section of the spine, left humerus long-axis section, right humerus long-axis section, left ulna long-axis section, right ulna long-axis section, left femur long-axis section, right femur long-axis section, left tibiofibular long-axis section, right tibiofibular long-axis section, sagittal section of the pregnant woman's cervical canal, sagittal section of the portal vein, main portal vein section, transverse section of the second hepatic porta, gallbladder section, long-axis section of the extrahepatic bile duct, left kidney long-axis section, right kidney long-axis section, spleen long-axis section including the splenic hilum, and pancreas long-axis section.
12. The artificial intelligence ultrasound assistance system of claim 11, wherein: for the transverse section at the level of the thalamus, the anatomical structures used as features during feature extraction and feature matching comprise the third ventricle, cavum septi pellucidi, posterior horn of the lateral ventricle, choroid plexus, lateral sulcus, caudate nucleus and anterior horn of the lateral ventricle; for the transverse section at the level of the cerebellum, the feature structures comprise the posterior cranial fossa, cerebellar hemispheres, cerebellar peduncles, thalamus and cerebellar vermis; for the four-chamber heart section, the feature structures comprise the left ventricle, right ventricle, left atrium and right atrium; for the upper abdominal transverse section, the feature structures comprise the inferior vena cava, abdominal aorta, umbilical vein and stomach bubble; for the bladder transverse section, the feature structures comprise the umbilical arteries and the bladder; for the transverse section of both kidneys, the feature structures comprise the left kidney, right kidney and spleen; for the femur long-axis section, the feature structures comprise the vertebral arch and vertebral body; for the transverse section at the level of the lateral ventricles, the feature structures comprise the third ventricle, posterior horn of the lateral ventricle, choroid plexus, thalamus, caudate nucleus, cavum septi pellucidi, anterior horn of the lateral ventricle and lateral sulcus; for the nasolabial coronal section, the feature structures comprise the philtrum, nose, upper lip, lower lip and mandible; for the left ventricular outflow tract section, the feature structures comprise the left ventricle, right ventricle, left atrium, right atrium and ascending aorta; for the right ventricular outflow tract section, the feature structures comprise the right ventricle, ascending aorta and main pulmonary artery; for the fetal heart rate chart, the feature structures comprise the posterior wall of the left ventricle, the interventricular septum, the left ventricle and the right ventricle.
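As an illustration of how the per-section feature sets in claim 12 might be represented for lookup during feature matching, a minimal sketch; the dictionary representation, the subset of sections shown, and the simple matching-rate formula are assumptions, with the structure names taken from the claim text above.

```python
# Subset of the section -> feature-structure mapping from claim 12, used here only
# as an illustrative lookup table; the patent does not prescribe this representation.
SECTION_FEATURES = {
    "transthalamic transverse section": [
        "third ventricle", "cavum septi pellucidi", "posterior horn of lateral ventricle",
        "choroid plexus", "lateral sulcus", "caudate nucleus", "anterior horn of lateral ventricle",
    ],
    "transcerebellar transverse section": [
        "posterior cranial fossa", "cerebellar hemispheres", "cerebellar peduncles",
        "thalamus", "cerebellar vermis",
    ],
    "four-chamber heart section": ["left ventricle", "right ventricle", "left atrium", "right atrium"],
    "upper abdominal transverse section": ["inferior vena cava", "abdominal aorta", "umbilical vein", "stomach bubble"],
}


def required_features(section_name):
    """Return the anatomical structures a frame must show to match the named section."""
    return SECTION_FEATURES.get(section_name, [])


def matching_rate(detected_structures, section_name):
    """Fraction of the section's required structures detected in a frame; a simple
    stand-in for the patent's matching rate."""
    required = required_features(section_name)
    if not required:
        return 0.0
    hits = sum(1 for s in required if s in detected_structures)
    return hits / len(required)
```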
CN202010137967.XA 2020-03-03 2020-03-03 Artificial intelligence ultrasonic auxiliary system and application thereof Active CN111310851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010137967.XA CN111310851B (en) 2020-03-03 2020-03-03 Artificial intelligence ultrasonic auxiliary system and application thereof

Publications (2)

Publication Number Publication Date
CN111310851A CN111310851A (en) 2020-06-19
CN111310851B true CN111310851B (en) 2023-04-28

Family

ID=71161965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010137967.XA Active CN111310851B (en) 2020-03-03 2020-03-03 Artificial intelligence ultrasonic auxiliary system and application thereof

Country Status (1)

Country Link
CN (1) CN111310851B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111820950A (en) * 2020-06-23 2020-10-27 无锡祥生医疗科技股份有限公司 Personalized information determination device and ultrasonic training method
CN111754485A (en) * 2020-06-24 2020-10-09 成都市温江区人民医院 Artificial intelligence ultrasonic auxiliary system for liver
CN111860636A (en) * 2020-07-16 2020-10-30 无锡祥生医疗科技股份有限公司 Measurement information prompting method and ultrasonic training method
CN111798967A (en) * 2020-07-18 2020-10-20 贵州精准健康数据有限公司 Wisdom ultrasonic testing system
CN112102925A (en) * 2020-09-11 2020-12-18 高容科技(上海)有限公司 Supplementary minimal access surgery artificial intelligence platform in
CN112641466A (en) * 2020-12-31 2021-04-13 北京小白世纪网络科技有限公司 Ultrasonic artificial intelligence auxiliary diagnosis method and device
CN112992338A (en) * 2021-02-08 2021-06-18 青岛大学附属医院 Learning system combining ultrasonic inspection technology and artificial intelligence technology
CN112991289B (en) * 2021-03-10 2024-03-26 深圳市鹭鸣科技有限公司 Processing method and device for standard section of image
CN113035329A (en) * 2021-03-22 2021-06-25 杭州联众医疗科技股份有限公司 Medical image quality control system
CN113469388B (en) * 2021-09-06 2021-11-23 江苏中车数字科技有限公司 Maintenance system and method for rail transit vehicle
CN113741209A (en) * 2021-09-27 2021-12-03 成都脉讯科技有限公司 Intelligent AI quality control system for obstetrics and gynecology department
CN114334095A (en) * 2021-12-31 2022-04-12 深圳度影医疗科技有限公司 Intelligent identification method and system for ultrasonic examination and terminal equipment
CN114783572A (en) * 2022-04-07 2022-07-22 西安和华瑞博科技有限公司 Medical image processing method and device and medical image transmission system
CN114783575B (en) * 2022-04-20 2023-09-29 广州唯顶软件科技有限公司 Medical image processing system and method
CN116521912B (en) * 2023-07-04 2023-10-27 广东恒腾科技有限公司 Ultrasonic data storage management system and method based on artificial intelligence
CN116982953B (en) * 2023-09-27 2023-12-08 包头市中心医院 Pregnant and lying-in woman remote monitoring system based on 5G technology

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203995A (en) * 2017-06-09 2017-09-26 合肥工业大学 Endoscopic images intelligent analysis method and system
CN110009007A (en) * 2019-03-18 2019-07-12 武汉大学 A kind of artificial intelligence surgical assistant system towards polymorphic type disease

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354839A (en) * 2008-09-02 2009-01-28 深圳市蓝韵实业有限公司 System and method of foetus ultrasonic image teaching
CN102283675B (en) * 2011-05-27 2013-04-17 华南理工大学 Rotation judgment and error correction method in medical ultrasonic panoramic imaging
CN103955698B (en) * 2014-03-12 2017-04-05 深圳大学 The method of standard tangent plane is automatically positioned from ultrasonoscopy
CN103927559B (en) * 2014-04-17 2017-06-16 深圳大学 Ultrasonoscopy Fetal facies ministerial standard tangent plane automatic identifying method and system
CN105232081A (en) * 2014-07-09 2016-01-13 无锡祥生医学影像有限责任公司 Medical ultrasound assisted automatic diagnosis device and medical ultrasound assisted automatic diagnosis method
CN104636754B (en) * 2015-01-31 2018-02-27 华南理工大学 Intelligent image sorting technique based on tongue body subregion color characteristic
CN106548134A (en) * 2016-10-17 2017-03-29 沈阳化工大学 GA optimizes palmmprint and the vena metacarpea fusion identification method that SVM and normalization combine
CN107644419A (en) * 2017-09-30 2018-01-30 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing medical image
CN108038513A (en) * 2017-12-26 2018-05-15 北京华想联合科技有限公司 A kind of tagsort method of liver ultrasonic
CN108573490B (en) * 2018-04-25 2020-06-05 王成彦 Intelligent film reading system for tumor image data
CN109166105B (en) * 2018-08-01 2021-01-26 中国人民解放军东部战区总医院 Tumor malignancy risk layered auxiliary diagnosis system based on artificial intelligent medical image
CN110033020A (en) * 2019-03-07 2019-07-19 李胜利 The Plays tangent plane picture recognition methods of fetal ultrasound image and identifying system based on deep learning
CN110111329B (en) * 2019-05-17 2021-05-11 四川大学华西第二医院 Artificial intelligence based ultrasonic image detection method and system
CN110349141A (en) * 2019-07-04 2019-10-18 复旦大学附属肿瘤医院 A kind of breast lesion localization method and system
CN110767312A (en) * 2019-12-26 2020-02-07 杭州迪英加科技有限公司 Artificial intelligence auxiliary pathological diagnosis system and method

Similar Documents

Publication Publication Date Title
CN111310851B (en) Artificial intelligence ultrasonic auxiliary system and application thereof
KR102243830B1 (en) System for providing integrated medical diagnostic service and method thereof
US20190340763A1 (en) Systems and methods for analysis of anatomical images
EP3567525A1 (en) Systems and methods for analysis of anatomical images each captured at a unique orientation
CN113052795B (en) X-ray chest radiography image quality determination method and device
CN110379492A (en) A kind of completely new AI+PACS system and its audit report construction method
CN104424385B (en) A kind of evaluation method and device of medical image
KR20130136519A (en) Diagnosis assitance system utilizing panoramic radiographs, and diagnosis assistance program utilizing panoramic radiographs
US20220198214A1 (en) Image recognition method and device based on deep convolutional neural network
CN112950737B (en) Fundus fluorescence contrast image generation method based on deep learning
JP2020199328A (en) Medical image processing method, medical image processing device, medical image processing system, and medical image processing program
CN111462049A (en) Automatic lesion area form labeling method in mammary gland ultrasonic radiography video
CN111986182A (en) Auxiliary diagnosis method, system, electronic device and storage medium
WO2021061257A1 (en) Automated maternal and prenatal health diagnostics from ultrasound blind sweep video sequences
KR20190087681A (en) A method for determining whether a subject has an onset of cervical cancer
CN112562860A (en) Training method and device of classification model and coronary heart disease auxiliary screening method and device
CN111340794B (en) Quantification method and device for coronary artery stenosis
CN107092809A (en) A kind of ankylosing spondylitis remote medical consultation with specialists shared platform and its application method
CN111540442A (en) Medical image diagnosis scheduling management system based on computer vision
CN116664592A (en) Image-based arteriovenous blood vessel separation method and device, electronic equipment and medium
CN116228660A (en) Method and device for detecting abnormal parts of chest film
Deepika et al. Deep learning based automated screening for intracranial hemorrhages and grad-cam visualizations on non-contrast head computed tomography volumes
Haja et al. Advancing glaucoma detection with convolutional neural networks: a paradigm shift in ophthalmology
CN115719329A (en) Method and system for fusing RA ultrasonic modal synovial membrane scores based on deep learning
CN108596877A (en) Rib cage CT data analysis systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Chen Xin

Inventor after: Luo Hong

Inventor after: Zhang Bo

Inventor after: Li Kejun

Inventor after: Xia Jiao

Inventor before: Luo Hong

Inventor before: Zhang Bo

Inventor before: Li Kejun

Inventor before: Xia Jiao

GR01 Patent grant