CN111310851A - Artificial intelligence ultrasonic auxiliary system and application thereof

Artificial intelligence ultrasonic auxiliary system and application thereof

Info

Publication number
CN111310851A
Authority
CN
China
Prior art keywords
section
ultrasonic
image
module
artificial intelligence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010137967.XA
Other languages
Chinese (zh)
Other versions
CN111310851B (en)
Inventor
罗红
张波
李科君
夏娇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Wangwang Technology Co ltd
West China Second University Hospital of Sichuan University
Original Assignee
Chengdu Wangwang Technology Co ltd
West China Second University Hospital of Sichuan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Wangwang Technology Co ltd and West China Second University Hospital of Sichuan University
Priority to CN202010137967.XA
Publication of CN111310851A
Application granted
Publication of CN111310851B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention relates to the field of ultrasonic medical imaging and provides an artificial intelligence ultrasound assistance system and applications thereof, chiefly in ultrasound machines, image information systems and telemedicine. The artificial intelligence ultrasound assistance system comprises an image collection module, an image transmission module, an image framing module, an automatic classification module, an automatic section selection module, a teaching assistance module, a reading prompt module and a quality control analysis module. It automatically classifies cases, automatically selects standard sections for negative cases, and provides automatic reading prompts for suspected cases, together with teaching assistance and quality control analysis. Once the artificial intelligence ultrasound assistance system is embedded in an ultrasound machine, an image information system or a telemedicine system, junior physicians can become proficient quickly, which helps to offset the shortage of primary-care medical resources.

Description

Artificial intelligence ultrasonic auxiliary system and application thereof
Technical Field
The invention relates to the field of ultrasonic medical imaging, and in particular to an artificial intelligence ultrasound assistance system and applications thereof.
Background
In the current field of medical ultrasound there is a severe imbalance between supply and demand.
From the physicians' perspective, a fast and accurate system is needed that, in daily work, helps them determine quickly and reliably whether a patient is a positive or a negative case and automatically acquires standard sections meeting clinical requirements for negative cases; for positive cases it should provide reading prompts, so that physicians clearly recognize the relevant signs and are assisted in making an accurate diagnosis.
The current state of products and services on the market is as follows:
At present, neither ultrasound machines nor the image information systems connected to them offer any artificial intelligence function. During image acquisition the sonographer screens and judges cases from personal experience, selects standard sections for negative cases based on personal experience and scanning technique, and diagnoses abnormalities in positive cases by visual observation and recognition.
The main pain points of this situation are as follows:
Time-consuming: both the selection of standard sections for negative cases and the diagnosis of positive cases depend heavily on the physician's scanning technique and judgment; less experienced physicians can take a very long time to select a section, so working efficiency is low.
Lack of standardization: section selection for negative cases is limited entirely by the physician's personal skill and subjective judgment; under time pressure and other interference, the selected sections do not necessarily meet clinical standards.
Prone to missed diagnosis and misdiagnosis: if a small abnormality in the image is not noticed in time while diagnosing a positive case, missed diagnosis or misdiagnosis easily follows and may even lead to a medical accident.
Lagging quality control: ultrasound quality control currently relies on manual spot checks after the fact, which cannot cover all the images acquired and retained by sonographers, so problems are not discovered in time.
Lack of teaching assistance: current ultrasound machines and their image systems have no teaching assistance function and cannot provide teaching support to physicians during their work.
Telemedicine, on the other hand, refers to remote diagnosis, treatment and consultation for the sick and injured in remote areas, on islands or on ships with poor medical conditions, relying on computer, remote-sensing, telemetry and remote-control technology to make full use of the medical expertise and equipment of large hospitals or specialized medical centres. Where ultrasound examination is involved in current telemedicine, either the remote-site doctor acquires sections manually under the guidance and coordination of the consulting doctor and transmits them for remote consultation, or the consulting doctor remotely controls the ultrasound probe through a robotic arm. The two main pain points of telemedicine are as follows:
Limited reading accuracy: the reading accuracy of the consulting doctor rests on whether the ultrasound images acquired by the remote-site doctor meet clinical standards; it is limited by the remote doctor's personal experience and skill, and many acquired images do not meet the standards, which seriously affects the specialists' reading accuracy.
Long duration and physician fatigue: acquiring standard ultrasound sections is limited by the remote doctor's technique and experience, and the consulting doctor must spend considerable time and energy on remote guidance during acquisition. Because the remote doctor's acquisition skill affects the diagnosis, the consulting doctor has to watch the real-time ultrasound video and the remote doctor's scanning technique simultaneously and guide both in real time, spending a long time telling the remote doctor where to apply force, how much pressure to use and how to adjust the probe.
Prone to missed diagnosis and misdiagnosis: constrained by time, personnel interference and image quality, the consulting doctor cannot observe the acquired images comprehensively and carefully, lesion areas are easily overlooked, and risks of missed diagnosis and misdiagnosis arise.
Unreliable robotic-arm precision and safety: remote operation through a robotic arm can hardly reach the precision of on-site acquisition, and because the remotely controlled arm lacks force feedback there is a danger of injuring the patient's organs through compression.
Disclosure of Invention
To address these technical problems, the invention provides an artificial intelligence ultrasound assistance system and its application in ultrasound machines, image information systems and telemedicine systems.
To achieve this, the technical solution of the application is as follows:
An artificial intelligence ultrasound assistance system comprises an image collection module, an image transmission module, an image framing module, an automatic classification module, an automatic section selection module, a teaching assistance module, a reading prompt module and a quality control analysis module.
The image collection module: during the sonographer's examination, the ultrasound images of the person being examined are acquired through the image collection module.
The image transmission module: transmits the ultrasound images acquired by the image collection module to the image framing module over a wired or wireless network.
The image framing module: automatically splits the received ultrasound video into individual ultrasound section images, frame by frame.
The automatic classification module: classifies the framed ultrasound section images by feature matching into suspected cases and negative cases.
Preferably, the automatic classification module performs the following steps on the framed ultrasound images (an illustrative code sketch follows the list):
S1, preprocess the input image: apply a gamma transformation and mean filtering to the framed ultrasound image;
S2, establish coordinate axes on the image and randomly generate n candidate regions of equal size;
S3, compute and normalize the pixel feature tensor of each candidate region;
S4, compare the pixel feature tensor of each candidate region with the standard training pixel feature tensor and obtain a similarity value for the region using an SVM;
S5, remove the 2 regions with the lowest similarity;
S6, repeat S4-S5 until a single final region remains;
S7, feed the region obtained in step S6 into an interconnected fusion convolutional network to obtain the images of suspected cases and of negative cases to be confirmed.
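The following Python sketch is one possible reading of steps S1-S6, using OpenCV and scikit-learn. It is a minimal illustration only: the SVM is assumed to have been trained beforehand on standard-section feature tensors, and the window size, region count and gamma value are assumptions rather than values given in the patent.

```python
# Hedged sketch of the candidate-region screening in steps S1-S6 (illustrative values only).
import cv2
import numpy as np
from sklearn.svm import SVC

def preprocess(frame: np.ndarray, gamma: float = 0.8, ksize: int = 5) -> np.ndarray:
    """S1: gamma transformation followed by mean filtering."""
    norm = frame.astype(np.float32) / 255.0
    corrected = np.power(norm, gamma)
    return cv2.blur(corrected, (ksize, ksize))

def random_regions(img: np.ndarray, n: int = 16, size: int = 64, rng=None):
    """S2: randomly generate n equal-sized candidate regions (top-left corners)."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[:2]
    ys = rng.integers(0, h - size, n)
    xs = rng.integers(0, w - size, n)
    return [(y, x, size) for y, x in zip(ys, xs)]

def feature_tensor(img: np.ndarray, region) -> np.ndarray:
    """S3: flatten and L2-normalize the pixel feature tensor of a region."""
    y, x, s = region
    patch = img[y:y + s, x:x + s].reshape(-1)
    return patch / (np.linalg.norm(patch) + 1e-8)

def select_region(img: np.ndarray, svm: SVC, n: int = 16, size: int = 64):
    """S4-S6: score candidate regions with the trained SVM and iteratively drop the 2 weakest."""
    regions = random_regions(img, n, size)
    while len(regions) > 1:
        feats = np.stack([feature_tensor(img, r) for r in regions])
        scores = svm.decision_function(feats)      # similarity to the standard training tensor
        keep = np.argsort(scores)[2:] if len(regions) > 2 else np.argsort(scores)[1:]
        regions = [regions[i] for i in keep]
    return regions[0]                              # this final region is passed on to step S7
```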
Further, the system includes the automatic section selection module, which works as follows:
An ultrasound image whose matching rate against the standard sections required of sonographers by the 'Prenatal ultrasound examination guideline (2012)' of the Chinese Medical Doctor Association exceeds 90% is extracted as a standard section; an image whose matching rate lies between 70% and 90% is treated as a passing section; an image whose matching rate is below 70% is treated as a failing section.
Preferably, ultrasound images with a matching rate above 90% against the standard section are extracted first and provided to the sonographer, passing sections with a matching rate between 70% and 90% are provided next, and failing sections are not provided to the sonographer.
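Expressed as code, this grading reduces to the three thresholds above. The function and category names below are illustrative, not terms from the patent.

```python
def grade_section(match_rate: float) -> str:
    """Grade a candidate section by its matching rate against the guideline standard section."""
    if match_rate > 0.90:
        return "standard"   # extracted first and provided to the sonographer
    if match_rate >= 0.70:
        return "passing"    # provided next, after standard sections
    return "failing"        # not provided; the sonographer is prompted to re-acquire
```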
Further, the system includes the teaching assistance module, which prompts the physician during reading through text, symbols and sound, so that the physician's skill improves.
Further, the system includes the reading prompt module, which automatically marks abnormal lesion areas and measures them automatically, making it easier for the consulting physician to identify the location of abnormal lesions in the image and helping towards an accurate diagnosis.
Further, for suspected cases, the reading prompt module helps the consulting physician screen and decide whether a case is positive or negative.
Further, the system includes the quality control analysis module, which performs intelligent comprehensive analysis of the images acquired by the physician and presents the result as a quality control report.
Further, the image transmission module supports a wireless and/or a wired transmission mode. The wireless mode includes one or more of WiFi, 3G/4G/5G/6G, Bluetooth and microwave transmission; the wired mode includes one or more of Ethernet, optical fibre, VGA, DVI, HDMI and DP.
During operation of the automatic section selection module, the range of ultrasound sections suitable for automatic selection covers the sections involved in level-III prenatal ultrasound examination: the transverse section at the level of the thalami, the transverse section at the level of the lateral ventricles, the transverse section at the level of the cerebellum, the coronal section of the nose and lips, the four-chamber heart section, the left ventricular outflow tract section, the right ventricular outflow tract section, the fetal heart rate trace (Doppler or M-mode), the upper abdominal transverse section (abdominal circumference measurement section), the transverse section at the umbilical cord entrance into the abdominal wall, the transverse section of the bladder at the level of the umbilical arteries, the transverse section through both kidneys, the sagittal section of the spine, the long-axis section of the humerus (left and right), the long-axis section of the ulna and radius (left and right), the long-axis section of the femur (left and right), the long-axis section of the tibia (left and right) and the sagittal section of the cervical canal, as well as the sagittal section of the portal vein, the section of the main portal vein, the second hepatic portal section, the gallbladder section, the section of the upper extrahepatic bile duct, the long-axis section of the left kidney, the long-axis section of the right kidney, the long-axis section of the spleen (including the splenic hilum) and the section of the pancreas.
For the transverse section at the level of the thalami, the characteristic anatomical structures used for feature extraction and feature matching include the third ventricle, the cavum septi pellucidi, the posterior horn of the lateral ventricle, the choroid plexus, the lateral fissure, the caudate nucleus and the anterior horn of the lateral ventricle; for the transverse section at the level of the cerebellum they include the cisterna magna, the cerebellar hemispheres, the cerebral peduncles, the thalami and the cerebellar vermis; for the four-chamber heart section they include the left ventricle, right ventricle, left atrium and right atrium; for the upper abdominal transverse section they include the inferior vena cava, the abdominal aorta, the umbilical vein and the stomach bubble; for the transverse bladder section they include the umbilical arteries and the bladder; for the transverse section through both kidneys they include the left kidney, the right kidney and the spleen; for the long-axis section of the femur they include the vertebral arch and the vertebral body; for the transverse section at the level of the lateral ventricles they include the third ventricle, the posterior horn of the lateral ventricle, the choroid plexus, the thalamus, the caudate nucleus, the cavum septi pellucidi, the anterior horn of the lateral ventricle and the lateral fissure; for the coronal section of the nose and lips they include the philtrum, the nose, the upper lip, the lower lip and the mandible; for the left ventricular outflow tract section they include the left ventricle, the right ventricle, the left atrium, the right atrium and the ascending aorta; for the right ventricular outflow tract section they include the right ventricle, the ascending aorta and the main pulmonary artery; and for the fetal heart rate trace they include the posterior wall of the left ventricle, the interventricular septum, the left ventricle and the right ventricle.
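Purely for illustration, a fragment of this section-to-anatomy mapping could be held as a lookup table in code. The keys, spellings and the completeness check below are assumptions, and only a few sections from the list above are shown.

```python
# Hypothetical lookup of characteristic anatomical structures per section type (abridged).
REQUIRED_STRUCTURES = {
    "four_chamber_heart": {"left ventricle", "right ventricle", "left atrium", "right atrium"},
    "upper_abdominal_transverse": {"inferior vena cava", "abdominal aorta", "umbilical vein", "stomach bubble"},
    "bladder_transverse": {"umbilical arteries", "bladder"},
    "both_kidneys_transverse": {"left kidney", "right kidney", "spleen"},
}

def structures_all_present(section_type: str, detected: set) -> bool:
    """A section can only count as standard if every required structure has been detected."""
    return REQUIRED_STRUCTURES[section_type] <= detected
```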
The application also protects the use of the artificial intelligence ultrasound assistance system in an ultrasound machine: the ultrasound machine contains the artificial intelligence ultrasound assistance system, which communicates with each existing functional component of the machine so as to achieve image collection, image transmission, image framing, automatic classification, automatic section selection, teaching assistance, reading prompts and quality control analysis.
The application also protects the use of the artificial intelligence ultrasound assistance system in an image information system: the image information system contains the artificial intelligence ultrasound assistance system, which communicates with each existing functional component of the ultrasound image system so as to achieve image collection, image transmission, image framing, automatic classification, automatic section selection, teaching assistance, reading prompts and quality control analysis.
The application also protects the use of the artificial intelligence ultrasound assistance system in a telemedicine system: the telemedicine system contains the artificial intelligence ultrasound assistance system, which communicates with each existing functional component of the telemedicine system so as to achieve image collection, image transmission, image framing, automatic classification, automatic section selection, teaching assistance, reading prompts and quality control analysis. Telemedicine here refers to medical consultation activities such as remote, interactive guidance, examination, diagnosis and treatment carried out between medical institutions by combining communication, computer and network technology with medical expertise.
With this scheme, whether the artificial intelligence ultrasound assistance system is used on its own, built into an ultrasound machine, built into an image information system and combined with an external ultrasound machine, or applied in a telemedicine system, the following beneficial effects can be achieved:
1. It offsets the shortage of primary-care medical resources: primary hospitals across the country lack physicians, especially experienced sonographers. Once the artificial intelligence ultrasound assistance system is embedded in the ultrasound machine, the image information system and the telemedicine system, junior physicians can also become proficient quickly, making up for the shortage of primary-care medical resources.
2. It improves working efficiency: standard sections are selected quickly for negative cases and suspected cases are identified quickly and accurately, which raises efficiency and reduces workload, while greatly shortening the time a remote-site doctor needs to obtain a standard section.
3. It unifies examination standards: automatic section selection against the unified standard embedded in the artificial intelligence model avoids subjective, physician-dependent judgments and improves accuracy.
4. It helps prevent missed diagnosis and misdiagnosis: the artificial intelligence targets positive and suspicious cases during the examination, avoiding missed or wrong diagnoses caused by external factors interfering with the physician's judgment.
5. It strengthens quality control: through intelligent automatic section selection, every image presented to the physician has been screened by the artificial intelligence and has a matching rate above 80%, which safeguards the quality control of the ultrasound images.
6. It improves physicians' skills: the reading prompt module gives the physician prompts on what to observe when reading, and an online assistance module can be provided, so that the physician's skill improves.
7. It provides quality control analysis reports: quality control analysis of the retained ultrasound images by the artificial intelligence module is real-time, covers all images and applies a unified standard.
8. It enhances compatibility: implemented as a functional module embedded in ultrasound machines and their associated image information systems, the invention offers an open interface and is compatible with ultrasound machines of various brands and their associated ultrasound image information systems, as well as with telemedicine systems and related equipment of various brands.
Drawings
To explain the technical solutions of the embodiments of the invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an ultrasound machine with an artificial intelligence ultrasound assistance system built in, in an embodiment of the present invention.
FIG. 2 is a schematic diagram of an ultrasound image information system with an artificial intelligence ultrasound-assisted system built therein according to an embodiment of the present invention.
FIG. 3 is a schematic diagram of an artificial intelligence ultrasound assistance system.
Fig. 4 is a schematic structural diagram of the teaching assistance module 80 according to an embodiment of the present invention.
In the drawings: 10 - image collection module, 20 - image transmission module, 30 - image framing module, 40 - automatic classification module, 50 - automatic section selection module, 51 - standard section, 52 - passing section, 53 - failing section, 60 - reading prompt module, 80 - teaching assistance module, 90 - quality control analysis module.
Detailed Description
The technical solutions in the embodiments of the invention are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the scope of protection of the invention.
Example 1
Referring to figs. 1-2, fig. 1 shows the artificial intelligence ultrasound assistance system embedded in an ultrasound machine and interconnected with the machine's existing functional components (for example the probe, the ultrasound transmitting/receiving unit, signal processing and image display), so that with the assistance of the artificial intelligence module the ultrasound machine can automatically classify cases, automatically select standard sections for negative cases, and automatically provide reading prompts for suspected cases. The ultrasound machine of the invention includes, but is not limited to, the various ultrasound machines disclosed in the prior art.
Example 2
Fig. 2 shows the artificial intelligence ultrasound assistance system embedded in the various image information systems connected to an ultrasound machine and interconnected with the existing functional components of those systems (for example the image processing system and the report management system), so that with the assistance of the artificial intelligence module the ultrasound image system can automatically classify cases, automatically select standard sections for negative cases, and automatically provide reading prompts for suspected cases. The image information system of the invention includes the various image information systems already disclosed in the prior art.
This implementation mainly comprises a model training step and a model recognition step; model training consists of three steps, namely data acquisition, feature labelling and model training (a minimal code sketch of the training step follows).
Data acquisition step: acquire the image sample data currently required.
Feature labelling step: label features of the lesion area and other related areas.
Model training step: select labelled sample data for model training, and tune and improve the model parameters during training to obtain the feature classification model with the best performance.
Example 3
Referring to figs. 3-4, the artificial intelligence ultrasound assistance system of this embodiment is applied in Examples 1 and 2 and includes the following functional modules.
The image collection module: acquires, in real time from the ultrasound machine, the ultrasound images collected by the physician. Its main function is to collect the ultrasound images acquired during the examination: while the sonographer holds the ultrasound probe and performs the examination, the probe obtains ultrasound images of the organ under examination from the region being examined, and the image collection module obtains those images from the ultrasound machine in real time.
The image transmission module: transmits the ultrasound images acquired by the image collection module to the image framing module over a wired or wireless network.
The image framing module: automatically splits the received ultrasound video into individual ultrasound section images. Specifically, after the module receives the transmitted stream, it splits the ultrasound video frame by frame into two-dimensional ultrasound images.
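A minimal sketch of such frame splitting with OpenCV follows; the video source and the downstream handling of each frame are assumptions, not details from the patent.

```python
# Hedged sketch of the image framing step: split an ultrasound video into single frames.
import cv2

def split_into_frames(video_path: str):
    """Yield the received ultrasound video one two-dimensional frame at a time."""
    capture = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:          # end of stream
                break
            yield frame         # each frame is handed on to the automatic classification module
    finally:
        capture.release()
```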
The automatic classification module: classifies the framed ultrasound section images by feature matching into suspected cases and negative cases to be confirmed. It specifically comprises the following steps (a sketch of the IoU check in step S7 follows the list):
S1, preprocess the input image: apply a gamma transformation and mean filtering to the framed ultrasound image;
S2, establish coordinate axes on the image and randomly generate n candidate regions of equal size;
S3, compute and normalize the pixel feature tensor of each candidate region;
S4, compare the pixel feature tensor of each candidate region with the standard training pixel feature tensor (the convolution kernel applies dilated convolution to the image) and obtain a similarity value for the region using an SVM;
S5, remove the 2 regions with the lowest similarity;
S6, repeat S4-S5 until a single final region remains;
S7, feed the region obtained in step S6 into an interconnected fusion convolutional network to obtain suspected-case and negative-case images: the network performs encoder downsampling with skip-connection feature fusion and decoder upsampling back to the original resolution; if the computed IoU of the mask is greater than 0.9 the frame is treated as a suspected case, otherwise as a negative case to be confirmed (IoU, intersection over union, is the ratio of the overlap between the predicted mask area and the reference mask area to their union).
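For reference, a small sketch of the IoU computation and the 0.9 decision described in step S7; the boolean-mask representation and the handling of the borderline value are assumptions.

```python
# Hedged sketch of the IoU-based decision in step S7.
import numpy as np

def mask_iou(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    pred, ref = pred_mask.astype(bool), ref_mask.astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 0.0
    return np.logical_and(pred, ref).sum() / union

def classify_frame(pred_mask: np.ndarray, ref_mask: np.ndarray) -> str:
    """IoU > 0.9 -> suspected case; otherwise -> negative case to be confirmed."""
    return "suspected" if mask_iou(pred_mask, ref_mask) > 0.9 else "negative_to_confirm"
```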
If the frame is a negative case to be confirmed and the physician confirms it, the process enters the automatic section selection step; if it is a suspected case, the process enters the reading prompt step, where the lesion area is segmented and indicated to the physician.
The automatic section selection module: extracts ultrasound images whose matching rate against the standard section exceeds 90% as standard sections, treats images with a matching rate between 70% and 90% as passing sections, and treats images with a matching rate below 70% as failing sections.
Specifically, the matching is performed against the standard sections required of sonographers by the 'Prenatal ultrasound examination guideline (2012)' of the Chinese Medical Doctor Association. Images with a matching rate above 90% are extracted first as standard sections and provided to the sonographer; images with a matching rate between 70% and 90% are then provided as passing sections; images with a matching rate below 70% are failing sections, are not provided to the sonographer, and the sonographer is prompted to continue and re-acquire the image.
In short, ultrasound images with a matching rate above 90% against the standard section are extracted as standard sections, images with a matching rate of 70-90% serve as passing sections, and images with a matching rate below 70% are failing sections.
For negative cases, two operations are performed. 1. Image preprocessing: the image is denoised with Gaussian filtering, and Gabor filtering removes noise to extract useful information. 2. Temporal feature extraction over the image sequence: each encoded frame is fed into the deep convolutional neural network discrimination submodule.
The deep convolutional neural network discrimination submodule is structured as follows (an illustrative code sketch follows the list):
First layer: 32 convolution kernels of size 3 x 3, followed by pooling;
Second layer: 64 stacked 3 x 3 convolution kernels with pooling, plus a residual module;
Third layer: 128 3 x 3 convolution kernels with pooling, plus a residual module;
Fourth layer: 256 3 x 3 convolution kernels with pooling, plus a residual module;
Fifth layer: 512 3 x 3 convolution kernels with pooling, followed by a fully connected layer.
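The layer description above maps naturally onto a small convolutional network. The following PyTorch sketch is only one interpretation of that description: the kernel counts come from the text, while the residual block design, pooling type, input channels and output size are assumptions.

```python
# Hedged PyTorch sketch of the 5-layer discrimination submodule described above.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Assumed form of the 'residual module': two 3x3 convolutions with a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

class DiscriminationSubmodule(nn.Module):
    def __init__(self, in_channels: int = 1, num_outputs: int = 2):
        super().__init__()

        def stage(cin, cout, residual):
            layers = [nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2)]
            if residual:
                layers.append(ResidualBlock(cout))
            return layers

        self.features = nn.Sequential(
            *stage(in_channels, 32, residual=False),   # layer 1: 3x3, 32 kernels, pooling
            *stage(32, 64, residual=True),             # layer 2: 3x3, 64 kernels, pooling + residual
            *stage(64, 128, residual=True),            # layer 3: 3x3, 128 kernels, pooling + residual
            *stage(128, 256, residual=True),           # layer 4: 3x3, 256 kernels, pooling + residual
            *stage(256, 512, residual=False),          # layer 5: 3x3, 512 kernels, pooling
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(512, num_outputs)          # fully connected layer

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

# usage: feature tensor / logits for a single-channel 256x256 ultrasound frame
# logits = DiscriminationSubmodule()(torch.randn(1, 1, 256, 256))
```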
The output feature tensors of the two modules are fused to obtain the final basis for judging, comparing and matching the input ultrasound images; images with a matching rate above 90% are extracted first and provided to the sonographer, then the passing sections with a matching rate between 70% and 90%, while failing sections are not provided to the physician.
During the examination the physician reviews the section chosen by the automatic section selection module. If it is approved, that frame is used as the standard or passing section for the next step; if not, the automatic section selection module keeps selecting until the physician is satisfied. In addition, if during automatic selection the matching rate of the images acquired by the physician stays below 70%, the physician is prompted and continues to re-acquire the ultrasound images.
The teaching assistance module: prompts the physician about what to observe in the image by means of text, symbols and sound, so that the physician's skill improves. Specifically, for a negative case, while the automatic section selection module helps the physician obtain a standard section, the frame is also passed to the teaching assistance module, where a previously trained teaching assistance model based on a convolutional neural network identifies the anatomical structures of the organs contained in the frame.
The physician is then reminded what to observe through text, symbols and sound. For example, in the four-chamber heart view the physician is prompted to pay attention to the position of the heart, the orientation of the cardiac apex, and so on, so that the physician keeps learning and improving during daily work.
During automatic section selection, the range of ultrasound sections suitable for automatic selection covers the sections involved in level-III prenatal ultrasound examination: the transverse section at the level of the thalami, the transverse section at the level of the lateral ventricles, the transverse section at the level of the cerebellum, the coronal section of the nose and lips, the four-chamber heart section, the left ventricular outflow tract section, the right ventricular outflow tract section, the fetal heart rate trace (Doppler or M-mode), the upper abdominal transverse section (abdominal circumference measurement section), the transverse section at the umbilical cord entrance into the abdominal wall, the transverse section of the bladder at the level of the umbilical arteries, the transverse section through both kidneys, the sagittal section of the spine, the long-axis section of the humerus (left and right), the long-axis section of the ulna and radius (left and right), the long-axis section of the femur (left and right), the long-axis section of the tibia (left and right) and the sagittal section of the cervical canal, as well as the sagittal section of the portal vein, the section of the main portal vein, the second hepatic portal section, the gallbladder section, the section of the upper extrahepatic bile duct, the long-axis section of the left kidney, the long-axis section of the right kidney, the long-axis section of the spleen (including the splenic hilum) and the section of the pancreas. For the transverse section at the level of the thalami, the characteristic anatomical structures used for feature extraction and feature matching include the third ventricle, the cavum septi pellucidi, the posterior horn of the lateral ventricle, the choroid plexus, the lateral fissure, the caudate nucleus and the anterior horn of the lateral ventricle; for the transverse section at the level of the cerebellum they include the cisterna magna, the cerebellar hemispheres, the cerebral peduncles, the thalami and the cerebellar vermis; for the four-chamber heart section they include the left ventricle, right ventricle, left atrium and right atrium; for the upper abdominal transverse section they include the inferior vena cava, the abdominal aorta, the umbilical vein and the stomach bubble; for the transverse bladder section they include the umbilical arteries and the bladder; for the transverse section through both kidneys they include the left kidney, the right kidney and the spleen; for the long-axis section of the femur they include the vertebral arch and the vertebral body; for the transverse section at the level of the lateral ventricles they include the third ventricle, the posterior horn of the lateral ventricle, the choroid plexus, the thalamus, the caudate nucleus, the cavum septi pellucidi, the anterior horn of the lateral ventricle and the lateral fissure; for the coronal section of the nose and lips they include the philtrum, the nose, the upper lip, the lower lip and the mandible; for the left ventricular outflow tract section they include the left ventricle, the right ventricle, the left atrium, the right atrium and the ascending aorta; for the right ventricular outflow tract section they include the right ventricle, the ascending aorta and the main pulmonary artery; and for the fetal heart rate trace they include the posterior wall of the left ventricle, the interventricular septum, the left ventricle and the right ventricle.
Example 4
Referring to fig. 3, the artificial intelligence ultrasound assistance system of this embodiment can be applied in Examples 1, 2 and 5 and includes the following functional modules; building on the automatic classification module of Example 3, they are used to process suspected cases.
The reading prompt module: automatically marks abnormal lesion areas and measures them automatically, making it easier for the consulting physician to identify abnormal anatomical structures in the image and helping towards an accurate diagnosis. Specifically, two operations are carried out for a suspected case. 1. Automatic delineation: image features from high to low scale are collected and concatenated; an image convolutional network derives a suspicious lesion area from these features, a convolutional regression network locates the area precisely and outputs its coordinates, and the delineation is marked. 2. Prompting and explanation: the anatomical name of the suspicious lesion area is added, related content in the feature library is compared, and the necessary explanations are added and reasonably overlaid on the suspicious area; if the anatomical structure features of the organs contained in the area match the dysplasia library or the lesion-abnormality feature library closely, the physician is prompted by text, symbols or sound, helping the physician reach a final, accurate diagnosis of positive or negative cases and reducing the risk of missed diagnosis and misdiagnosis.
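As an illustration only, marking a located suspicious area on the frame could look like the sketch below; the probability map is assumed to come from the convolutional network mentioned above, and the fixed box size, colour and label text are assumptions.

```python
# Hedged sketch of the reading-prompt overlay: box the most suspicious location and label it.
import cv2
import numpy as np

def mark_suspicious_region(frame: np.ndarray, prob_map: np.ndarray,
                           label: str = "suspicious area", box: int = 48) -> np.ndarray:
    """prob_map: per-pixel lesion probability with the same height/width as frame;
    a regression network could refine the box coordinates further."""
    y, x = np.unravel_index(np.argmax(prob_map), prob_map.shape)
    h, w = frame.shape[:2]
    x0, y0 = max(0, x - box // 2), max(0, y - box // 2)
    x1, y1 = min(w - 1, x + box // 2), min(h - 1, y + box // 2)
    marked = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR) if frame.ndim == 2 else frame.copy()
    cv2.rectangle(marked, (x0, y0), (x1, y1), (0, 0, 255), 2)
    cv2.putText(marked, label, (x0, max(12, y0 - 5)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    return marked
```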
The quality control analysis module: performs intelligent comprehensive analysis of the images acquired by the remote-site physician and presents the result as reports. Specifically, the retained ultrasound images, which may be selected manually by the physician or obtained automatically by the automatic section selection module, are transmitted through the image transmission module to the quality control analysis module. A pre-trained quality control analysis model based on a convolutional neural network compares the sharpness of each frame, the standardness of the section and the physician's image annotations to obtain a judgment score, which is then displayed in various formats to produce analysis reports 1, 2, ..., N. Examples include a departmental cross-sectional report, which compares the overall ultrasound image quality control values of every physician in the current month, or their quality control values for a particular section (for example the transverse section through both kidneys) or for a particular anatomical structure within a section; a departmental longitudinal report, which compares the department's overall quality control values and per-section values month by month and day by day; a personal cross-sectional report, which compares, for each physician, the quality control values of each section and of each examined patient month by month and day by day; and a personal longitudinal report, which compares a single physician's overall quality control values and per-image values month by month and day by day.
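A toy sketch of how per-image quality-control scores could be aggregated into one of the reports described above (a departmental cross-sectional report for one month); the record fields and the averaging rule are assumptions, not details from the patent.

```python
# Hedged sketch: aggregate per-image quality-control scores into a per-physician monthly report.
from collections import defaultdict
from statistics import mean

def monthly_report(records: list, month: str) -> dict:
    """records: dicts such as {"physician": "A", "month": "2020-03",
    "section": "both_kidneys", "score": 87.5} (illustrative field names)."""
    per_physician = defaultdict(list)
    for r in records:
        if r["month"] == month:
            per_physician[r["physician"]].append(r["score"])
    return {doc: round(mean(scores), 1) for doc, scores in per_physician.items()}

# usage
report = monthly_report(
    [{"physician": "A", "month": "2020-03", "section": "both_kidneys", "score": 88},
     {"physician": "B", "month": "2020-03", "section": "four_chamber_heart", "score": 92}],
    month="2020-03",
)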
Example 5
Referring to figs. 3-4, an embodiment of the invention provides the application of the artificial intelligence ultrasound assistance system in a telemedicine system. Those skilled in the art will understand that the artificial intelligence assistance system is embedded in the telemedicine system and interconnected with its existing functional components, so that with the assistance of the artificial intelligence module the telemedicine system can automatically classify cases, automatically select standard sections for negative cases, and automatically provide reading prompts for suspected cases.
This implementation mainly comprises a model training step and a model recognition step; model training consists of three steps: data acquisition, feature labelling and model training.
1) Data acquisition step: acquire the image sample data currently required.
2) Feature labelling step: label features of the lesion area and other related areas.
3) Model training step: select labelled sample data for model training, and tune and improve the model parameters during training to obtain the feature classification model with the best performance.
It will be understood by those of ordinary skill in the art that all or part of the processes in the system implementing the embodiments described above can be implemented by hardware related to instructions of a computer program, which can be stored in a non-volatile computer readable storage medium, and when executed, can include the processes of the embodiments of the methods and modules described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the system is divided into different functional units or modules to perform all or part of the above-mentioned functions.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; the modifications or substitutions do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and all are included in the scope of the present invention.

Claims (14)

1. An artificial intelligence ultrasound assistance system, characterized in that it comprises, connected in signal-transmission sequence, an image collection module, an image transmission module, an image framing module, an automatic classification module and an automatic section selection module, wherein:
the image collection module: during the examination, acquires ultrasound images of the person being examined;
the image transmission module: transmits the ultrasound images acquired by the image collection module to the image framing module over a wired or wireless network;
the image framing module: splits the received, transmitted ultrasound images into individual ultrasound section frames;
the automatic classification module: classifies the framed ultrasound images by feature matching into negative cases to be confirmed and suspected cases, which the physician finally confirms as negative or positive cases;
the automatic section selection module: for the ultrasound images of a negative case to be confirmed, after confirmation by the physician, the images enter the automatic section selection module; ultrasound images with a matching rate above 90% against the standard section are extracted as standard sections, images with a matching rate between 70% and 90% serve as passing sections, and images with a matching rate below 70% serve as non-standard sections; images with a matching rate above 90% are extracted first and provided to the sonographer, then the passing sections with a matching rate between 70% and 90% are extracted, and failing sections are not provided to the physician; if no ultrasound image with a matching rate above 70% is found during comparison, a prompt is returned telling the physician that the matching result does not meet the requirement and that the images must be re-acquired.
2. The artificial intelligence ultrasound assistance system of claim 1, characterized by further comprising a reading prompt module: for suspected cases, the reading prompt module automatically delineates and marks abnormal lesion areas and measures them automatically, making it easier for the consulting physician to identify abnormal anatomical structures in the image.
3. The artificial intelligence ultrasound assistance system of claim 1, characterized by further comprising a teaching assistance module applicable to four situations: negative cases to be confirmed, suspected cases, negative cases and positive cases; ultrasound images of these four types are transmitted to the teaching assistance module, a pre-trained teaching assistance model based on a convolutional neural network identifies the anatomical structures of the organs contained in the frame, and the physician is prompted about what to observe by text, symbols and sound.
4. The artificial intelligence ultrasound assistance system of claim 1, characterized by further comprising a quality control analysis module, which performs intelligent comprehensive analysis of the images acquired by the remote-site physician and presents the result as reports.
5. The artificial intelligence ultrasound assistance system of claim 1, characterized in that the image transmission module supports a wireless and/or a wired transmission mode, the wireless mode including one or more of WiFi, 3G/4G/5G/6G, Bluetooth and microwave transmission, and the wired mode including one or more of Ethernet, optical fibre, VGA, DVI, HDMI and DP.
6. The artificial intelligence ultrasound assistance system of claim 1, characterized in that the image framing module serves as the basis of the automatic classification step: the subsequent automatic classification, automatic section selection, teaching assistance, reading prompts and quality control analysis all rely on automatic framing, which splits the video stream of the ultrasound machine into single frames on which they operate.
7. The artificial intelligence ultrasound-assisted system of claim 1, wherein: the automatic classification module comprises the following working steps (sketched in code after this list):
1) preprocessing the input image with gamma transformation and mean filtering;
2) establishing coordinate axes over the image and randomly generating n equally sized candidate regions;
3) computing and normalizing the pixel feature tensor of each candidate region;
4) computing a similarity value between the pixel feature tensor of each candidate region and the standard training pixel feature tensor using an SVM;
5) removing the 2 regions with the lowest similarity in each round;
6) repeating steps 4) and 5) until a single region remains;
7) feeding the remaining region into a cross-connected fusion convolutional network to determine whether it is a lesion region (encoder down-sampling, skip-connection feature fusion, and decoder up-sampling to restore the original resolution, producing a lesion mask);
8) computing the IoU of the resulting mask: if it exceeds 0.9 the case is classified as suspected, otherwise as a negative case to be diagnosed.
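To make steps 1)-8) concrete, here is a compressed Python sketch of the flow under simplifying assumptions: the SVM is a trained scikit-learn `SVC` used only through its decision function as a similarity score, candidate regions are axis-aligned crops resized to a fixed size, the fusion network is abstracted behind a `lesion_mask_fn` callback, and the IoU is computed against a reference mask `ref_mask` because the claim does not specify the comparison target. None of this is the patented implementation, only an illustration of the pipeline.

```python
import cv2
import numpy as np

def preprocess(image: np.ndarray, gamma: float = 0.8, ksize: int = 5) -> np.ndarray:
    """Step 1): gamma transformation followed by mean filtering."""
    corrected = np.power(image.astype(np.float32) / 255.0, gamma)
    return cv2.blur((corrected * 255).astype(np.uint8), (ksize, ksize))

def region_features(image: np.ndarray, box) -> np.ndarray:
    """Step 3): a normalized pixel feature vector for one candidate region (flattened crop).

    The SVM is assumed to have been trained on features of the same dimensionality.
    """
    x, y, w, h = box
    crop = cv2.resize(image[y:y + h, x:x + w], (32, 32)).astype(np.float32).ravel()
    return crop / (np.linalg.norm(crop) + 1e-8)

def eliminate_candidates(image, boxes, svm):
    """Steps 4)-6): score candidates with the SVM and drop the 2 weakest per round."""
    boxes = list(boxes)
    while len(boxes) > 1:
        scores = svm.decision_function([region_features(image, b) for b in boxes])
        order = np.argsort(scores)                       # lowest similarity first
        drop = set(order[:2]) if len(boxes) > 2 else {order[0]}
        boxes = [b for i, b in enumerate(boxes) if i not in drop]
    return boxes[0]

def classify(image, boxes, svm, lesion_mask_fn, ref_mask):
    """Steps 7)-8): segment the surviving region and threshold the IoU at 0.9."""
    best = eliminate_candidates(preprocess(image), boxes, svm)
    mask = lesion_mask_fn(image, best)                   # stand-in for the fusion network
    inter = np.logical_and(mask, ref_mask).sum()
    union = np.logical_or(mask, ref_mask).sum() + 1e-8
    return "suspected" if inter / union > 0.9 else "negative case to be diagnosed"
```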
8. The artificial intelligence ultrasound-assisted system of claim 7, wherein: the automatic classification module performs feature extraction and comparison on the framed images: if no lesion feature is found, the image is listed as a negative case to be diagnosed; if a lesion feature is found and prompted, the image is listed as a suspected case; the doctor then makes the final confirmation of the negative cases and screens the suspected cases to confirm whether each is a positive case or a negative case.
9. The artificial intelligence ultrasound-assisted system of claim 1, wherein: for negative cases, automatic film selection is carried out in two steps: 1) image preprocessing: the image is denoised with Gaussian filtering, and Gabor filtering removes residual noise to retain the useful information; 2) sequence-image temporal feature extraction: each encoded frame is fed into a deep convolutional neural network discrimination submodule, and the output feature tensors of the two submodules are fused to obtain the final discrimination basis used to compare and match the input ultrasonic image.
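The preprocessing half of this claim (Gaussian denoising followed by Gabor filtering) and a naive feature-fusion step can be illustrated with OpenCV and NumPy as below; the kernel sizes, Gabor parameters and concatenation-based fusion are arbitrary illustrative choices rather than those used by the system.

```python
import cv2
import numpy as np

def preprocess_for_selection(image: np.ndarray) -> np.ndarray:
    """Claim 9 step 1): Gaussian denoising, then Gabor filtering to keep useful texture."""
    denoised = cv2.GaussianBlur(image, (5, 5), sigmaX=1.0)
    gabor = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=0.0,
                               lambd=10.0, gamma=0.5, psi=0.0)
    return cv2.filter2D(denoised, ddepth=-1, kernel=gabor)

def fuse_features(temporal_feat: np.ndarray, cnn_feat: np.ndarray) -> np.ndarray:
    """Claim 9 step 2), simplified: fuse the two submodules' feature tensors by concatenation."""
    return np.concatenate([temporal_feat.ravel(), cnn_feat.ravel()])
```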
10. The artificial intelligence ultrasound-assisted system of claim 1, wherein: the automatic film selection module extracts ultrasonic images whose matching rate with the standard sections required by the Prenatal Ultrasound Examination Guideline (2012) of the Ultrasound Physicians Branch of the Chinese Medical Doctor Association exceeds 90% as standard sections, images with a matching rate between 70% and 90% as passing sections, and images with a matching rate below 70% as failing sections; images with a matching rate above 90% are extracted preferentially and provided to the sonographer, followed by the passing sections with a matching rate between 70% and 90%; failing sections with a matching rate below 70% are not provided to the sonographer, and the system prompts the sonographer to continue image acquisition.
11. The artificial intelligence ultrasound-assisted system of claim 1, wherein: after the automatic film selection module selects a standard section or a passing section, the selected section is presented to the doctor for judgment; if the doctor approves it, the frame image is kept as the standard section or passing section and the next operation is performed; if the doctor does not approve it, the automatic film selection module continues selecting until the doctor is satisfied with a standard section or passing section.
12. The artificial intelligence ultrasound-assisted system of claim 1, wherein: when the automatic film selection module performs automatic film selection, the range of ultrasonic image sections to which automatic selection applies comprises: for the level-III prenatal ultrasound examination, the thalamus horizontal cross section, lateral ventricle horizontal cross section, cerebellum horizontal cross section, nasolabial coronal section, four-chamber heart section, left ventricular outflow tract section, right ventricular outflow tract section, fetal heart rate graph (Doppler or M-mode), upper abdominal cross section (abdominal circumference measurement section), abdominal cross section at the umbilical cord entrance into the abdominal wall, bladder cross section at the level of the umbilical arteries, bilateral kidney cross section, sagittal section of the spine, humerus long-axis section (left, right), ulna and radius long-axis section (left, right), femur long-axis section (left, right), tibia long-axis section (left, right) and sagittal section of the cervical canal, as well as the portal vein sagittal section, portal vein trunk section, second hepatic portal section, gallbladder section, upper extrahepatic bile duct section, left kidney long-axis section, right kidney long-axis section, spleen long-axis section (including the splenic hilum) and pancreas cross section.
13. The artificial intelligence ultrasound-assisted system of claim 12, wherein: for the thalamus horizontal cross section, the characteristic anatomical organs used for feature extraction and feature matching comprise the third ventricle, cavum septi pellucidi, posterior horn of the lateral ventricle, choroid plexus, lateral fissure, caudate nucleus and anterior horn of the lateral ventricle; for the cerebellum horizontal cross section, the characteristic anatomical organs comprise the posterior cranial fossa cistern, cerebellar hemispheres, cerebral peduncles, thalamus and cerebellar vermis; for the four-chamber heart section, the characteristic anatomical organs comprise the left ventricle, right ventricle, left atrium and right atrium; for the upper abdominal cross section, the characteristic anatomical organs comprise the inferior vena cava, abdominal aorta, umbilical vein and stomach bubble; for the bladder horizontal cross section, the characteristic anatomical organs comprise the umbilical arteries and the bladder; for the bilateral kidney cross section, the characteristic anatomical organs comprise the left kidney, right kidney and spleen; for the femur long-axis section, the characteristic anatomical organs comprise the vertebral arch and vertebral body; for the lateral ventricle horizontal cross section, the characteristic anatomical organs comprise the third ventricle, posterior horn of the lateral ventricle, choroid plexus, thalamus, caudate nucleus, cavum septi pellucidi, anterior horn of the lateral ventricle and lateral fissure; for the nasolabial coronal section, the characteristic anatomical organs comprise the philtrum and nose, upper lip, lower lip and mandible; for the left ventricular outflow tract section, the characteristic anatomical organs comprise the left ventricle, right ventricle, left atrium, right atrium and ascending aorta; for the right ventricular outflow tract section, the characteristic anatomical organs comprise the right ventricle, ascending aorta and main pulmonary artery; for the fetal heart rate graph, the characteristic anatomical organs comprise the posterior wall of the left ventricle, interventricular septum, left ventricle and right ventricle.
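The per-section lists of characteristic anatomical organs above amount to a lookup table. One simple way such a table could drive a matching rate is sketched below, where the rate is taken to be the fraction of expected structures detected in a frame; this definition and the excerpted table are assumptions for illustration, since the claim does not specify how the rate is computed.

```python
# Excerpt of a hypothetical lookup table built from the lists in claim 13.
EXPECTED_STRUCTURES = {
    "four-chamber heart section": {"left ventricle", "right ventricle", "left atrium", "right atrium"},
    "upper abdominal cross section": {"inferior vena cava", "abdominal aorta", "umbilical vein", "stomach bubble"},
    "bladder horizontal cross section": {"umbilical artery", "bladder"},
}

def matching_rate(section: str, detected: set) -> float:
    """Fraction of the section's expected characteristic structures found in the frame."""
    expected = EXPECTED_STRUCTURES[section]
    return len(expected & detected) / len(expected)
```

For example, `matching_rate("four-chamber heart section", {"left ventricle", "right atrium"})` would return 0.5 under these assumptions.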
14. An application of the artificial intelligence ultrasound-assisted system, characterized in that: the artificial intelligence ultrasound-assisted system is applied in an ultrasonic machine and communicates with each existing functional component in the ultrasonic machine; the artificial intelligence ultrasound-assisted system is applied in an image information system associated with the ultrasonic machine and communicates with each existing functional component in that image information system; and the artificial intelligence ultrasound-assisted system is applied in a telemedicine system and communicates with each existing functional component in the telemedicine system.
CN202010137967.XA 2020-03-03 2020-03-03 Artificial intelligence ultrasonic auxiliary system and application thereof Active CN111310851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010137967.XA CN111310851B (en) 2020-03-03 2020-03-03 Artificial intelligence ultrasonic auxiliary system and application thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010137967.XA CN111310851B (en) 2020-03-03 2020-03-03 Artificial intelligence ultrasonic auxiliary system and application thereof

Publications (2)

Publication Number Publication Date
CN111310851A true CN111310851A (en) 2020-06-19
CN111310851B CN111310851B (en) 2023-04-28

Family

ID=71161965

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010137967.XA Active CN111310851B (en) 2020-03-03 2020-03-03 Artificial intelligence ultrasonic auxiliary system and application thereof

Country Status (1)

Country Link
CN (1) CN111310851B (en)


Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101354839A (en) * 2008-09-02 2009-01-28 深圳市蓝韵实业有限公司 System and method of foetus ultrasonic image teaching
CN102283675A (en) * 2011-05-27 2011-12-21 华南理工大学 Rotation judgment and error correction method in medical ultrasonic panoramic imaging
CN103927559A (en) * 2014-04-17 2014-07-16 深圳大学 Automatic recognition method and system of standard section of fetus face of ultrasound image
CN103955698A (en) * 2014-03-12 2014-07-30 深圳大学 Method for automatically positioning standard tangent plane from ultrasonic image
CN104636754A (en) * 2015-01-31 2015-05-20 华南理工大学 Intelligent image classifying method based on tongue body partition color feature
CN105232081A (en) * 2014-07-09 2016-01-13 无锡祥生医学影像有限责任公司 Medical ultrasound assisted automatic diagnosis device and medical ultrasound assisted automatic diagnosis method
CN106548134A (en) * 2016-10-17 2017-03-29 沈阳化工大学 GA optimizes palmmprint and the vena metacarpea fusion identification method that SVM and normalization combine
CN107203995A (en) * 2017-06-09 2017-09-26 合肥工业大学 Endoscopic images intelligent analysis method and system
CN107644419A (en) * 2017-09-30 2018-01-30 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing medical image
CN108038513A (en) * 2017-12-26 2018-05-15 北京华想联合科技有限公司 A kind of tagsort method of liver ultrasonic
CN108573490A (en) * 2018-04-25 2018-09-25 王成彦 A kind of intelligent read tablet system for tumor imaging data
CN109166105A (en) * 2018-08-01 2019-01-08 中国人民解放军南京军区南京总医院 The malignancy of tumor risk stratification assistant diagnosis system of artificial intelligence medical image
CN110009007A (en) * 2019-03-18 2019-07-12 武汉大学 A kind of artificial intelligence surgical assistant system towards polymorphic type disease
CN110033020A (en) * 2019-03-07 2019-07-19 李胜利 The Plays tangent plane picture recognition methods of fetal ultrasound image and identifying system based on deep learning
CN110111329A (en) * 2019-05-17 2019-08-09 四川大学华西第二医院 One kind being based on artificial intelligence ultrasonic image detection method and system
CN110349141A (en) * 2019-07-04 2019-10-18 复旦大学附属肿瘤医院 A kind of breast lesion localization method and system
CN110767312A (en) * 2019-12-26 2020-02-07 杭州迪英加科技有限公司 Artificial intelligence auxiliary pathological diagnosis system and method


Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111820950A (en) * 2020-06-23 2020-10-27 无锡祥生医疗科技股份有限公司 Personalized information determination device and ultrasonic training method
CN111754485A (en) * 2020-06-24 2020-10-09 成都市温江区人民医院 Artificial intelligence ultrasonic auxiliary system for liver
CN111860636A (en) * 2020-07-16 2020-10-30 无锡祥生医疗科技股份有限公司 Measurement information prompting method and ultrasonic training method
CN111798967A (en) * 2020-07-18 2020-10-20 贵州精准健康数据有限公司 Wisdom ultrasonic testing system
CN112102925A (en) * 2020-09-11 2020-12-18 高容科技(上海)有限公司 Intraoperative artificial intelligence platform for assisting minimally invasive surgery
CN112641466A (en) * 2020-12-31 2021-04-13 北京小白世纪网络科技有限公司 Ultrasonic artificial intelligence auxiliary diagnosis method and device
CN112992338A (en) * 2021-02-08 2021-06-18 青岛大学附属医院 Learning system combining ultrasonic inspection technology and artificial intelligence technology
CN112991289A (en) * 2021-03-10 2021-06-18 深圳市鹭鸣科技有限公司 Method and device for processing standard image section
CN112991289B (en) * 2021-03-10 2024-03-26 深圳市鹭鸣科技有限公司 Processing method and device for standard section of image
CN113035329A (en) * 2021-03-22 2021-06-25 杭州联众医疗科技股份有限公司 Medical image quality control system
CN113469388B (en) * 2021-09-06 2021-11-23 江苏中车数字科技有限公司 Maintenance system and method for rail transit vehicle
CN113469388A (en) * 2021-09-06 2021-10-01 江苏中车数字科技有限公司 Maintenance system and method for rail transit vehicle
CN113741209A (en) * 2021-09-27 2021-12-03 成都脉讯科技有限公司 Intelligent AI quality control system for obstetrics and gynecology department
CN114334095A (en) * 2021-12-31 2022-04-12 深圳度影医疗科技有限公司 Intelligent identification method and system for ultrasonic examination and terminal equipment
CN114783572A (en) * 2022-04-07 2022-07-22 西安和华瑞博科技有限公司 Medical image processing method and device and medical image transmission system
CN114783575A (en) * 2022-04-20 2022-07-22 北京中捷互联信息技术有限公司 Medical image processing system and method
CN114783575B (en) * 2022-04-20 2023-09-29 广州唯顶软件科技有限公司 Medical image processing system and method
CN116521912A (en) * 2023-07-04 2023-08-01 广东恒腾科技有限公司 Ultrasonic data storage management system and method based on artificial intelligence
CN116521912B (en) * 2023-07-04 2023-10-27 广东恒腾科技有限公司 Ultrasonic data storage management system and method based on artificial intelligence
CN116982953A (en) * 2023-09-27 2023-11-03 包头市中心医院 Pregnant and lying-in woman remote monitoring system based on 5G technology
CN116982953B (en) * 2023-09-27 2023-12-08 包头市中心医院 Pregnant and lying-in woman remote monitoring system based on 5G technology

Also Published As

Publication number Publication date
CN111310851B (en) 2023-04-28

Similar Documents

Publication Publication Date Title
CN111310851B (en) Artificial intelligence ultrasonic auxiliary system and application thereof
CN111260209B (en) Cardiovascular disease risk prediction and evaluation system combining electronic medical record and medical image
CN107564580B (en) Gastroscope visual aids processing system and method based on integrated study
CN113011485B (en) Multi-mode multi-disease long-tail distribution ophthalmic disease classification model training method and device
WO2019052063A1 (en) Medical image classification processing system and method based on artificial intelligence
KR20190105210A (en) System for providing integrated medical diagnostic service and method thereof
CN109948719B (en) Automatic fundus image quality classification method based on residual dense module network structure
CN109615633A (en) Crohn disease assistant diagnosis system and method under a kind of colonoscopy based on deep learning
WO2012128121A1 (en) Diagnosis assistance system utilizing panoramic radiographs, and diagnosis assistance program utilizing panoramic radiographs
CN109411084A (en) A kind of intestinal tuberculosis assistant diagnosis system and method based on deep learning
CN112950737B (en) Fundus fluorescence contrast image generation method based on deep learning
CN113962311A (en) Knowledge data and artificial intelligence driven ophthalmic multi-disease identification system
WO2019098415A1 (en) Method for determining whether subject has developed cervical cancer, and device using same
CN112562860A (en) Training method and device of classification model and coronary heart disease auxiliary screening method and device
WO2018176717A1 (en) Medical cloud platform-based breast screening image analysis system and method
Weinreich et al. Development of an artificially intelligent mobile phone application to identify cardiac devices on chest radiography
CN111402184B (en) Method and system for realizing remote fundus screening and health service
CN107427210A (en) Method and apparatus for the noninvasively estimating of intracranial pressure
KR102034648B1 (en) Medical Image Management System, Method and Computer Readable Recording Medium
CN111540442A (en) Medical image diagnosis scheduling management system based on computer vision
CN114999638B (en) Big data visualization processing method and system for medical diagnosis based on artificial intelligence
CN116664592A (en) Image-based arteriovenous blood vessel separation method and device, electronic equipment and medium
CN115089112B (en) Post-stroke cognitive impairment risk assessment model building method and device and electronic equipment
CN115050456A (en) Artificial intelligence medical image automatic diagnosis system and method
Haja et al. Advancing glaucoma detection with convolutional neural networks: a paradigm shift in ophthalmology

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB03 Change of inventor or designer information

Inventor after: Chen Xin

Inventor after: Luo Hong

Inventor after: Zhang Bo

Inventor after: Li Kejun

Inventor after: Xia Jiao

Inventor before: Luo Hong

Inventor before: Zhang Bo

Inventor before: Li Kejun

Inventor before: Xia Jiao

CB03 Change of inventor or designer information
GR01 Patent grant
GR01 Patent grant