CN112597982A - Image classification method, device, equipment and medium based on artificial intelligence - Google Patents

Image classification method, device, equipment and medium based on artificial intelligence

Info

Publication number
CN112597982A
CN112597982A (application number CN202110237602.9A)
Authority
CN
China
Prior art keywords
image
dimensional
frequency spectrum
detected
features
Prior art date
Legal status
Granted
Application number
CN202110237602.9A
Other languages
Chinese (zh)
Other versions
CN112597982B (en)
Inventor
何昆仑
杨菲菲
郭华源
林锡祥
陈煦
邓玉姣
钟琴
汪驰
李瑶
于立恒
段永杰
Current Assignee
Chinese PLA General Hospital
Original Assignee
Chinese PLA General Hospital
Priority date
Filing date
Publication date
Application filed by Chinese PLA General Hospital filed Critical Chinese PLA General Hospital
Priority to CN202110237602.9A
Publication of CN112597982A
Application granted
Publication of CN112597982B
Legal status: Active

Classifications

    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • A61B8/0883 Detecting organic movements or changes, e.g. tumours, cysts, swellings, for diagnosis of the heart
    • A61B8/5223 Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves, involving processing of medical diagnostic data for extracting a diagnostic or physiological parameter from medical diagnostic data
    • G06N3/045 Combinations of networks
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods
    • G06V10/267 Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06F2218/10 Feature extraction by analysing the shape of a waveform, e.g. extracting parameters relating to peaks
    • G06F2218/12 Classification; Matching

Abstract

The application discloses an artificial intelligence based image classification method, apparatus, device and medium. The method includes: acquiring an ultrasonic image of a region to be detected, wherein the ultrasonic image comprises a three-dimensional video image and a two-dimensional frequency spectrum image; inputting the three-dimensional video image and the two-dimensional frequency spectrum image into corresponding feature extraction network models respectively to obtain video features and frequency spectrum features; and fusing the video features and the frequency spectrum features to obtain fusion features, and determining the type of the region to be detected according to the fusion features, so that the type of the region to be detected can be effectively determined.

Description

Image classification method, device, equipment and medium based on artificial intelligence
Technical Field
The present disclosure relates generally to the field of computer image processing, more specifically to ultrasound image processing, and in particular to an artificial intelligence based image classification method, apparatus, device, and medium.
Background
Echocardiography applies the principle of ultrasonic distance measurement: pulsed ultrasonic waves penetrate the chest wall and soft tissues to measure the periodic motion of the underlying structures such as cardiac walls, ventricles and valves. Echocardiograms contain multi-modal information such as videos and images and are the most common means of clinically evaluating cardiac structure and function. Accurate echocardiographic analysis is therefore of great significance to the medical field.
Disclosure of Invention
In view of the deficiencies of the prior art, it is desirable to provide an artificial intelligence based image classification method, apparatus, device and medium that can effectively determine the type of the region to be detected.
In a first aspect, an embodiment of the present application provides an image classification method based on artificial intelligence, including the following steps:
acquiring an ultrasonic image of a region to be detected, wherein the ultrasonic image comprises a three-dimensional video image and a two-dimensional frequency spectrum image;
inputting the three-dimensional video image and the two-dimensional frequency spectrum image into corresponding feature extraction network models respectively to obtain video features and frequency spectrum features;
and fusing the video features and the frequency spectrum features to obtain fusion features, and determining the type of the region to be detected according to the fusion features.
In some embodiments, the inputting the three-dimensional video image and the two-dimensional spectrum image into the corresponding feature extraction network models respectively includes:
performing image processing on the two-dimensional frequency spectrum image to obtain a waveform profile of frequency spectrum ripples in the two-dimensional frequency spectrum image;
acquiring a flow speed-time curve corresponding to the area to be detected according to the waveform profile;
and inputting the flow velocity-time curve into a feature extraction network model corresponding to the two-dimensional frequency spectrum image.
In some embodiments, a three-dimensional convolutional neural network is employed as the feature extraction network model corresponding to the three-dimensional video image;
and adopting a long short-term memory (LSTM) network model as the feature extraction network model corresponding to the two-dimensional frequency spectrum image.
In some embodiments, the types include a normal type and an exception type, the method further comprising:
when the area to be detected is of an abnormal type, inputting the two-dimensional frequency spectrum image into a segmentation neural network model to obtain a waveform mask of the frequency spectrum ripple;
acquiring a measuring line of the frequency spectrum ripple according to the position information corresponding to the region to be detected and the waveform mask;
acquiring a measurement parameter corresponding to the area to be detected according to the measurement line;
and determining the abnormal degree of the area to be detected according to the measurement parameters.
In some embodiments, when the position information corresponding to the region to be detected is a mitral valve, the obtaining a measurement line of the spectral ripple according to the position information corresponding to the region to be detected and the waveform mask includes:
inputting the waveform mask into an HRNet model to obtain key points of the frequency spectrum ripple;
and drawing a measuring line of the frequency spectrum ripple by using the key point.
In some embodiments, the determining the degree of abnormality of the region to be detected according to the measurement parameter includes:
identifying a measurement parameter interval in which the measurement parameter is located;
and taking the abnormal degree corresponding to the measurement parameter interval as the abnormal degree of the area to be detected.
In a second aspect, an embodiment of the present application provides an artificial intelligence-based image classification apparatus, including:
the acquisition module is used for acquiring an ultrasonic image of a region to be detected, wherein the ultrasonic image comprises a three-dimensional video image and a two-dimensional frequency spectrum image;
the extraction module is used for respectively inputting the three-dimensional video image and the two-dimensional frequency spectrum image into corresponding feature extraction network models to obtain video features and frequency spectrum features;
and the determining module is used for fusing the video features and the frequency spectrum features to obtain fusion features, and determining the type of the region to be detected according to the fusion features.
In a third aspect, embodiments of the present application provide an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method described in the embodiments of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method as described in the embodiments of the present application.
The image classification method based on artificial intelligence obtains an ultrasonic image of a region to be detected, wherein the ultrasonic image comprises a three-dimensional video image and a two-dimensional frequency spectrum image; the three-dimensional video image and the two-dimensional frequency spectrum image are respectively input into corresponding feature extraction network models to obtain video features and frequency spectrum features; the video features and the frequency spectrum features are fused to obtain fusion features; and the type of the region to be detected is determined according to the fusion features. The method can therefore make full use of the characteristics of the three-dimensional video image and the two-dimensional frequency spectrum image to classify and identify the type of the region to be detected, can draw on richer feature information, and effectively improves the accuracy and reliability of identifying the type of the region to be detected. Moreover, because the three-dimensional video image and the two-dimensional frequency spectrum image are analyzed by deep learning models, the diagnosis speed can be greatly increased, interference and manual errors caused by subjective factors can be avoided, and the consistency and repeatability of results are effectively improved.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 shows a flow diagram of an artificial intelligence based image classification method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the structure of the three-dimensional convolutional neural network model S3D in one embodiment of the present application;
FIG. 3 is a schematic diagram illustrating the structure of the two-dimensional convolutional neural network model Xception in one embodiment of the present application;
FIG. 4 illustrates a schematic diagram of an artificial intelligence based image classification method according to an embodiment of the present application;
FIG. 5 shows a flow diagram of an artificial intelligence based image classification method of another embodiment of the present application;
FIG. 6 shows a schematic structural diagram of a UNet model in an embodiment of the present application;
FIG. 7 is a schematic diagram of the structure of the HRNet model in one embodiment of the present application;
FIG. 8 shows the corresponding measurement line for the mitral valve;
FIG. 9 shows the corresponding measurement lines of the aortic valve;
FIG. 10 is a schematic structural diagram of an artificial intelligence-based image classification apparatus according to an embodiment of the present application;
fig. 11 shows a schematic structural diagram of a computer system suitable for implementing the electronic device or the server according to the embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
In recent years, neural network deep learning technology in the field of artificial intelligence has developed rapidly. The technology simulates the human neural network, using cascaded layers of neurons to learn abstract features at different levels from data and perform classification prediction. It has made remarkable progress in fields such as speech recognition and image recognition.
At present, clinicians often need years of training and accumulated experience to accurately read the physiological information expressed in an echocardiogram. In practice, reading accuracy is strongly influenced by professional experience and subjective impression, so repeatability is low and manual error is large; manual analysis is also time-consuming and labor-intensive.
In view of this, the present application provides an artificial intelligence based image classification method, apparatus, device and medium, so as to accurately classify and identify echocardiograms through neural network models.
As shown in fig. 1, the image classification method based on artificial intelligence in the embodiment of the present application includes the following steps:
step 101, obtaining an ultrasonic image of a region to be detected, wherein the ultrasonic image comprises a three-dimensional video image and a two-dimensional frequency spectrum image.
In the embodiment of the present application, the region to be detected may be a heart valve region of a patient; specifically, it may be any one of the four heart valves: the mitral valve, aortic valve, tricuspid valve or pulmonary valve.
Further, for different valves, the corresponding ultrasonic images are the section images specified in the medical field for that region. For example, when the region to be detected is the mitral valve, the corresponding ultrasonic images include three-dimensional video images of the parasternal long-axis two-dimensional section and the apical four-chamber two-dimensional section, and a two-dimensional spectrum image of the apical four-chamber mitral valve; when the region to be detected is the aortic valve, the corresponding ultrasonic images include three-dimensional video images of the parasternal long-axis two-dimensional section and the parasternal aortic short-axis two-dimensional section, and a two-dimensional spectrum image of the apical five-chamber aortic valve.
It should be noted that an originally acquired echocardiogram usually contains different sections of several valve positions of the patient's heart at the same time. The original echocardiogram therefore needs to be classified by section, so that the ultrasonic images corresponding to the actual region to be detected can be screened out and noise from irrelevant section images reduced. Since the ultrasonic image comprises a three-dimensional video image and a two-dimensional spectrum image, the two must be classified separately. Optionally, for the three-dimensional video image, a trained three-dimensional convolutional neural network may be used; specifically, the three-dimensional convolutional neural network S3D (Separable 3D CNN), whose structure is shown in fig. 2, can perform the section classification. Alternatively, several frames can be randomly extracted from the original echocardiogram and input into a trained two-dimensional convolutional neural network Xception for section classification, after which the per-frame classification results are summarized and voted on to obtain the final section classification of the three-dimensional video image; the structure of the two-dimensional convolutional neural network model Xception is shown in fig. 3. For the two-dimensional spectrum image, the trained two-dimensional convolutional neural network model Xception can be used directly for section classification.
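For illustration, the following is a minimal Python sketch of the frame-sampling-and-voting scheme described above, assuming a PyTorch setting; the model_2d argument is a placeholder for any trained two-dimensional view classifier (such as the Xception network mentioned here), not the patent's exact implementation.

```python
import torch
from collections import Counter

def classify_view_by_voting(video, model_2d, num_frames=10):
    """Classify an echo clip's section (view) by per-frame voting.

    video: tensor of shape (T, C, H, W) holding the clip's frames.
    model_2d: a trained 2-D CNN mapping (N, C, H, W) -> (N, num_views)
    logits, standing in for the Xception classifier (an assumption).
    """
    t = video.shape[0]
    idx = torch.randperm(t)[:min(num_frames, t)]   # random frame sample
    with torch.no_grad():
        logits = model_2d(video[idx])
    preds = logits.argmax(dim=1).tolist()          # per-frame view labels
    # Summarize by majority vote to get the clip's final section class.
    return Counter(preds).most_common(1)[0][0]
```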
It should be understood that several deep learning models with the same structure can be adopted in the embodiments of the application: the models can be trained on different training sets so that each classifies a different type of image, or a single model can be trained on several image types simultaneously. For example, the application may provide two two-dimensional convolutional neural networks Xception, one trained on frame images from the three-dimensional video images so that it can classify random frames of a three-dimensional video image by section, and the other trained on two-dimensional spectrum images so that it can classify them by section; alternatively, only one two-dimensional convolutional neural network Xception may be provided, trained on the frame images from the three-dimensional video images and on the two-dimensional spectrum images at the same time.
And 102, respectively inputting the three-dimensional video image and the two-dimensional frequency spectrum image into corresponding feature extraction network models to obtain video features and frequency spectrum features.
In the embodiment of the application, a three-dimensional convolutional neural network is adopted as the feature extraction network model corresponding to the three-dimensional video image; the three-dimensional convolutional neural network may be an S3D model. A long short-term memory (LSTM) network model is adopted as the feature extraction network model corresponding to the two-dimensional spectrum image. This choice reflects the fact that processing the spectrum image yields flow velocity-time data, and the LSTM model is well suited to processing time-series data and to carrying information effectively across long sequences.
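As a hedged illustration of this design choice, the sketch below shows how a flow velocity-time curve could be fed to an LSTM to produce a fixed-length spectrum feature vector; the layer sizes and curve length are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SpectrumLSTM(nn.Module):
    """LSTM feature extractor for a flow velocity-time curve."""

    def __init__(self, hidden_dim=128):
        super().__init__()
        # One scalar per time step: the blood-flow velocity.
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_dim,
                            num_layers=2, batch_first=True)

    def forward(self, curve):
        # curve: (batch, T) velocity samples -> (batch, T, 1)
        _, (h_n, _) = self.lstm(curve.unsqueeze(-1))
        return h_n[-1]        # last hidden state as the spectrum feature

feat = SpectrumLSTM()(torch.randn(1, 300))   # -> shape (1, 128)
```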
Further, as shown in fig. 4, for any region to be detected, a feature extraction network branch may be provided for each of its section types; these branches together form the feature extraction network model corresponding to the three-dimensional video image. After the three-dimensional video image of at least one section type corresponding to the region to be detected is obtained, each three-dimensional video image is input into the three-dimensional convolutional neural network S3D corresponding to its section classification (obtained by training on three-dimensional video images of that section classification). For example, when the region to be detected is the mitral valve, two three-dimensional convolutional neural networks S3D are provided: one extracts the video features of the parasternal long-axis two-dimensional section (three-dimensional video section 1) to obtain video features 1, and the other extracts the video features of the apical four-chamber two-dimensional section (three-dimensional video section 2) to obtain video features 2.
In some embodiments, section data may be missing due to issues such as the physician's scanning technique. In this case, the three-dimensional video image of the missing section type can be replaced by an all-zero matrix. It should be understood that the three-dimensional video image of at least one section must be preserved. During model training, on the premise that at least one original input section is retained, several sections are randomly removed and replaced by all-zero matrices as input, simulating incomplete section data in practice so that the trained model can cope with missing data. For example, when the region to be detected is the mitral valve, the three-dimensional video images of the parasternal long-axis two-dimensional section and the apical four-chamber two-dimensional section can both be obtained, or only either one of them; when both are missing, video feature extraction cannot be completed. Therefore, the application can still meet the requirement of video feature extraction when some section classifications are missing, making the method closer to the actual situation and giving it a wider application range.
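A minimal sketch of the all-zero substitution, assuming each section's video is held as a PyTorch tensor; the tensor shape and section names are placeholders.

```python
import torch

def prepare_section_inputs(sections, shape=(3, 16, 112, 112)):
    """Substitute an all-zero tensor for each missing section.

    sections: dict mapping section name -> video tensor, or None when
    that section was not acquired. The tensor shape is a placeholder.
    """
    if all(v is None for v in sections.values()):
        # At least one real section must be preserved.
        raise ValueError("all sections missing: feature extraction impossible")
    return {name: (v if v is not None else torch.zeros(shape))
            for name, v in sections.items()}

# e.g. apical four-chamber view missing for a mitral-valve study:
inputs = prepare_section_inputs({
    "parasternal_long_axis": torch.randn(3, 16, 112, 112),
    "apical_four_chamber": None,
})
```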
For a two-dimensional spectrum image, inputting the two-dimensional spectrum image to a corresponding feature extraction network model, comprising: performing image processing on the two-dimensional frequency spectrum image to obtain a waveform profile of frequency spectrum ripples in the two-dimensional frequency spectrum image; acquiring a flow velocity-time curve corresponding to the area to be detected according to the waveform profile; and inputting the flow velocity-time curve into a feature extraction network model corresponding to the two-dimensional frequency spectrum image.
Optionally, the two-dimensional spectrum image is processed with image processing methods such as the opening and closing operations to obtain the waveform profile of the spectral ripple, i.e., the coordinate positions of the pixels that trace the waveform in the image. The ratio between pixel distance and physical units is then obtained from the original data of the two-dimensional spectrum image, and the waveform profile is converted according to this ratio to obtain the flow velocity-time curve corresponding to the region to be detected. The flow velocity-time curve is input into the feature extraction network model corresponding to the two-dimensional spectrum image to obtain the spectrum features.
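The sketch below illustrates one plausible form of this processing with OpenCV; the Otsu thresholding step, the calibration arguments, and the convention that flow lies above the baseline are assumptions rather than details given in the description.

```python
import cv2
import numpy as np

def spectrum_to_velocity_curve(gray_img, vel_per_px, baseline_row):
    """Convert a grayscale Doppler spectrum image to a velocity-time curve.

    vel_per_px: velocity represented by one pixel row (e.g. cm/s per
    pixel), read from the image's calibration data; baseline_row is the
    row index of the zero-velocity baseline. Both are assumptions about
    how the original data are calibrated.
    """
    # Binarize, then clean the ripple with morphological opening/closing.
    _, mask = cv2.threshold(gray_img, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # For each column (one time step), take the outermost ripple pixel
    # and convert its distance from the baseline to a velocity.
    velocities = []
    for col in mask.T:
        rows = np.flatnonzero(col)
        if rows.size == 0:
            velocities.append(0.0)          # no signal in this column
        else:
            velocities.append((baseline_row - rows.min()) * vel_per_px)
    return np.asarray(velocities)
```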
And 103, fusing the video features and the frequency spectrum features to obtain fusion features, and determining the type of the region to be detected according to the fusion features.
The video features and the spectrum features can be fused by splicing (concatenation), summation and other methods. The types include a normal type and an abnormal type. In the present embodiment, the abnormal type indicates heart valve stenosis, such as mitral valve stenosis or aortic valve stenosis.
Further, after the fusion features are obtained, they are input into a classifier, which classifies them to obtain the probability that the region to be detected is abnormal, thereby determining the type of the region to be detected.
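For illustration, a concatenation-based fusion head might look as follows; the feature dimensions and the two-layer classifier are assumptions.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Concatenate video and spectrum features, then classify."""

    def __init__(self, fused_dim, num_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(fused_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, video_feats, spectrum_feat):
        # video_feats: list of per-section vectors, e.g. [video feature 1,
        # video feature 2]; spectrum_feat: (B, D_s). Fusion by splicing.
        fused = torch.cat(video_feats + [spectrum_feat], dim=1)
        return self.head(fused).softmax(dim=1)   # per-class probability

# fused_dim = 2 video branches of 512 dims + 128-dim spectrum feature
clf = FusionClassifier(fused_dim=2 * 512 + 128)
probs = clf([torch.randn(1, 512), torch.randn(1, 512)], torch.randn(1, 128))
```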
The image classification method based on artificial intelligence obtains an ultrasonic image of a region to be detected, wherein the ultrasonic image comprises a three-dimensional video image and a two-dimensional frequency spectrum image, the three-dimensional video image and the two-dimensional frequency spectrum image are respectively input into corresponding feature extraction network models to obtain video features and frequency spectrum features, the video features and the frequency spectrum features are fused to obtain fusion features, and the type of the region to be detected is determined according to the fusion features. Therefore, the method and the device can fully utilize the characteristics of the three-dimensional video image and the two-dimensional frequency spectrum image to carry out classification and identification on the type of the region to be detected, can contain richer characteristic information, and effectively improve the accuracy and the reliability of identification on the type of the region to be detected. In addition, the three-dimensional video image and the two-dimensional frequency spectrum image are analyzed by using the deep learning model, so that the diagnosis speed can be greatly increased, interference and artificial errors caused by subjective factors can be avoided, and the consistency and repeatability of results are effectively improved.
In some embodiments, as shown in fig. 5, the artificial intelligence based image classification method further includes:
step 201, when the region to be detected is of an abnormal type, inputting the two-dimensional frequency spectrum image into the segmented neural network model to obtain a waveform mask of the frequency spectrum ripple.
The segmentation neural network model is a 5-layer UNet model, as shown in fig. 6.
Step 202, obtaining a measurement line of the spectrum ripple according to the position information corresponding to the region to be detected and the waveform mask.
It should be noted that the medically specified measurement line differs between regions to be detected. For example, when the region to be detected is the mitral valve, the measurement line is the line connecting the maximum peak of early-diastolic blood flow and the lowest point of the descending slope along the spectrum; when the region to be detected is the aortic valve, the measurement line is the envelope curve of the maximum flow velocity of the aortic valve systolic blood flow spectrum.
When the position information corresponding to the region to be detected is the mitral valve, acquiring the measurement line of the spectral ripple according to the position information and the waveform mask comprises: inputting the waveform mask into the HRNet model to obtain the key points of the spectral ripple, and drawing the measurement line of the spectral ripple using the key points.
The structure of the HRNet model is shown in fig. 7. The key points of the spectral ripple are the maximum peak in early diastole and the lowest point of the descending slope along the spectrum; the output of the model can be represented as heat maps.
It should be understood that grading the degree of stenosis is a quantitative judgment and therefore requires high image processing precision. The HRNet model is used to extract high-precision key points for determining the measurement line; compared with the prior practice, in which a physician selects key points by visual observation on the ultrasound instrument, precision is greatly improved.
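The sketch below shows the usual way such heat-map outputs are decoded into coordinates and joined into a measurement line; it assumes one heat-map channel per key point and that the two key points occur at distinct time positions.

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps):
    """Decode key-point coordinates from HRNet-style heat maps.

    heatmaps: array of shape (K, H, W), one channel per key point;
    here K = 2 (early-diastolic peak, lowest point of the slope).
    """
    points = []
    for hm in heatmaps:
        row, col = np.unravel_index(np.argmax(hm), hm.shape)
        points.append((int(col), int(row)))   # (time axis, velocity axis)
    return points

def measurement_line(p_peak, p_low):
    """Slope and intercept of the line joining the two key points."""
    (x1, y1), (x2, y2) = p_peak, p_low
    slope = (y2 - y1) / (x2 - x1)              # assumes x2 != x1
    return slope, y1 - slope * x1
```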
When the position information corresponding to the region to be detected is the aortic valve, the edge line of the waveform mask produced by the segmentation neural network model can be used as the measurement line of the spectral ripple.
And 203, acquiring a measurement parameter corresponding to the area to be detected according to the measurement line.
When the position information corresponding to the region to be detected is the mitral valve, the measurement parameter is the effective mitral valve orifice area; when it is the aortic valve, the measurement parameters are the maximum flow velocity and the average pressure difference.
And step 204, determining the abnormal degree of the region to be detected according to the measurement parameters.
In some embodiments, a measurement parameter interval is identified in which the measurement parameter is located; and taking the abnormal degree corresponding to the measurement parameter interval as the abnormal degree of the area to be detected.
That is, a measurement parameter interval corresponding to the abnormal degree may be set, and when the measurement parameter is greater than the minimum value of the measurement parameter interval and is less than or equal to the maximum value of the measurement parameter interval, it is determined that the measurement parameter is in the measurement parameter interval, and the abnormal degree corresponding to the measurement parameter interval is used as the abnormal degree of the region to be detected.
The abnormal degree may include mild stenosis, moderate stenosis, and severe stenosis, among others.
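A minimal sketch of this interval lookup follows; the interval boundaries are hypothetical values taken from the mitral valve grading thresholds given later in the description, and the half-open convention matches the preceding paragraph.

```python
def degree_from_intervals(value, intervals):
    """Map a measurement parameter onto an abnormality degree.

    intervals: list of (min, max, degree) tuples; `value` falls in an
    interval when min < value <= max, matching the convention above.
    """
    for lo, hi, degree in intervals:
        if lo < value <= hi:
            return degree
    return "unclassified"

# Hypothetical mitral-valve-area intervals (cm^2), taken from the
# grading thresholds given later in the description:
MVA_INTERVALS = [
    (1.5, 2.0, "mild stenosis"),
    (1.0, 1.5, "moderate stenosis"),
    (0.0, 1.0, "severe stenosis"),
]
print(degree_from_intervals(1.2, MVA_INTERVALS))   # -> moderate stenosis
```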
It should be understood that when the position information corresponding to the region to be detected is the aortic valve, there are two measurement parameters; the more severe of the two grades derived from the maximum flow velocity and the average pressure difference can be taken as the final degree of abnormality.
In this way, the measurement line is extracted from the waveform mask by deep learning, which effectively improves the accuracy of the measurement parameters.
The area to be detected is described in detail below as an example of a mitral valve.
An original echocardiogram of a patient is acquired; it contains three-dimensional video images and two-dimensional spectrum images. The three-dimensional video images are classified by section using the three-dimensional convolutional neural network model S3D, or, alternatively, 10 frames are randomly extracted from the three-dimensional video sequence and input into the model to obtain per-frame section classifications, which are then summarized and voted on to obtain the section of each video. The two-dimensional spectrum images of the echocardiogram are classified by section using the two-dimensional convolutional neural network model Xception. From the classified sections are extracted the three-dimensional video images of the parasternal long-axis two-dimensional section and the apical four-chamber two-dimensional section, and the two-dimensional spectrum image of the continuous-wave Doppler spectrum of the apical four-chamber mitral valve.
The three-dimensional video images of the parasternal long-axis two-dimensional section and the apical four-chamber two-dimensional section are respectively input into two corresponding three-dimensional convolutional neural network models S3D to obtain the video features 1 of the parasternal long-axis two-dimensional section and the video features 2 of the apical four-chamber two-dimensional section. If either three-dimensional video image is missing, it is replaced by an all-zero matrix. The two-dimensional spectrum image of the continuous-wave Doppler spectrum of the apical four-chamber mitral valve contains mitral valve blood flow information; it is processed with image processing methods such as the opening and closing operations to obtain the waveform profile on the spectrum image, and the flow velocity-time curve is obtained by combining this with the image calibration information. The flow velocity-time curve is then processed by the long short-term memory network (LSTM) to extract the spectrum features. The video features 1, the video features 2 and the spectrum features are fused by splicing to obtain the fusion features, which are input into the classifier to obtain the classification result of the region to be detected, i.e., normal type or abnormal type. Here the abnormal type indicates that the mitral valve has a stenosis abnormality.
Then, the two-dimensional spectrum image is input into the segmentation neural network (the 5-layer UNet model) to obtain the waveform mask of the forward blood flow spectrum of the mitral valve. The target waveform is segmented from the two-dimensional spectrum image according to the waveform mask and input into the HRNet model for key point detection, yielding the blood flow velocity peak and the lowest point of the descending slope along the spectrum, and the measurement line is drawn by connecting these two points. The maximum pressure difference is obtained from the maximum flow velocity by the simplified Bernoulli equation, the pressure half time (PHT) is calculated from the descending slope of the measurement line, and the effective mitral valve area (MVA) is calculated from the PHT. If the MVA is 1.5-2.0 cm², mild mitral stenosis is determined; otherwise, if the MVA is 1.0-1.5 cm², moderate mitral stenosis is determined; otherwise, if the MVA is less than 1.0 cm², severe mitral stenosis is determined.
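For illustration, the mitral valve grading just described can be sketched as follows, using the standard clinical formulas (simplified Bernoulli ΔP = 4v², PHT from the deceleration slope, and the empirical relation MVA = 220/PHT); treating these as the exact computation intended here is an assumption.

```python
import math

def grade_mitral_stenosis(v_max, decel_slope):
    """Grade mitral stenosis from the measurement-line key points.

    v_max: peak early-diastolic velocity (m/s); decel_slope: descending
    slope of the measurement line (m/s per second). Formulas: simplified
    Bernoulli dP = 4 * v**2, PHT as the time for velocity to fall from
    v_max to v_max / sqrt(2), and the empirical MVA = 220 / PHT(ms).
    """
    max_dp = 4.0 * v_max ** 2                           # mmHg
    dv = v_max * (1.0 - 1.0 / math.sqrt(2.0))           # velocity drop, m/s
    pht_ms = 1000.0 * dv / decel_slope                  # pressure half time
    mva = 220.0 / pht_ms                                # valve area, cm^2
    if 1.5 < mva <= 2.0:
        degree = "mild stenosis"
    elif 1.0 < mva <= 1.5:
        degree = "moderate stenosis"
    elif mva <= 1.0:
        degree = "severe stenosis"
    else:
        degree = "no significant stenosis"
    return max_dp, pht_ms, mva, degree
```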
The region to be detected is described in detail below as an example of an aortic valve.
An original echocardiogram of a patient is acquired; it contains three-dimensional video images and two-dimensional spectrum images. The three-dimensional video images are classified by section using the three-dimensional convolutional neural network model S3D, or, alternatively, 10 frames are randomly extracted from the three-dimensional video sequence and input into the model to obtain per-frame section classifications, which are then summarized and voted on to obtain the section of each video. The two-dimensional spectrum images of the echocardiogram are classified by section using the two-dimensional convolutional neural network model Xception. From the classified sections are extracted the three-dimensional video images of the parasternal long-axis two-dimensional section and the parasternal aortic short-axis two-dimensional section, and the two-dimensional spectrum image of the continuous-wave Doppler spectrum of the apical five-chamber aortic valve.
The shape and motion of the aortic valve can be observed in the three-dimensional video images of the parasternal long-axis two-dimensional section and the parasternal aortic short-axis two-dimensional section. These are respectively input into two corresponding three-dimensional convolutional neural network models S3D to obtain the video features 1 of the parasternal long-axis two-dimensional section and the video features 2 of the parasternal aortic short-axis two-dimensional section. If either three-dimensional video image is missing, it is replaced by an all-zero matrix. The two-dimensional spectrum image of the continuous-wave Doppler spectrum of the apical five-chamber aortic valve contains aortic valve blood flow information; it is processed with image processing methods such as the opening and closing operations to obtain the waveform profile on the spectrum image, and the flow velocity-time curve is obtained by combining this with the image calibration information. The flow velocity-time curve is then processed by the long short-term memory network (LSTM) to extract the spectrum features. The video features 1, the video features 2 and the spectrum features are fused by splicing to obtain the fusion features, which are input into the classifier to obtain the classification result of the region to be detected, i.e., normal type or abnormal type. Here the abnormal type indicates that the aortic valve has a stenosis abnormality.
Then, the two-dimensional spectrum image is input into the segmentation neural network (the 5-layer UNet model) to obtain the waveform mask of the forward blood flow spectrum of the aortic valve. The target waveform is segmented from the two-dimensional spectrum image according to the waveform mask, and its envelope curve is extracted as the measurement line, as shown in fig. 9; the measurement line traces the peak of maximum flow velocity. Flow velocity-time data along this peak are obtained, the pressure difference at each point is calculated by the simplified Bernoulli equation, and the values are averaged to obtain the average pressure difference. If the maximum flow velocity is 2.6-3.0 m/s or the average pressure difference is less than 20 mmHg, mild aortic stenosis is determined; otherwise, if the maximum flow velocity is 3.0-4.0 m/s or the average pressure difference is 20-40 mmHg, moderate aortic stenosis is determined; otherwise, if the maximum flow velocity is greater than 4.0 m/s or the average pressure difference is greater than 40 mmHg, severe aortic stenosis is determined.
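A corresponding sketch for the aortic valve, assuming the envelope has already been converted to velocities in m/s; the cut-off values are those listed above, and checking from the most severe grade downward implements the "most serious result wins" rule.

```python
import numpy as np

def grade_aortic_stenosis(envelope_m_s):
    """Grade aortic stenosis from the systolic velocity envelope.

    envelope_m_s: 1-D array of flow velocities (m/s) sampled along
    the measurement line. Pressure differences use the simplified
    Bernoulli equation dP = 4 * v**2 (mmHg); the per-point values
    are averaged to give the average pressure difference.
    """
    v = np.asarray(envelope_m_s, dtype=float)
    v_max = float(v.max())
    mean_dp = float(np.mean(4.0 * v ** 2))
    # Check from most to least severe, so the more serious of the
    # two parameters decides the final grade.
    if v_max > 4.0 or mean_dp > 40.0:
        return v_max, mean_dp, "severe stenosis"
    if v_max > 3.0 or mean_dp >= 20.0:
        return v_max, mean_dp, "moderate stenosis"
    if v_max >= 2.6:                  # mild band: 2.6-3.0 m/s
        return v_max, mean_dp, "mild stenosis"
    return v_max, mean_dp, "no significant stenosis"
```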
The method provided by the application thus classifies the echocardiogram by section, comprehensively judges whether a valve is stenotic using the three-dimensional video images and the two-dimensional spectrum image, and extracts the measurement lines related to blood flow velocity and pressure difference using deep learning models. This multi-modal feature fusion image processing method matches the chart-reading process of a clinician, and the abnormality parameters extracted by the deep neural networks are consistent with clinical evaluation methods; fitting the clinical diagnosis process as closely as possible gives the method high reliability and high accuracy.
In summary, the method and the device can make full use of the characteristics of the three-dimensional video image and the two-dimensional frequency spectrum image to classify and identify the type of the region to be detected, can draw on richer feature information, and effectively improve the accuracy and reliability of identifying the type of the region to be detected. Moreover, because the three-dimensional video image and the two-dimensional frequency spectrum image are analyzed by deep learning models, the diagnosis speed can be greatly increased, interference and manual errors caused by subjective factors can be avoided, and the consistency and repeatability of results are effectively improved.
It should be noted that while the operations of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results.
With further reference to FIG. 10, there is shown an exemplary block diagram of an artificial intelligence based image classification apparatus 10 according to an embodiment of the present application.
As shown in fig. 10, the artificial intelligence based image classification apparatus 10 includes:
the acquisition module 11 is configured to acquire an ultrasonic image of an area to be detected, where the ultrasonic image includes a three-dimensional video image and a two-dimensional spectrum image;
the extraction module 12 is configured to input the three-dimensional video image and the two-dimensional spectrum image to corresponding feature extraction network models respectively to obtain a video feature and a spectrum feature;
and the determining module 13 is configured to fuse the video feature and the spectrum feature to obtain a fusion feature, and determine the type of the region to be detected according to the fusion feature.
In some embodiments, the extraction module 12 is further configured to:
performing image processing on the two-dimensional frequency spectrum image to obtain a waveform profile of frequency spectrum ripples in the two-dimensional frequency spectrum image;
acquiring a flow velocity-time curve corresponding to the area to be detected according to the waveform profile;
and inputting the flow velocity-time curve into a feature extraction network model corresponding to the two-dimensional frequency spectrum image.
In some embodiments, a three-dimensional convolutional neural network is used as a feature extraction network model corresponding to a three-dimensional video image;
and adopting a long short-term memory (LSTM) network model as the feature extraction network model corresponding to the two-dimensional spectrum image.
In some embodiments, the determining module 13 is further configured to:
when the area to be detected is of an abnormal type, inputting the two-dimensional frequency spectrum image into a segmentation neural network model to obtain a waveform mask of frequency spectrum ripples;
acquiring a measuring line of the frequency spectrum ripple according to the position information corresponding to the region to be detected and the waveform mask;
acquiring a measurement parameter corresponding to the area to be detected according to the measurement line;
and determining the abnormal degree of the region to be detected according to the measurement parameters.
In some embodiments, when the position information corresponding to the region to be detected is a mitral valve, the determining module 13 is further configured to:
inputting the waveform mask into an HRNet model to obtain key points of frequency spectrum ripple;
and drawing a measuring line of the frequency spectrum ripple by using the key point.
In some embodiments, the determining module 13 is further configured to:
identifying a measurement parameter interval in which the measurement parameter is located;
and taking the abnormal degree corresponding to the measurement parameter interval as the abnormal degree of the area to be detected.
It should be understood that the units or modules recited in the artificial intelligence based image classification apparatus 10 correspond to the steps of the method described with reference to fig. 1. Thus, the operations and features described above for the method apply equally to the artificial intelligence based image classification apparatus 10 and the units it contains, and are not repeated here. The apparatus 10 may be implemented in advance in a browser or other application of the electronic device, or may be loaded into the browser or other application by downloading or the like. The corresponding units in the apparatus 10 may cooperate with units in the electronic device to implement the solutions of the embodiments of the present application.
The division into several modules or units mentioned in the above detailed description is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functions of two or more modules or units described above may be embodied in a single module or unit, and conversely the features and functions of one module or unit described above may be further divided among several modules or units.
In summary, the method and the device can make full use of the characteristics of the three-dimensional video image and the two-dimensional frequency spectrum image to classify and identify the type of the region to be detected, can draw on richer feature information, and effectively improve the accuracy and reliability of identifying the type of the region to be detected. Moreover, because the three-dimensional video image and the two-dimensional frequency spectrum image are analyzed by deep learning models, the diagnosis speed can be greatly increased, interference and manual errors caused by subjective factors can be avoided, and the consistency and repeatability of results are effectively improved.
Referring now to fig. 11, which shows a schematic diagram of a computer system suitable for implementing the electronic device or server of an embodiment of the present application.
as shown in fig. 11, the computer system 1100 includes a Central Processing Unit (CPU) 1101, which can perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 1102 or a program loaded from a storage section 1108 into a Random Access Memory (RAM) 1103. In the RAM1103, various programs and data necessary for operation instructions of the system are also stored. The CPU1101, ROM1102, and RAM1103 are connected to each other by a bus 1104. An input/output (I/O) interface 1105 is also connected to bus 1104.
The following components are connected to the I/O interface 1105: an input portion 1106 including a keyboard, a mouse, and the like; an output portion 1107 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 1108 including a hard disk and the like; and a communication section 1109 including a network interface card such as a LAN card or a modem. The communication section 1109 performs communication processing via a network such as the Internet. A drive 1110 is also connected to the I/O interface 1105 as necessary. A removable medium 1111, such as a magnetic disk, an optical disk, a magneto-optical disk or a semiconductor memory, is mounted on the drive 1110 as necessary, so that a computer program read from it can be installed into the storage section 1108 as needed.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program containing program code for performing the methods illustrated by the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 1109 and/or installed from the removable medium 1111. The above-described functions defined in the system of the present application are executed when the computer program is executed by the central processing unit (CPU) 1101.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave; such a propagated data signal may take many forms, including, but not limited to, electromagnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium other than a computer readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, fiber optic cable, RF, or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operational instructions of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present application may be implemented by software or hardware. The described units or modules may also be provided in a processor, and may be described as: a processor includes an acquisition module, an extraction module, and a determination module. The names of these units or modules do not in some cases form a limitation on the units or modules themselves, for example, the acquisition module may also be described as "acquiring an ultrasound image of an area to be detected, the ultrasound image including a three-dimensional video image and a two-dimensional spectrum image".
As another aspect, the present application also provides a computer-readable storage medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable storage medium stores one or more programs which, when executed by one or more processors, perform the artificial intelligence based image classification method described herein.
The above description presents only preferred embodiments of the application and an illustration of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combination of features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure; for example, arrangements in which the above features are replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (9)

1. An image classification method based on artificial intelligence, characterized by comprising the following steps:
acquiring an ultrasonic image of a region to be detected, wherein the ultrasonic image comprises a three-dimensional video image and a two-dimensional frequency spectrum image;
inputting the three-dimensional video image and the two-dimensional frequency spectrum image into corresponding feature extraction network models respectively to obtain video features and frequency spectrum features;
and fusing the video features and the frequency spectrum features to obtain fusion features, and determining the type of the region to be detected according to the fusion features.
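
For illustration only, the fusion-and-classification step of claim 1 could look like the following PyTorch sketch; the branch architectures, feature dimensions, and class count are assumptions for the sketch, not limitations of the claim.

```python
# Minimal sketch: two modality feature branches fused by concatenation,
# then a linear classifier over the fused feature. All sizes illustrative.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, video_dim=512, spectrum_dim=128, num_classes=2):
        super().__init__()
        self.video_branch = nn.Sequential(nn.Linear(video_dim, 256), nn.ReLU())
        self.spectrum_branch = nn.Sequential(nn.Linear(spectrum_dim, 256), nn.ReLU())
        self.classifier = nn.Linear(256 + 256, num_classes)  # fused -> type

    def forward(self, video_feat, spectrum_feat):
        # Fuse the two modality features by concatenation (one common choice).
        fused = torch.cat([self.video_branch(video_feat),
                           self.spectrum_branch(spectrum_feat)], dim=1)
        return self.classifier(fused)  # logits over region types

# e.g. logits = FusionClassifier()(torch.randn(4, 512), torch.randn(4, 128))
```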
2. The artificial intelligence based image classification method according to claim 1, wherein the inputting the three-dimensional video image and the two-dimensional frequency spectrum image into the corresponding feature extraction network models respectively comprises:
performing image processing on the two-dimensional frequency spectrum image to obtain a waveform profile of frequency spectrum ripples in the two-dimensional frequency spectrum image;
acquiring a flow velocity-time curve corresponding to the region to be detected according to the waveform profile;
and inputting the flow velocity-time curve into a feature extraction network model corresponding to the two-dimensional frequency spectrum image.
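
As an illustrative reading of claim 2, the sketch below thresholds the spectrogram into a crude waveform profile and takes the topmost bright pixel of each time column as the envelope; the threshold value and pixel-to-velocity scale are assumptions, and a real system could use contour extraction instead.

```python
# Sketch: 2-D frequency spectrum image -> flow velocity-time curve.
import numpy as np

def velocity_time_curve(spectrum_img, threshold=0.3, cm_per_px=0.5):
    # spectrum_img: 2-D array in [0, 1]; rows = velocity axis (row 0 assumed
    # to be the highest velocity), columns = time axis.
    mask = spectrum_img > threshold              # crude waveform profile
    curve = np.zeros(spectrum_img.shape[1])
    for t in range(spectrum_img.shape[1]):
        rows = np.flatnonzero(mask[:, t])
        if rows.size:                            # topmost bright pixel
            curve[t] = (spectrum_img.shape[0] - rows[0]) * cm_per_px
    return curve                                 # velocity per time column
```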
3. The artificial intelligence based image classification method according to claim 1, further comprising:
adopting a three-dimensional convolutional neural network as the feature extraction network model corresponding to the three-dimensional video image;
and adopting a long short-term memory (LSTM) network model as the feature extraction network model corresponding to the two-dimensional frequency spectrum image.
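
A minimal sketch of the two extractors this claim names is given below: a small three-dimensional CNN over the video clip and an LSTM over the flow velocity-time sequence. All layer and feature sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VideoExtractor(nn.Module):
    # 3-D convolution over (batch, channel, time, height, width) clips.
    def __init__(self, out_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))             # -> (B, 16, 1, 1, 1)
        self.fc = nn.Linear(16, out_dim)

    def forward(self, clip):                     # clip: (B, 1, T, H, W)
        return self.fc(self.conv(clip).flatten(1))

class SpectrumExtractor(nn.Module):
    # LSTM over the 1-D velocity-time curve from the spectrum image.
    def __init__(self, out_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=out_dim, batch_first=True)

    def forward(self, curve):                    # curve: (B, L, 1)
        _, (h, _) = self.lstm(curve)
        return h[-1]                             # last hidden state as feature
```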
4. The artificial intelligence based image classification method according to claim 1, wherein the type to which the region to be detected belongs includes a normal type and an abnormal type, and the method further comprises:
when the region to be detected is of the abnormal type, inputting the two-dimensional frequency spectrum image into a segmentation neural network model to obtain a waveform mask of the frequency spectrum ripple;
acquiring a measurement line of the frequency spectrum ripple according to the position information corresponding to the region to be detected and the waveform mask;
acquiring a measurement parameter corresponding to the region to be detected according to the measurement line;
and determining the degree of abnormality of the region to be detected according to the measurement parameter.
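
As a hedged sketch of this measurement step, the code below reads a simplified "measurement line" off the waveform mask: the vertical segment through the topmost mask pixel, converted to a peak velocity. The velocity scale is an assumption, and real measurement lines would depend on the detected position (see claim 5).

```python
import numpy as np

def peak_velocity_from_mask(mask, m_per_px=0.01):
    # mask: boolean waveform mask from the segmentation network; row 0 is
    # assumed to be the highest velocity on the Doppler axis.
    rows, _ = np.nonzero(mask)
    if rows.size == 0:
        return 0.0                       # no waveform detected
    # Stand-in measurement line: vertical segment through the topmost
    # mask pixel, read off as a peak velocity in m/s.
    return float((mask.shape[0] - rows.min()) * m_per_px)
```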
5. The artificial intelligence based image classification method according to claim 4, wherein when the position information corresponding to the region to be detected indicates a mitral valve, the acquiring the measurement line of the frequency spectrum ripple according to the position information corresponding to the region to be detected and the waveform mask comprises:
inputting the waveform mask into an HRNet model to obtain key points of the frequency spectrum ripple;
and drawing a measurement line of the frequency spectrum ripple by using the key points.
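
A sketch of the line-drawing step follows; the keypoint predictor is passed in as a stand-in callable (a real implementation would use a trained HRNet model, as the claim states), and the two-point form of the measurement line is an assumption.

```python
import numpy as np
import cv2

def draw_measurement_line(mask, keypoint_model):
    # keypoint_model: callable returning [(x1, y1), (x2, y2)] pixel
    # coordinates of two key points on the frequency spectrum ripple.
    (x1, y1), (x2, y2) = keypoint_model(mask)
    canvas = cv2.cvtColor((mask * 255).astype(np.uint8), cv2.COLOR_GRAY2BGR)
    cv2.line(canvas, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
    return canvas                        # mask with the measurement line drawn
```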
6. The artificial intelligence based image classification method according to claim 4, wherein the determining the degree of abnormality of the region to be detected according to the measurement parameter comprises:
identifying a measurement parameter interval in which the measurement parameter is located;
and taking the degree of abnormality corresponding to the measurement parameter interval as the degree of abnormality of the region to be detected.
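
The interval lookup of this claim reduces to a bisection over the interval edges, as in the sketch below; the edges and degree labels are illustrative values, not clinical thresholds from the application.

```python
import bisect

def degree_from_interval(param, edges=(1.5, 2.5),
                         labels=("mild", "moderate", "severe")):
    # Identify the measurement parameter interval containing param and
    # return that interval's degree of abnormality.
    return labels[bisect.bisect_left(edges, param)]

# e.g. degree_from_interval(2.1) -> "moderate" under the assumed edges
```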
7. An image classification device based on artificial intelligence, comprising:
the acquisition module is used for acquiring an ultrasonic image of a region to be detected, wherein the ultrasonic image comprises a three-dimensional video image and a two-dimensional frequency spectrum image;
the extraction module is used for respectively inputting the three-dimensional video image and the two-dimensional frequency spectrum image into corresponding feature extraction network models to obtain video features and frequency spectrum features;
and the determining module is used for fusing the video features and the frequency spectrum features to obtain fusion features, and determining the type of the region to be detected according to the fusion features.
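
The three claimed modules could be held by one object, as in this minimal sketch; the constructor arguments are assumed stand-ins for the trained networks of claims 1 to 3.

```python
class ImageClassificationDevice:
    def __init__(self, image_source, video_net, spectrum_net, fusion_head):
        self.image_source = image_source   # acquisition module
        self.video_net = video_net         # extraction module (video branch)
        self.spectrum_net = spectrum_net   # extraction module (spectrum branch)
        self.fusion_head = fusion_head     # determination module

    def classify(self, region_id):
        clip, spectrum = self.image_source(region_id)
        return self.fusion_head(self.video_net(clip),
                                self.spectrum_net(spectrum))
```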
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the artificial intelligence based image classification method as claimed in any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the artificial intelligence based image classification method according to any one of claims 1 to 6.
CN202110237602.9A 2021-03-04 2021-03-04 Image classification method, device, equipment and medium based on artificial intelligence Active CN112597982B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110237602.9A CN112597982B (en) 2021-03-04 2021-03-04 Image classification method, device, equipment and medium based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN112597982A (en) 2021-04-02
CN112597982B (en) 2021-05-28

Family

ID=75210321

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110237602.9A Active CN112597982B (en) 2021-03-04 2021-03-04 Image classification method, device, equipment and medium based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN112597982B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104881680A (en) * 2015-05-25 2015-09-02 电子科技大学 Alzheimer's disease and mild cognitive impairment identification method based on two-dimension features and three-dimension features
US20190065897A1 (en) * 2017-08-28 2019-02-28 Boe Technology Group Co., Ltd. Medical image analysis method, medical image analysis system and storage medium
CN112116562A (en) * 2020-08-26 2020-12-22 重庆市中迪医疗信息科技股份有限公司 Method, device, equipment and medium for detecting focus based on lung image data

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114091507A (en) * 2021-09-02 2022-02-25 北京医准智能科技有限公司 Ultrasonic focus area detection method and device, electronic equipment and storage medium
CN114376603A (en) * 2022-01-07 2022-04-22 乐普(北京)医疗器械股份有限公司 Two-dimensional spectrum Doppler ultrasonic cardiogram image processing method and device
WO2023130661A1 (en) * 2022-01-07 2023-07-13 乐普(北京)医疗器械股份有限公司 Method and apparatus for processing two-dimensional spectral doppler echocardiographic image
CN114376603B (en) * 2022-01-07 2023-11-28 乐普(北京)医疗器械股份有限公司 Processing method and device for two-dimensional spectrum Doppler ultrasound cardiac image
CN117110722A (en) * 2023-10-20 2023-11-24 中国科学院长春光学精密机械与物理研究所 Pulse width measuring method and device

Similar Documents

Publication Publication Date Title
CN112597982B (en) Image classification method, device, equipment and medium based on artificial intelligence
CN113194836B (en) Automated clinical workflow
US7925064B2 (en) Automatic multi-dimensional intravascular ultrasound image segmentation method
JP6842481B2 (en) 3D quantitative analysis of the retinal layer using deep learning
US20110243401A1 (en) System and method for image sequence processing
CN112837306B (en) Coronary artery disease lesion functional quantitative method based on deep learning and mesopic theory
CN111275755B (en) Mitral valve orifice area detection method, system and equipment based on artificial intelligence
CN111297399B (en) Fetal heart positioning and fetal heart rate extraction method based on ultrasonic video
CN111612756B (en) Coronary artery specificity calcification detection method and device
US20220012875A1 (en) Systems and Methods for Medical Image Diagnosis Using Machine Learning
Sirjani et al. Automatic cardiac evaluations using a deep video object segmentation network
CN114782358A (en) Method and device for automatically calculating blood vessel deformation and storage medium
CN111340794A (en) Method and device for quantifying coronary artery stenosis
Singh et al. Good view frames from ultrasonography (USG) video containing ONS diameter using state-of-the-art deep learning architectures
CN116452579B (en) Chest radiography image-based pulmonary artery high pressure intelligent assessment method and system
CN116664592A (en) Image-based arteriovenous blood vessel separation method and device, electronic equipment and medium
CN114010227B (en) Right ventricle characteristic information identification method and device
CN114898882A (en) Method and system for ultrasound-based assessment of right heart function
CN113592802B (en) Mitral valve annular displacement automatic detection system based on ultrasonic image
Huang et al. POST-IVUS: A perceptual organisation-aware selective transformer framework for intravascular ultrasound segmentation
CN115482223A (en) Image processing method, image processing device, storage medium and electronic equipment
Hernanda et al. Semantic segmentation of venous on deep vein thrombosis (DVT) case using UNet-ResNet
CN114864095A (en) Analysis method for blood circulation change of narrow coronary artery under combination of multiple exercise strengths
CN115035028A (en) Left ventricular ejection fraction automatic calculation method based on ultrasonic image
CN113689469A (en) Method for automatically identifying ultrasonic contrast small liver cancer focus and ultrasonic system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant