CN111227864B - Device for detecting lesions using ultrasound images and computer vision - Google Patents

Info

Publication number
CN111227864B
CN111227864B (application CN202010029034.9A)
Authority
CN
China
Prior art keywords
image
lesion
nodule
ultrasound
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010029034.9A
Other languages
Chinese (zh)
Other versions
CN111227864A (en)
Inventor
钱林学
卜云芸
庞浩
刘涛
Current Assignee
Individual
Original Assignee
Individual
Application filed by Individual
Priority to CN202010029034.9A
Publication of CN111227864A
Application granted
Publication of CN111227864B
Legal status: Active
Anticipated expiration

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 — Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/52 — Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/5207 — Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
    • A61B 8/5292 — Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves using additional data, e.g. patient information, image labeling, acquisition parameters
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/0002 — Inspection of images, e.g. flaw detection
    • G06T 7/0012 — Biomedical image inspection
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 7/73 — Determining position or orientation of objects or cameras using feature-based methods
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 — Arrangements for image or video recognition or understanding
    • G06V 10/20 — Image preprocessing
    • G06V 10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 — Image acquisition modality
    • G06T 2207/10132 — Ultrasound image
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20092 — Interactive image processing based on input by user
    • G06T 2207/20104 — Interactive definition of region of interest [ROI]
    • G06T 2207/30 — Subject of image; Context of image processing
    • G06T 2207/30004 — Biomedical image processing
    • G06T 2207/30096 — Tumor; Lesion
    • G06V 2201/00 — Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 — Recognition of patterns in medical or anatomical images
    • G06V 2201/032 — Recognition of patterns in medical or anatomical images of protuberances, polyps, nodules, etc.
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The application relates to a device that uses computer vision on ultrasound images to detect lesions. The device directly acquires the video signal from medical ultrasound equipment to obtain ultrasound images; performs lesion detection on each frame in real time, separating the foreground (lesions) from the background (normal tissue) and marking each detected lesion area; uses the marks produced by lesion detection to obtain, from the original ultrasound images, screenshots of all suspected lesions generated by the current scan, arranged in order of scan time; judges whether each suspected lesion is a real lesion and classifies the confirmed lesions; and reads the scan information of the ultrasound equipment and the patient's background information, performing a comprehensive judgment over all images from the scan confirmed as lesions. The invention adapts to current ultrasound diagnosis scenarios, effectively assists doctors in identifying lesions in video frames, and improves diagnostic efficiency and accuracy.

Description

Device for detecting lesions using ultrasound images and computer vision
Technical Field
The present application relates to the field of ultrasound image analysis, and in particular to a device that applies computer vision technology to ultrasound images to detect lesions.
Background
Ultrasound examination (US examination) observes the reflection of ultrasonic waves by the human body: the body is insonified with weak ultrasonic waves and the waves reflected by tissues are imaged. Ultrasound has become an important non-invasive method for examining the structure and motor function of human organs. Medical ultrasound equipment is inexpensive and is used in hospitals and physical examination centers at all levels; its low cost has made ultrasound examination an important means of early screening and diagnosis for many diseases.
In the broader field of medical imaging, many techniques already use computers for auxiliary diagnosis. For example, CN109222859A describes an intelligent endoscope imaging system with an AI-assisted diagnosis function: it builds an endoscope system that transmits images to a computer for intelligent analysis and returns the analysis to the endoscope system for the doctor's reference. At present, however, few auxiliary diagnostic techniques are applied specifically to ultrasound examination scenarios. For example, patent CN206365899U describes an ultrasound auxiliary diagnosis system intended to reduce doctors' labor intensity and assist diagnosis, rather than one that uses computer technology for the diagnosis itself.
In clinical practice, one factor behind missed diagnoses by sonographers is that a lesion image flashing past on screen goes unnoticed: when only a few frames of the scanned video contain a lesion, it is difficult for the doctor to perceive it. Moreover, limited by the imaging principle, ultrasound imaging of tissue is not particularly clear, and whether the probe sweeps across a lesion during the examination depends entirely on the doctor's technique. The doctor must continuously adjust the scanning angle while paying close attention to the screen, so the labor intensity is high and the examination result depends heavily on the doctor's experience.
Existing methods and equipment are not suited to the ultrasound diagnosis scenario and do not make full use of available computer technology; fleeting lesion images that go unnoticed remain a cause of misdiagnosis and missed diagnosis, since a lesion present in only a few video frames is hard to perceive. The existing approach therefore cannot fundamentally reduce doctors' workload or improve diagnostic efficiency and accuracy.
Disclosure of Invention
To overcome the problems in the related art at least to some extent, the application provides a method and a device for detecting lesions in ultrasound images using computer vision.
According to a first aspect of the embodiments of the present application, there is provided a method for lesion detection in ultrasound images using computer vision, comprising the following steps:
directly acquiring the video signal from medical ultrasound equipment to obtain ultrasound images;
performing lesion detection on each frame of the ultrasound image in real time, separating foreground from background, with lesions of all kinds treated as foreground and normal tissue as background; marking each detected lesion area, the marked area being a suspected lesion area;
processing the real-time video signal while returning the processed video frames containing the marks for real-time display;
using the marks obtained by lesion detection to obtain, from the original ultrasound images, screenshots of all suspected lesions generated by the current scan, arranged in order of scan time;
judging whether the lesion in each suspected-lesion screenshot is a real lesion, and classifying the confirmed lesions;
and reading the scan information of the ultrasound equipment and the patient's background information, and performing a comprehensive judgment combining all images from the scan confirmed as lesions.
Further, per-frame lesion detection processes images faster than the refresh rate of the real-time video data; and during real-time lesion detection, the maximum number of detected foreground regions on a single ultrasound image is capped at 5.
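The real-time constraint above — per-frame processing faster than the video refresh — can be checked with a simple timing budget (a minimal sketch; the 30 fps default and `trials` count are assumed examples, not figures from the patent):

```python
import time

def meets_realtime_budget(process_frame, frame, fps=30.0, trials=10):
    """Return True if process_frame handles one frame faster, on average,
    than the video refresh interval (1/fps seconds)."""
    start = time.perf_counter()
    for _ in range(trials):
        process_frame(frame)
    avg = (time.perf_counter() - start) / trials
    return avg < 1.0 / fps
```

In practice the detector would be profiled on real ultrasound frames at the device's actual refresh rate.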
Further, the comprehensive judgment step includes nodule keyframe extraction, as follows:
1) the ultrasound video signal is input to the nodule detection module as a sequence of frames;
2) the nodule detection module processes each frame in real time, detects whether the frame contains a nodule and, if so, records the coordinates of the nodule's circumscribed (bounding) rectangle; the processing results of all video frames are recorded;
3) keyframes are extracted from the recorded results:
first, the number of distinct nodules in the video is determined from the distances between nodule center points, which are computed from the bounding-rectangle coordinates of consecutive frames; then, among all bounding rectangles detected for each nodule, the frame with the largest bounding-rectangle diagonal is selected as that nodule's keyframe.
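Steps 1)–3) can be sketched as follows. This is a minimal illustration only: the distance threshold `max_center_dist` and the `(frame_idx, box)` data layout are assumptions for the example, not values taken from the patent.

```python
import math

def group_and_select_keyframes(detections, max_center_dist=50.0):
    """Group per-frame nodule detections into distinct nodules by
    center-point distance, then pick each nodule's keyframe: the frame
    whose bounding rectangle has the largest diagonal.

    detections: list of (frame_idx, (x1, y1, x2, y2)) bounding rectangles.
    Returns: list of (keyframe_idx, box), one entry per nodule.
    """
    nodules = []  # each element: list of (frame_idx, box) for one nodule
    for frame_idx, box in detections:
        x1, y1, x2, y2 = box
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        for group in nodules:
            _, (gx1, gy1, gx2, gy2) = group[-1]  # most recent box in group
            gcx, gcy = (gx1 + gx2) / 2, (gy1 + gy2) / 2
            if math.hypot(cx - gcx, cy - gcy) <= max_center_dist:
                group.append((frame_idx, box))
                break
        else:
            nodules.append([(frame_idx, box)])  # start a new nodule

    keyframes = []
    for group in nodules:
        # largest diagonal = largest cross-section, per the patent's rule
        frame_idx, box = max(
            group, key=lambda fb: math.hypot(fb[1][2] - fb[1][0],
                                             fb[1][3] - fb[1][1]))
        keyframes.append((frame_idx, box))
    return keyframes
```

The number of groups gives the nodule count; the per-group maximum gives each nodule's keyframe.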
Further, in the comprehensive judgment step, the nodule property of each extracted keyframe must be judged. The keyframe nodule-property judgment proceeds as follows:
the part of the keyframe outside the nodule's bounding rectangle is reset to black;
the processed keyframe image is used to judge the nodule's properties;
an image feature vector for the lesion is obtained from the keyframe image, where
a single lesion may yield several keyframe images, and an image feature vector can be extracted from each single image; the mean and standard deviation of the feature vectors over the lesion's keyframe images are computed and concatenated, giving the final image feature vector for the lesion.
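The mean-and-standard-deviation fusion of several keyframe feature vectors described above can be sketched as follows (a sketch only; the vector dimensionality is arbitrary here):

```python
import numpy as np

def aggregate_lesion_features(keyframe_vectors):
    """Fuse feature vectors from several keyframes of one lesion into a
    single lesion-level vector: the per-dimension mean and standard
    deviation, concatenated."""
    v = np.asarray(keyframe_vectors, dtype=np.float64)  # (n_frames, dim)
    return np.concatenate([v.mean(axis=0), v.std(axis=0)])
```

The output dimension is fixed at twice the per-frame feature dimension, regardless of how many keyframes the lesion produced.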
Further, for a single lesion, a feature vector is derived from the patient's structured data and concatenated with the image feature vector extracted from the single image, yielding a comprehensive data-analysis conclusion for the patient;
for multiple lesions, the structured-data feature vector is concatenated with the mean vector and standard-deviation vector over all lesions' image feature vectors, yielding a comprehensive data-analysis conclusion for the patient.
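The single-lesion versus multi-lesion concatenation ("in series") might look like the following sketch; the exact layout is an assumption for illustration, with mean/standard-deviation pooling giving a fixed input size regardless of lesion count:

```python
import numpy as np

def patient_level_vector(structured_vec, lesion_vecs):
    """Build the per-patient input to the final judgment.

    Single lesion: structured-data vector concatenated with that lesion's
    image feature vector. Multiple lesions: structured-data vector
    concatenated with the per-dimension mean and standard deviation
    across all lesion vectors."""
    s = np.asarray(structured_vec, dtype=np.float64)
    L = np.asarray(lesion_vecs, dtype=np.float64)  # (n_lesions, dim)
    if len(L) == 1:
        return np.concatenate([s, L[0]])
    return np.concatenate([s, L.mean(axis=0), L.std(axis=0)])
```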
Further, the step of obtaining the image feature vector for the lesion from the keyframe image further comprises:
preprocessing the input ultrasound image, which is a two- or three-dimensional matrix;
the keyframe nodule-property judgment step is provided with a feature extraction and transformation part, an image feature vector part, and a solver part;
the feature extraction and transformation part is a convolutional neural network model; after its transformation, the image feature vector part turns the ultrasound image matrix data into a one-dimensional vector, which is the feature vector of the original image;
and the solver part takes the image feature vector as input and combines it with the patient's structured-data feature vector to produce the final output.
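The three parts can be mocked end to end as below. This is only a stand-in: the block-average "extractor" substitutes for the patent's convolutional network, and the linear-sigmoid "solver" is an assumed placeholder for the actual classification solver.

```python
import numpy as np

def extract_feature_vector(image):
    """Stand-in for the convolutional feature extractor: crude 4x4
    block-average pooling that flattens an ultrasound image matrix into
    a one-dimensional 16-element feature vector. A real implementation
    would be a CNN backbone ending in global pooling."""
    img = np.asarray(image, dtype=np.float64)
    if img.ndim == 3:           # color: length x width x channels
        img = img.mean(axis=2)  # collapse channels to grayscale
    h, w = img.shape
    return np.array([img[i*h//4:(i+1)*h//4, j*w//4:(j+1)*w//4].mean()
                     for i in range(4) for j in range(4)])

def solver(image_vec, structured_vec, weights, bias=0.0):
    """Placeholder solver: concatenate image features with the patient's
    structured-data features and apply a linear decision rule."""
    x = np.concatenate([image_vec, structured_vec])
    score = float(x @ weights + bias)
    return 1.0 / (1.0 + np.exp(-score))  # probability-like output
```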
According to a second aspect of the embodiments of the present application, there is provided a device for lesion detection in ultrasound images using computer vision technology, configured to implement the method described above, the device comprising: an ultrasound video signal acquisition module for directly acquiring the video signal from medical ultrasound equipment to obtain ultrasound images;
lesion detection equipment for detecting lesions in each ultrasound frame in real time, treating lesions of all kinds as foreground and normal tissue as background, marking each detected lesion area (the marked area being a suspected lesion area), and returning the processed video frames containing the marks for real-time display;
comprehensive diagnosis equipment that uses the marks produced by the lesion detection equipment to automatically obtain, from the original ultrasound images in the lesion detection equipment, screenshots of all suspected lesions generated by the current scan, arranged in order of scan time;
an image screening and classification module, arranged inside the comprehensive diagnosis equipment, for secondary confirmation of the suspected lesions detected by the lesion detection module, distinguishing real lesions from false positives and classifying the confirmed lesions;
and a comprehensive judgment module for reading the scan information of the ultrasound equipment and the patient's background information and performing a comprehensive judgment combining all images from the scan confirmed as lesions.
Further, a nodule keyframe extraction module and a keyframe nodule-property judgment module are arranged in the comprehensive judgment module, wherein
the nodule detection module processes each frame in real time, detects whether the frame contains a nodule and, if so, records the coordinates of the nodule's bounding rectangle, recording the processing results of all video frames;
keyframes are extracted from the recorded results: first, the number of distinct nodules in the video is determined from the distances between nodule center points, computed from the bounding-rectangle coordinates of consecutive frames; then, among all bounding rectangles detected for each nodule, the frame with the largest bounding-rectangle diagonal is selected as the keyframe;
the keyframe nodule-property judgment module judges the nodule property of each extracted keyframe image and extracts the keyframe image's feature vector; it is configured with the following functions:
the processed keyframe image is input to the keyframe nodule-property judgment module to judge the nodule's properties;
an image feature vector for the lesion is obtained from the keyframe image, where
a single lesion may yield several keyframe images, and an image feature vector can be extracted from each single image; the mean and standard deviation of the feature vectors over the lesion's keyframe images are computed and concatenated, giving the final image feature vector for the lesion.
Further, the device also comprises a structured data module for deriving a feature vector from the patient's structured data;
for a single lesion, the structured-data feature vector is concatenated with the image feature vector extracted from the single image, yielding a comprehensive data-analysis conclusion for the patient;
for multiple lesions, the structured-data feature vector is concatenated with the mean vector and standard-deviation vector over all lesions' image feature vectors, yielding a comprehensive data-analysis conclusion for the patient.
Further, the keyframe nodule-property judgment step is further provided with a feature extraction and transformation part, an image feature vector part, and a solver part:
the preprocessed ultrasound image is input to the feature extraction and transformation part, a convolutional neural network model; after its transformation, the image feature vector part turns the ultrasound image matrix data into a one-dimensional vector; and the solver part takes this image feature vector as input, combining it with the patient's structured-data feature vector to produce the final output.
The technical scheme provided by the embodiments of the application can have the following beneficial effects:
1) Existing methods and equipment are not suited to the ultrasound diagnosis scenario and do not make full use of available computer technology; in clinical practice, fleeting lesion images that go unnoticed are one factor behind sonographers' misdiagnoses and missed diagnoses, since a lesion present in only a few video frames is hard to perceive, and the existing approach cannot fundamentally reduce doctors' workload or improve diagnostic efficiency and accuracy. The present method processes the real-time video signal while returning the processed video frames containing the marks for real-time display: every frame is processed, and each detected lesion area is framed with a conspicuously colored rectangle to alert the doctor to a suspected lesion. While viewing the returned video, the doctor clearly perceives the colored box (mark) suggesting a possible lesion, and is thus effectively assisted in identifying lesions in the video frames.
2) In the invention, the nodule detection module processes each frame in real time, keyframes are extracted, the nodule property of each keyframe is judged, and the keyframe's image feature vector is then combined with the patient's structured-data vector to obtain a comprehensive data-analysis conclusion for the patient. Compared with the existing practice of judging by the doctor's experience alone, the detection method and device of the invention effectively improve diagnostic efficiency and accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart of the method for detecting lesions in ultrasound images using computer vision according to the invention;
FIG. 2 is a flow chart of the nodule detection method of the invention;
FIG. 3 is a flow chart of the keyframe extraction method of the invention;
FIG. 4 is a flow chart of keyframe nodule-property judgment according to the invention;
FIG. 5 is a schematic diagram of deriving feature vectors from a patient's structured data according to the invention;
FIG. 6 is a schematic diagram of obtaining an image feature vector for a lesion from a keyframe image according to the invention;
FIG. 7 is a schematic diagram of a lesion detection device according to the invention;
FIG. 8 is a schematic diagram of a lesion detection system according to the invention.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numbers in different drawings refer to the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with some aspects of the application as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a lesion detection method according to an exemplary embodiment. As shown in fig. 1, the method for lesion detection in ultrasound images using computer vision includes the following steps.
S1, directly acquiring the video signal from medical ultrasound equipment to obtain ultrasound images;
S2, performing lesion detection on each frame in real time, separating foreground from background, with lesions of all kinds as foreground and normal tissue as background; marking each detected lesion area by framing it with a conspicuously colored rectangle, the marked area being a suspected lesion area;
S3, processing the real-time video signal while returning the processed video frames containing the marks (rectangles) for real-time display;
S4, using the rectangles obtained by lesion detection to acquire, from the original ultrasound images, screenshots of all suspected lesions generated by the current scan, arranged in order of scan time;
S5, judging whether the lesion in each suspected-lesion screenshot is a real lesion, and accurately classifying the confirmed lesions;
S6, reading the scan information of the ultrasound equipment and the patient's background information, and performing a comprehensive judgment combining all images from the scan confirmed as lesions.
For the detection method of this embodiment, it should be added that per-frame lesion detection processes images faster than the refresh rate of the real-time video data, and that during real-time detection the maximum number of foreground regions detected on a single ultrasound image is capped at 5. Owing to the specifics of the ultrasound detection scenario, the number of lesions that can appear on one ultrasound image is very limited; therefore, to increase speed, the maximum number of detectable foreground regions is set to 5 here, i.e., on one ultrasound image the lesion detection model need only propose at most 5 possible suspicious lesion areas.
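The cap of 5 foreground proposals amounts to a top-k filter over the detector's per-frame output. A minimal sketch, where the `(confidence, box)` tuple layout is an assumption:

```python
def limit_foreground(detections, k=5):
    """Keep at most k suspected lesion regions for one ultrasound frame,
    preferring the highest-confidence boxes (the patent caps k at 5).

    detections: list of (confidence, box) tuples for one frame."""
    return sorted(detections, key=lambda d: d[0], reverse=True)[:k]
```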
In the detection method of this embodiment, the lesion image must first be confirmed, i.e., whether an image frame contains a nodule is detected; it should be noted that this step only detects whether a nodule is present. Keyframes are then extracted from the nodule-containing frames, after which the keyframes undergo property judgment and feature-vector extraction, described in detail as follows:
as shown in fig. 2, as a preferred implementation, the ultrasound video signal is input to the nodule detection module in the form of continuous frames in this example; the method comprises the steps that a nodule detection module processes each frame of image in real time, detects whether a nodule is contained in the frame of image, and records the coordinates of the circumscribed rectangle of the nodule if the nodule is contained in the frame of image; in the process, recording the processing results of all video frames;
as shown in fig. 3, in this embodiment, according to the processing result of the record on all video frames, a key frame is extracted from the video frames in a specific manner: firstly, according to the coordinate distance of the center point of the nodule, which is converted from the coordinates of the continuous circumscribed rectangle, the number of the nodules commonly detected in the video is obtained (a plurality of nodules are commonly detected); for all detected bounding rectangles for each nodule, the frame with the largest bounding rectangle diagonal distance is selected as the keyframe.
The following supplementary explanation is needed here: a total of several nodules were detected in the video first obtained, after which further processing was done for each nodule. The nodule is a three-dimensional structure, and the outline of the nodule is irregular, that is, the section of the nodule may be distributed in a plurality of video frames, in order to facilitate detection and provide better assistance for doctors, the largest section of the nodule is selected, and the largest section can be selected by adopting the modes of the maximum diameter, the longest periphery and the like of the nodule, in this embodiment, the mode of selecting the frame image with the largest diagonal distance and the largest section as the key frame is provided.
In this embodiment, a frame with the largest diagonal distance of the rectangle circumscribed by the nodule is selected as the key frame, but the present invention is not limited thereto.
As a preferred embodiment, the comprehensive judgment step of this embodiment performs nodule-property judgment on the extracted keyframes, as shown in fig. 4. The keyframe nodule-property judgment steps are as follows:
the part of the keyframe outside the nodule's bounding rectangle is reset to black;
the processed keyframe image is used to judge the nodule's properties;
an image feature vector for the lesion is obtained from the keyframe image, where,
as shown in fig. 5, for a single lesion, a feature vector is derived from the patient's structured data and concatenated with the image feature vector extracted from the single image, yielding the comprehensive data-analysis conclusion for the patient;
for multiple lesions, lesion 1 is analyzed to obtain an image-analysis result for lesion 1, and likewise lesion N is analyzed to obtain an image-analysis result for lesion N. This embodiment also comprises a structured data module for deriving a feature vector from the patient's structured data; the structured data includes the patient's height, weight, gender, medical history, test results, and so on. The structured-data feature vector is concatenated with the mean and standard deviation of the image feature vectors over all lesions, yielding a comprehensive data-analysis conclusion for the patient.
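The first step above — blacking out everything outside the nodule's bounding rectangle before property judgment — can be sketched as follows (inclusive box coordinates assumed for the example):

```python
import numpy as np

def mask_outside_nodule(keyframe, box):
    """Reset every pixel outside the nodule's circumscribed rectangle to
    black, so the property classifier sees only the nodule region."""
    x1, y1, x2, y2 = box  # inclusive corner coordinates
    out = np.zeros_like(keyframe)
    out[y1:y2+1, x1:x2+1] = keyframe[y1:y2+1, x1:x2+1]
    return out
```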
As shown in fig. 6, the step of obtaining the image feature vector for the lesion from the keyframe image further includes:
preprocessing the input ultrasound image, which is a two- or three-dimensional matrix: a grayscale image is two-dimensional (length x width), a color image is three-dimensional (length x width x color channels), and each pixel's value is an integer, namely the pixel's gray value or color value, such as an RGB value.
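A minimal preprocessing sketch matching this description — accepting either matrix shape and rescaling integer pixel values to [0, 1]. The 255 divisor assumes 8-bit pixels, which is an assumption not stated in the patent:

```python
import numpy as np

def preprocess(image):
    """Accept a 2-D grayscale matrix (length x width) or a 3-D color
    matrix (length x width x channels) of integer pixel values and
    rescale to float values in [0, 1]."""
    img = np.asarray(image, dtype=np.float64)
    assert img.ndim in (2, 3), "expected a 2-D or 3-D image matrix"
    return img / 255.0
```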
The preprocessed image data is input into a model comprising a feature extraction and transformation part, an image feature vector part, and a solver part;
the feature extraction and transformation part is a convolutional neural network model; after its transformation, the image feature vector part turns the ultrasound image matrix data into a one-dimensional vector, which is the feature vector of the original image;
and the solver part (a classification solver or a regression solver) takes the image feature vector as input and produces the final output. The image feature vector is an important intermediate variable in the model: its quality directly determines the quality of the final output.
The method processes the real-time video signal while returning the processed video frames containing the marks for real-time display: every frame is processed, and each detected lesion area is framed with a conspicuously colored rectangle to alert the doctor to a suspected lesion. While viewing the returned video, the doctor clearly perceives the colored box (mark) suggesting a possible lesion and is effectively assisted in identifying lesions in the video frames. In the invention, the nodule detection module processes each frame in real time, keyframes are extracted, their nodule properties are judged, and the keyframes' image feature vectors are then combined with the patient's structured-data vector to obtain a comprehensive data-analysis conclusion for the patient; compared with the existing practice of judging by the doctor's experience alone, the detection method and device of the invention effectively improve diagnostic efficiency and accuracy.
Fig. 7 is an illustration of an apparatus for lesion detection using computer vision techniques using ultrasound images, according to an exemplary embodiment. Referring to fig. 7 and 8, the apparatus for implementing the method for detecting a lesion as described above includes an ultrasonic workstation and a background computer system in which a lesion detection device is disposed; the apparatus further comprises:
the ultrasonic video signal acquisition module is used for collecting video signals directly from the medical ultrasonic equipment to obtain ultrasonic images, and transmitting the signals simultaneously to the ultrasonic workstation and the background computer system; in this embodiment, a video splitter collects the video signal directly from the medical ultrasonic equipment: one path is transmitted to the ultrasonic workstation for original video display, and the other path is transmitted to the lesion detection device.
The lesion detection device is used for performing lesion detection on each frame of ultrasonic image in real time, treating the various lesions in the ultrasonic image as foreground and normal tissue as background; the detected lesion areas are marked as suspected lesion areas, and the processed video frames containing the marks are simultaneously returned and displayed in real time. As shown in fig. 7, the display shows the auxiliary diagnostic result, displaying the processed, marked video frames in real time. A red rectangular frame is used for marking in the present embodiment; this marking manner is merely provided as an example and is not limiting.
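The marking step can be sketched as drawing a colored border directly into the pixel array of each video frame. The function below is illustrative only (a production system would typically call something like OpenCV's `cv2.rectangle`); the coordinate convention and default thickness are assumptions:

```python
import numpy as np

def draw_box(frame: np.ndarray, x0: int, y0: int, x1: int, y1: int,
             color=(255, 0, 0), thickness=2) -> np.ndarray:
    """Draw a colored rectangle (default red) around a suspected lesion
    area on an H x W x 3 video frame; returns a marked copy."""
    out = frame.copy()
    out[y0:y0 + thickness, x0:x1] = color   # top edge
    out[y1 - thickness:y1, x0:x1] = color   # bottom edge
    out[y0:y1, x0:x0 + thickness] = color   # left edge
    out[y0:y1, x1 - thickness:x1] = color   # right edge
    return out
```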
The comprehensive diagnosis equipment automatically acquires, from the original ultrasonic images in the lesion detection device, screenshots of all suspected lesions generated by the current scan, using the marks (rectangular frames) obtained from lesion detection; the screenshots are arranged in order of their scan generation time;
the comprehensive diagnosis equipment is internally provided with an image screening and classifying module, which is used for secondarily identifying the suspected lesions detected by the lesion detection module, distinguishing whether they are real lesions, and accurately performing secondary classification on the confirmed lesions. It should be noted that the secondary classification here is mainly driven by clinical needs, for example: distinguishing benign from malignant nodules, or carcinoma in situ from invasive carcinoma.
And the comprehensive judgment module is used for reading the scanning information of the ultrasonic equipment and the additional background information (such as the height, weight, sex, medical history, test result and the like of the patient) of the patient, and comprehensively judging by combining all the images confirmed as the focus acquired by the scanning to obtain a focus detection result.
It should be noted that in clinical practice, one factor in missed diagnoses by sonographers is failure to notice a lesion image that flashes by briefly. When only a few frames of the scanned video contain the lesion, it is difficult for the physician to perceive it. Therefore, it is necessary to construct a lesion detection model that directly processes the ultrasound video data and gives the doctor a special prompt when a lesion appears. This is the first problem to be solved.
To solve the first problem, a set of devices is designed to directly acquire video signals from medical ultrasound equipment and transmit the signals to both the ultrasound workstation and the background computer system. In addition, the apparatus includes equipment responsible for running the lesion detection model.
Lesion detection must prioritize processing speed: the lesion detection model must process images faster than the refresh rate of the real-time video data, and the model runs on dedicated lesion detection equipment to ensure real-time processing.
The lesion detection step treats the various lesions in the ultrasonic image as foreground and normal tissue as background; this step only requires separating foreground from background. Because of the specificity of the ultrasound detection scenario, the number of lesions that may appear in the same ultrasound image is very limited. Therefore, to increase speed, the maximum number of detectable foreground regions is set to 5 here; that is, in one ultrasound image, the lesion detection model need only suggest at most 5 possible suspicious lesion areas.
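The cap of 5 foreground regions per frame can be applied as a simple post-filter on the detector's output. A sketch, assuming each detection carries a confidence score (the score field is an assumption; the patent does not describe how the 5 regions are chosen):

```python
def top_k_detections(detections, k=5):
    """Keep at most k suspected lesion regions per frame.

    detections: list of (confidence, box) pairs from the lesion
    detector. Selecting the k highest-confidence detections is an
    illustrative choice.
    """
    return sorted(detections, key=lambda d: d[0], reverse=True)[:k]
```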
The ultrasonic detection model must return the processed result in real time while processing the real-time video; what is returned is the video stream after the original video has been processed. The model processes each frame of image, framing any detected lesion area with a conspicuously colored rectangle to alert doctors to suspected lesions. When viewing the returned video, the doctor can clearly perceive the colored box suggesting a possible lesion.
As a preferred embodiment, a nodule key frame extraction module and a key frame nodule property determination module are further provided in the comprehensive judgment module; wherein
The nodule detection module processes each frame of image in real time, detects whether the frame contains a nodule, and, if so, records the coordinates of the nodule's circumscribed rectangle; during this process, the processing results of all video frames are recorded;
key frames are then extracted according to the processing results: first, the number of distinct nodules detected in the video is determined from the distances between nodule center points, which are converted from the coordinates of consecutive circumscribed rectangles; then, among all detected circumscribed rectangles of each nodule, the frame with the largest rectangle diagonal is selected as that nodule's key frame;
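The key frame extraction logic described above can be sketched as follows. The center-distance threshold used to decide when a new nodule starts is an assumption (the patent does not fix its value), as is the simple sequential grouping:

```python
import math

def rect_center(box):
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2, (y0 + y1) / 2)

def rect_diagonal(box):
    x0, y0, x1, y1 = box
    return math.hypot(x1 - x0, y1 - y0)

def extract_keyframes(frames, dist_threshold=50.0):
    """frames: list of (frame_index, box) for frames containing a nodule,
    in scan order. Consecutive boxes whose centers lie within
    dist_threshold are treated as the same nodule (the threshold value
    is an illustrative assumption). For each nodule, the frame with the
    largest bounding-rectangle diagonal is kept as its key frame."""
    nodules = []            # list of per-nodule lists of (frame_index, box)
    prev_center = None
    for idx, box in frames:
        c = rect_center(box)
        if prev_center is None or math.dist(c, prev_center) > dist_threshold:
            nodules.append([])          # center jumped: a new nodule starts
        nodules[-1].append((idx, box))
        prev_center = c
    # key frame = frame with the largest box diagonal within each nodule
    return [max(group, key=lambda f: rect_diagonal(f[1])) for group in nodules]
```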
the key frame nodule property judging module is used for judging the nodule property of the extracted key frame image and extracting the image feature vector of the key frame image; the key frame nodule property determination module is configured to have the following functions:
inputting the processed key frame image into a key frame nodule property judging module to judge the nodule property;
image feature vectors for the lesion are acquired in the key frame image, wherein,
a single lesion can yield multiple key frame images, and the lesion's image feature vector can be extracted from each single image; the mean and standard deviation of the feature vectors of the multiple key frame images are computed and concatenated to obtain the feature vector for the lesion, i.e., the final image feature vector for the lesion.
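Combining the per-keyframe vectors by concatenating their element-wise mean and standard deviation can be sketched directly; only the function name is invented here:

```python
import numpy as np

def lesion_feature(keyframe_vectors):
    """keyframe_vectors: an (n_keyframes, d) array of per-image feature
    vectors for one lesion. Returns the 2d-dimensional lesion vector
    formed by concatenating the element-wise mean and standard
    deviation across key frames."""
    v = np.asarray(keyframe_vectors, dtype=np.float64)
    return np.concatenate([v.mean(axis=0), v.std(axis=0)])
```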
In this embodiment, in the key frame nodule property determining step, a feature extraction and transformation portion, an image feature vector portion, and a solver portion are further provided;
the method comprises the steps of inputting the preprocessed ultrasonic image into a feature extraction and transformation part, wherein the feature extraction and transformation part is a convolutional neural network model, after the feature extraction and transformation part is changed, the image feature vector part generates a one-dimensional vector from ultrasonic image matrix data, and a classification solver or a regression solver is utilized, takes the image feature vector as input, and combines the feature vector of structural data of a patient to obtain a final output result.
For further details of the invention, the following is expanded:
it is difficult for a single model performing the image analysis to guarantee both speed and accuracy: while the lesion detection process pursues detection speed, it cannot accurately classify the lesions. Thus, a separate process must be constructed for precise classification of lesions.
To solve the second problem, a separate comprehensive diagnosis device is required for running a specialized image classification model. This computing device automatically obtains from the lesion detection device screenshots of all suspected lesions generated by the current scan (cut from the original images using the rectangular boxes obtained by lesion detection), arranged in order of scan generation time.
The image classification model first distinguishes whether each lesion is a real lesion, confirming the results of the lesion detection step a second time, and then accurately classifies the confirmed lesions. For accurate classification, the scan information given by the ultrasound workstation must be read, for example: scan depth, probe position, and the like.
The image classification model also obtains the patient's background information (age, sex, medical history, and the like) from the ultrasound workstation and, combining all images confirmed as lesions in this scan, performs a comprehensive judgment to finally obtain a comprehensive diagnosis result for the scan.
In this embodiment, the comprehensive judgment module is provided with a nodule detection module and a nodule key frame extraction module; the nodule detection module processes each frame of image in real time, detects whether the frame contains a nodule, and, if so, records the coordinates of the nodule's circumscribed rectangle; during this process, the processing results of all video frames are recorded;
the nodule key frame extraction module determines the total number of nodules detected in the video according to the nodule center point distances converted from the coordinates of consecutive circumscribed rectangles; among all detected circumscribed rectangles of each nodule, the frame with the largest rectangle diagonal is selected as the key frame.
On the other hand, the comprehensive judgment module is also provided with a nodule classification module, an image feature vector acquisition module and a diagnosis conclusion output module; wherein
the nodule classification module inputs the processed key frame images into the trained classification network and judges the nodule properties according to the specific purpose of the classification network;
the image feature vector acquisition module is configured such that the last layer of the nodule classification module is the output layer and the penultimate layer is a fully-connected layer used to obtain a feature vector for the lesion image; while the network performs classification, the feature vector for the lesion is obtained from this penultimate layer. Multiple key frames yield multiple feature vectors; the mean and variance of all the feature vectors are taken to represent the final image feature vector;
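Reading out the penultimate fully-connected layer alongside the classification output can be illustrated with a toy network. The weights below are random placeholders, not a trained nodule classifier; only the "classify and keep the penultimate activations" pattern reflects the description:

```python
import numpy as np

class TinyClassifier:
    """Toy stand-in for the nodule classification network: the final
    layer produces class scores, and the penultimate fully-connected
    layer's activations serve as the image feature vector."""
    def __init__(self, in_dim, feat_dim, n_classes, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((in_dim, feat_dim))
        self.w2 = rng.standard_normal((feat_dim, n_classes))

    def forward(self, x):
        feat = np.maximum(x @ self.w1, 0.0)   # penultimate FC layer (ReLU)
        scores = feat @ self.w2               # output layer: class scores
        return scores, feat                   # classify AND keep features
```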
the diagnosis conclusion output module digitizes the identity information of the patient and adds the digitized identity information to the back of the feature vector obtained in the third step, and the obtained comprehensive feature vector is processed to finally obtain the comprehensive diagnosis conclusion.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in connection with the method embodiments and will not be elaborated here.
It is to be understood that the same or similar parts in the above embodiments may be referred to each other, and that in some embodiments, the same or similar parts in other embodiments may be referred to.
Although embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives, and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (9)

1. An apparatus for lesion detection using computer vision techniques using ultrasound images, the apparatus comprising:
the ultrasonic video signal acquisition module is used for directly acquiring video signals from medical ultrasonic equipment to obtain ultrasonic images;
the focus detection equipment is used for detecting the focus of each frame of ultrasonic image in real time, taking various focuses in the ultrasonic image as foreground and normal tissues as background; marking the detected focus area, wherein the marked area is a suspected focus area, and simultaneously, returning and displaying the processed video frame containing the mark in real time;
the comprehensive diagnosis equipment automatically acquires screenshot of all suspected focuses generated by current scanning from an original ultrasonic image in the focus detection equipment by using the marks obtained by the focus detection equipment, and the screenshot is orderly arranged according to the scanning generation time;
the comprehensive diagnosis equipment is internally provided with an image screening and classifying module, wherein the image screening and classifying module is used for secondarily identifying suspected focuses detected in the focus detection module, distinguishing whether the focuses are real focuses or not and secondarily classifying the identified focuses;
the comprehensive judgment module is used for reading the scanning information of the ultrasonic equipment and the background information of the patient, and carrying out comprehensive judgment processing by combining all the images confirmed as focuses acquired by the scanning;
the comprehensive judgment module is provided with a nodule key frame extraction module and a key frame nodule property judgment module; wherein the method comprises the steps of
The method comprises the steps that a nodule detection module processes each frame of image in real time, detects whether a nodule is contained in the frame of image, and records the coordinates of the circumscribed rectangle of the nodule if the nodule is contained in the frame of image; in the process, recording the processing results of all video frames;
extracting key frames according to the processing results: firstly, the nodule key frame extraction module determines the number of nodules in the video according to the nodule center point distances converted from the coordinates of consecutive circumscribed rectangles; then, among all detected circumscribed rectangles of each nodule, the frame with the largest rectangle diagonal is selected as the key frame;
the key frame nodule property judging module is used for judging the nodule property of the extracted key frame image and extracting the image feature vector of the key frame image; the key frame nodule property determination module is configured to have the following functions:
inputting the processed key frame image into a key frame nodule property judging module to judge the nodule property;
image feature vectors for the lesion are acquired in the key frame image, wherein,
a plurality of key frame images can be extracted for a single lesion, and the image feature vector of the lesion can be extracted from each single image; the mean value and standard deviation of the feature vectors of the plurality of key frame images are computed and concatenated to obtain the feature vector for the lesion, namely the final image feature vector for the lesion.
2. The apparatus for lesion detection using computer vision techniques using ultrasound images according to claim 1, further comprising a structured data module for deriving feature vectors of the patient structured data;
for the single focus, the feature vector of the structured data is obtained by using the structured data of the patient, and then is connected with the image feature vector extracted from the single image in series, so that the comprehensive data analysis conclusion of the patient is obtained;
and for the condition of a plurality of focuses, utilizing the structured data of the patient to obtain feature vectors of the structured data, and connecting the feature vectors and standard deviation vectors of all the focuses with the structured feature vectors in series to obtain a comprehensive data analysis conclusion aiming at the patient.
3. The apparatus for lesion detection using computer vision techniques according to claim 2, wherein the key frame nodule property determination step is further provided with a feature extraction and transformation section, an image feature vector section, and a solver section;
the preprocessed ultrasonic image is input into the feature extraction and transformation part, wherein the feature extraction and transformation part is a convolutional neural network model; after the data passes through the feature extraction and transformation part, the image feature vector part generates a one-dimensional vector from the ultrasonic image matrix data; the solver part takes the image feature vector as input and combines it with the feature vector of the patient's structured data to obtain the final output result.
4. A device for lesion detection using computer vision techniques using ultrasound images according to any of claims 1 to 3, characterized in that the device is implemented as follows:
directly acquiring video signals from medical ultrasonic equipment to obtain an ultrasonic image;
detecting focus of each frame of ultrasonic image in real time, separating foreground and background of the ultrasonic image, taking various focuses in the ultrasonic image as foreground and normal tissues as background; marking the detected focus area, wherein the marked area is a suspected focus area;
processing the real-time video signal and simultaneously returning the processed video frame containing the mark to display in real time;
obtaining screenshot of all suspected lesions generated by current scanning from an original ultrasonic image by using marks obtained by lesion detection, and orderly arranging according to scanning generation time;
judging whether the focus in the screenshot of the suspected focus is a real focus or not, and classifying the confirmed focus;
and reading the scanning information of the ultrasonic equipment and the background information of the patient, and carrying out comprehensive judgment processing by combining all the images confirmed as the focus acquired by the scanning.
5. The apparatus for lesion detection using computer vision techniques using ultrasound images according to claim 4, further comprising:
the speed of processing images for focus detection of each frame of ultrasonic image in real time is faster than the refreshing speed of real-time video data; in the process of focus detection for each frame of ultrasonic image in real time, the maximum detected foreground quantity is set to be less than or equal to 5 on the same ultrasonic image.
6. The apparatus for lesion detection using computer vision techniques using ultrasound images according to claim 4, wherein the step of comprehensively judging includes nodule key frame extraction, comprising the steps of:
1) Inputting the ultrasonic video signal into the nodule detection module in the form of continuous frames;
2) The method comprises the steps that a nodule detection module processes each frame of image in real time, detects whether a nodule is contained in the frame of image, and records the coordinates of the circumscribed rectangle of the nodule if the nodule is contained in the frame of image; in the process, recording the processing results of all video frames;
3) Extracting key frames according to the processing result:
firstly, the number of nodules detected in the video is determined according to the nodule center point distances converted from the coordinates of consecutive circumscribed rectangles; then, among all detected circumscribed rectangles of each nodule, the frame with the largest rectangle diagonal is selected as the key frame.
7. The apparatus for lesion detection using computer vision techniques using ultrasound images according to claim 6, wherein: in the comprehensive judging step, the extracted key frames need to be subjected to nodule property judgment, and the key frame nodule property judgment step is as follows:
resetting the part of the key frame except the rectangle circumscribed by the nodule to black;
judging the properties of the node by the processed key frame image;
image feature vectors for the lesion are acquired in the key frame image, wherein,
a plurality of key frame images can be extracted for a single lesion, and the image feature vector of the lesion can be extracted from each single image; the mean value and standard deviation of the feature vectors of the plurality of key frame images are computed and concatenated to obtain the feature vector for the lesion, namely the final image feature vector for the lesion.
8. The apparatus for lesion detection using computer vision techniques using ultrasound images according to claim 7, wherein: for the single focus, the feature vector of the structured data is obtained by using the structured data of the patient, and then is connected with the image feature vector extracted from the single image in series, so that the comprehensive data analysis conclusion of the patient is obtained;
and for the condition of a plurality of focuses, utilizing the structured data of the patient to obtain feature vectors of the structured data, and connecting the feature vectors and standard deviation vectors of all the focuses with the structured feature vectors in series to obtain a comprehensive data analysis conclusion aiming at the patient.
9. The apparatus for lesion detection using computer vision techniques using ultrasound images according to claim 8, wherein: the step of obtaining the image feature vector for the focus in the key frame image further comprises the following steps:
preprocessing an input ultrasonic image, wherein the input ultrasonic image is a two-dimensional or three-dimensional matrix;
the key frame nodule property judging step is provided with a feature extraction and transformation part, an image feature vector part and a solver part;
the feature extraction and transformation part is a convolutional neural network model; after the data passes through the feature extraction and transformation part, the image feature vector part generates a one-dimensional vector from the ultrasonic image matrix data, and the vector is the feature vector of the original image;
and the solver part takes the image feature vector as input and combines the feature vector of the structured data of the patient to obtain the final output result.
CN202010029034.9A 2020-01-12 2020-01-12 Device for detecting focus by using ultrasonic image and computer vision Active CN111227864B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010029034.9A CN111227864B (en) 2020-01-12 2020-01-12 Device for detecting focus by using ultrasonic image and computer vision


Publications (2)

Publication Number Publication Date
CN111227864A CN111227864A (en) 2020-06-05
CN111227864B true CN111227864B (en) 2023-06-09

Family

ID=70861705

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010029034.9A Active CN111227864B (en) 2020-01-12 2020-01-12 Device for detecting focus by using ultrasonic image and computer vision

Country Status (1)

Country Link
CN (1) CN111227864B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781439B (en) * 2020-11-25 2022-07-29 北京医准智能科技有限公司 Ultrasonic video focus segmentation method and device
CN112862752A (en) * 2020-12-31 2021-05-28 北京小白世纪网络科技有限公司 Image processing display method, system electronic equipment and storage medium
CN112641466A (en) * 2020-12-31 2021-04-13 北京小白世纪网络科技有限公司 Ultrasonic artificial intelligence auxiliary diagnosis method and device
CN112863647A (en) * 2020-12-31 2021-05-28 北京小白世纪网络科技有限公司 Video stream processing and displaying method, system and storage medium
CN112766066A (en) * 2020-12-31 2021-05-07 北京小白世纪网络科技有限公司 Method and system for processing and displaying dynamic video stream and static image
CN113344855A (en) * 2021-05-10 2021-09-03 深圳瀚维智能医疗科技有限公司 Method, device, equipment and medium for reducing false positive rate of breast ultrasonic lesion detection
CN113344854A (en) * 2021-05-10 2021-09-03 深圳瀚维智能医疗科技有限公司 Breast ultrasound video-based focus detection method, device, equipment and medium
CN113379693B (en) * 2021-06-01 2024-02-06 东软教育科技集团有限公司 Capsule endoscope key focus image detection method based on video abstraction technology
CN113616945B (en) * 2021-08-13 2024-03-08 湖北美睦恩医疗设备有限公司 Detection method based on focused ultrasonic image recognition and beauty and body-building device
CN114664410B (en) * 2022-03-11 2022-11-08 北京医准智能科技有限公司 Video-based focus classification method and device, electronic equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104470443A (en) * 2012-07-18 2015-03-25 皇家飞利浦有限公司 Method and system for processing ultrasonic imaging data
CN107274402A (en) * 2017-06-27 2017-10-20 北京深睿博联科技有限责任公司 A kind of Lung neoplasm automatic testing method and system based on chest CT image
CN107680678A (en) * 2017-10-18 2018-02-09 北京航空航天大学 Based on multiple dimensioned convolutional neural networks Thyroid ultrasound image tubercle auto-check system
CN109727243A (en) * 2018-12-29 2019-05-07 无锡祥生医疗科技股份有限公司 Breast ultrasound image recognition analysis method and system

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2003070102A2 (en) * 2002-02-15 2003-08-28 The Regents Of The University Of Michigan Lung nodule detection and classification
CN2868229Y (en) * 2004-12-09 2007-02-14 林礼务 Ultrasonic anatomical positioning-marking device
EP2147395A1 (en) * 2007-05-17 2010-01-27 Yeda Research And Development Company Limited Method and apparatus for computer-aided diagnosis of cancer and product
US7876943B2 (en) * 2007-10-03 2011-01-25 Siemens Medical Solutions Usa, Inc. System and method for lesion detection using locally adjustable priors
KR101623431B1 (en) * 2015-08-06 2016-05-23 주식회사 루닛 Pathological diagnosis classifying apparatus for medical image and pathological diagnosis system using the same
CN105447872A (en) * 2015-12-03 2016-03-30 中山大学 Method for automatically identifying liver tumor type in ultrasonic image
CN106780448B (en) * 2016-12-05 2018-07-17 清华大学 A kind of pernicious categorizing system of ultrasonic Benign Thyroid Nodules based on transfer learning and Fusion Features
CN108665456B (en) * 2018-05-15 2022-01-28 广州尚医网信息技术有限公司 Method and system for real-time marking of breast ultrasound lesion region based on artificial intelligence
CN109685102A (en) * 2018-11-13 2019-04-26 平安科技(深圳)有限公司 Breast lesion image classification method, device, computer equipment and storage medium
CN110223287A (en) * 2019-06-13 2019-09-10 首都医科大学北京友谊医院 A method of early diagnosing mammary cancer rate can be improved
CN110349141A (en) * 2019-07-04 2019-10-18 复旦大学附属肿瘤医院 A kind of breast lesion localization method and system
CN110648344B (en) * 2019-09-12 2023-01-17 电子科技大学 Diabetes retinopathy classification device based on local focus characteristics


Also Published As

Publication number Publication date
CN111227864A (en) 2020-06-05

Similar Documents

Publication Publication Date Title
CN111227864B (en) Device for detecting focus by using ultrasonic image and computer vision
CN111214255B (en) Medical ultrasonic image computer-aided method
US11101033B2 (en) Medical image aided diagnosis method and system combining image recognition and report editing
US11100645B2 (en) Computer-aided diagnosis apparatus and computer-aided diagnosis method
US11633169B2 (en) Apparatus for AI-based automatic ultrasound diagnosis of liver steatosis and remote medical diagnosis method using the same
EP1994878B1 (en) Medical image processing device and medical image processing method
CN113781439B (en) Ultrasonic video focus segmentation method and device
US9959622B2 (en) Method and apparatus for supporting diagnosis of region of interest by providing comparison image
TWI473598B (en) Breast ultrasound image scanning and diagnostic assistance system
CN102056547A (en) Medical image processing device and method for processing medical image
CN111242921B (en) Automatic updating method and system for medical ultrasonic image auxiliary diagnosis system
KR102531400B1 (en) Artificial intelligence-based colonoscopy diagnosis supporting system and method
CN111950388B (en) Vulnerable plaque tracking and identifying system and method
US8805043B1 (en) System and method for creating and using intelligent databases for assisting in intima-media thickness (IMT)
CN114782307A (en) Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning
KR20210014267A (en) Ultrasound diagnosis apparatus for liver steatosis using the key points of ultrasound image and remote medical-diagnosis method using the same
CN111862090A (en) Method and system for esophageal cancer preoperative management based on artificial intelligence
KR20230097646A (en) Artificial intelligence-based gastroscopy diagnosis supporting system and method to improve gastro polyp and cancer detection rate
EP4129197A1 (en) Computer program, information processing method, information processing device, and method for generating model
JP2000300557A (en) Ultrasonic diagnostic device
JP3255668B2 (en) Image analysis device
CN112002407A (en) Breast cancer diagnosis device and method based on ultrasonic video
US20190333399A1 (en) System and method for virtual reality training using ultrasound image data
KR102536369B1 (en) Artificial intelligence-based gastroscopy diagnosis supporting system and method
CN114004854A (en) System and method for processing and displaying slice image under microscope in real time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant