CN111227864A - Method and apparatus for lesion detection in ultrasound images using computer vision - Google Patents
- Publication number
- CN111227864A (application CN202010029034.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- focus
- feature vector
- ultrasonic
- nodule
- Prior art date
- Legal status: Granted (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5207—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves involving processing of raw data to produce diagnostic data, e.g. for generating an image
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B8/00—Diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/52—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves
- A61B8/5292—Devices using data or image processing specially adapted for diagnosis using ultrasonic, sonic or infrasonic waves using additional data, e.g. patient information, image labeling, acquisition parameters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20092—Interactive image processing based on input by user
- G06T2207/20104—Interactive definition of region of interest [ROI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
- G06T2207/30096—Tumor; Lesion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/03—Recognition of patterns in medical or anatomical images
- G06V2201/032—Recognition of patterns in medical or anatomical images of protuberances, polyps nodules, etc.
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A90/00—Technologies having an indirect contribution to adaptation to climate change
- Y02A90/10—Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Pathology (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Animal Behavior & Ethology (AREA)
- Public Health (AREA)
- Veterinary Medicine (AREA)
- Quality & Reliability (AREA)
- Multimedia (AREA)
- Ultrasonic Diagnosis Equipment (AREA)
Abstract
The application relates to a method and an apparatus for detecting lesions in ultrasound images using computer vision. The method comprises: acquiring a video signal directly from medical ultrasound equipment to obtain ultrasound images; detecting lesions in each frame of the ultrasound image in real time, separating foreground from background, and marking each detected lesion area; using the marks obtained by lesion detection to capture screenshots of all suspected lesions produced by the current scan from the original ultrasound images, arranged in order of scan time; judging whether each suspected lesion in the screenshots is a real lesion and classifying the confirmed lesions; and reading the scan information of the ultrasound equipment and the patient's background information, then performing a comprehensive judgment over all images from the scan confirmed to contain lesions. The invention adapts to current ultrasound diagnosis scenarios, effectively assists doctors in identifying lesions in video frames, and improves diagnostic efficiency and accuracy.
Description
Technical Field
The present invention relates to the field of ultrasound image analysis, and in particular to a method and an apparatus for lesion detection in ultrasound images using computer vision techniques.
Background
Ultrasonic examination (US examination) is an examination that uses the reflection of ultrasonic waves by the human body: the body is insonified with weak ultrasonic waves and the waves reflected by tissues are imaged. Ultrasound has become an important non-invasive method for displaying the organ structure and motion functions of the human body. Medical ultrasound equipment is inexpensive to manufacture and is used in hospitals and physical examination centers at all levels, and its low cost has made ultrasound an important means for early screening and diagnosis of many diseases.
Across the medical imaging field there are many computer-aided diagnosis technologies. For example, CN109222859A describes an intelligent endoscope image system with AI-assisted diagnosis, which transmits images to a computer for intelligent analysis and returns the result to the endoscope system for the doctor's reference. At present, however, there are few aided-diagnosis techniques designed specifically for ultrasound examination scenarios. Patent CN206365899U, for instance, discloses an ultrasound-assisted diagnosis system, but it does not use computer technology to assist diagnosis; it describes a set of devices to reduce the doctor's labor intensity during diagnosis.
In clinical practice, one cause of missed diagnoses by sonographers is failing to notice a lesion image that flashes past. When only a few frames of the scanned video contain the lesion, the doctor may fail to detect it. Moreover, ultrasound imaging of tissue is not very clear, and whether the probe captures the lesion at all depends on the physician's manipulation: the doctor must constantly adjust the scanning angle while closely watching the screen, so the labor intensity is high and the examination result depends heavily on the doctor's experience.
Existing methods and equipment are not adapted to the ultrasound diagnosis scenario and do not make full use of available computer technology, so the existing approach can hardly reduce doctors' workload fundamentally or improve diagnostic efficiency and accuracy.
Disclosure of Invention
To overcome, at least to some extent, the problems in the related art, a method and an apparatus for detecting lesions in ultrasound images using computer vision are provided.
According to a first aspect of the embodiments of the present application, a method for lesion detection in ultrasound images using computer vision is provided, comprising the following steps:
acquiring a video signal directly from medical ultrasound equipment to obtain ultrasound images;
detecting lesions in each frame of the ultrasound image in real time, separating foreground from background, treating lesions of all kinds as foreground and normal tissue as background; and marking each detected lesion area, the marked area being a suspected lesion area;
while the real-time video signal is processed, returning and displaying the processed, marked video frames in real time;
using the marks obtained by lesion detection to capture screenshots of all suspected lesions produced by the current scan from the original ultrasound images, arranged in order of scan time;
judging whether each suspected lesion in the screenshots is a real lesion, and classifying the confirmed lesions;
and reading the scan information of the ultrasound equipment and the patient's background information, then performing a comprehensive judgment over all images from the scan confirmed to contain lesions.
Furthermore, the per-frame image processing for lesion detection is faster than the refresh rate of the real-time video; and during real-time detection, the maximum number of foreground regions detected in a single ultrasound image is capped at 5.
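The cap on detected foreground regions can be sketched as a top-k filter over the detector's output. This is an illustrative sketch, not from the patent text: the function name, box format, and use of confidence scores are assumptions.

```python
import numpy as np

MAX_FOREGROUND = 5  # cap stated in the patent: at most 5 regions per image

def cap_detections(boxes, scores, max_n=MAX_FOREGROUND):
    """Return at most `max_n` boxes, ordered by descending confidence.

    boxes  : (N, 4) array of [x1, y1, x2, y2] suspected-lesion rectangles
    scores : (N,) confidence scores from the lesion detector (assumed)
    """
    boxes = np.asarray(boxes, dtype=float).reshape(-1, 4)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)[::-1][:max_n]  # keep top-k by confidence
    return boxes[order], scores[order]
```

Because the number of plausible lesions in one ultrasound image is small, discarding all but the five most confident boxes loses little and keeps per-frame processing fast.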
Further, the comprehensive judgment step includes extraction of nodule key frames, as follows:
1) the ultrasound video signal is input to a nodule detection module as consecutive frames;
2) the nodule detection module processes each frame in real time, detects whether the frame contains nodules and, if so, records the coordinates of each nodule's bounding rectangle; the processing results of all video frames are recorded;
3) key frames are then extracted from the recorded results:
first, the distances between the center points of consecutive bounding rectangles are computed to determine how many distinct nodules were detected in the video; then, among all bounding rectangles detected for each nodule, the frame in which the rectangle's diagonal is largest is selected as that nodule's key frame.
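The key-frame extraction steps above can be sketched as follows. This is a minimal illustration under stated assumptions: the center-distance threshold is invented, and each nodule's frames are assumed to occur consecutively in the video; the patent does not specify either.

```python
import math

def extract_key_frames(detections, centre_thresh=50.0):
    """detections: list of (frame_idx, (x1, y1, x2, y2)) in time order.
    Returns one (frame_idx, box) key frame per tracked nodule."""
    tracks = []  # each track is the list of (frame_idx, box) for one nodule
    for frame_idx, box in detections:
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        placed = False
        if tracks:
            _, last_box = tracks[-1][-1]
            lx, ly = (last_box[0] + last_box[2]) / 2, (last_box[1] + last_box[3]) / 2
            # same nodule if the center point moved less than the threshold
            if math.hypot(cx - lx, cy - ly) < centre_thresh:
                tracks[-1].append((frame_idx, box))
                placed = True
        if not placed:
            tracks.append([(frame_idx, box)])  # a new nodule appears

    def diagonal(item):
        _, (x1, y1, x2, y2) = item
        return math.hypot(x2 - x1, y2 - y1)

    # per nodule, keep the frame where the bounding-rectangle diagonal is largest
    return [max(track, key=diagonal) for track in tracks]
```

The number of tracks gives the count of distinct nodules detected in the video, and each returned frame is that nodule's key frame.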
Further, in the comprehensive judgment step, nodule property determination is performed on the extracted key frames, as follows:
the part of the key frame outside the nodule's bounding rectangle is reset to black;
the nodule's properties are judged using the processed key frame image;
an image feature vector for the lesion is obtained from the key frame image, wherein
several key frame images may be extracted for a single lesion, and an image feature vector is extracted from each image; the mean and standard deviation of the feature vectors over the key frame images are taken and concatenated in parallel to obtain the final image feature vector for the lesion.
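The mean-and-standard-deviation pooling described above can be sketched in a few lines (the use of numpy is illustrative; the patent names no library):

```python
import numpy as np

def lesion_vector(keyframe_features):
    """keyframe_features: (K, D) array, one D-dim vector per key frame.
    Returns a (2*D,) lesion vector: element-wise [mean || std]."""
    feats = np.asarray(keyframe_features, dtype=float)
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])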
Further, in the case of a single lesion, a feature vector is obtained from the patient's structured data and concatenated in series with the image feature vector extracted from the single image, yielding a comprehensive data analysis conclusion for the patient;
and in the case of multiple lesions, a feature vector is obtained from the patient's structured data, and the mean and standard-deviation vectors over all lesions are concatenated in series with the structured feature vector to obtain a comprehensive data analysis conclusion for the patient.
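The two concatenation cases above can be sketched as follows. Note this only builds the combined input vector; the model that turns it into the "comprehensive data analysis conclusion" is not specified here, and the function names are illustrative.

```python
import numpy as np

def patient_input_single(structured_vec, image_vec):
    # single lesion: structured-data vector in series with the image vector
    return np.concatenate([np.asarray(structured_vec, dtype=float),
                           np.asarray(image_vec, dtype=float)])

def patient_input_multi(structured_vec, lesion_vecs):
    # multiple lesions: element-wise mean and std over the per-lesion
    # vectors, then in series with the structured-data vector
    feats = np.asarray(lesion_vecs, dtype=float)
    return np.concatenate([np.asarray(structured_vec, dtype=float),
                           feats.mean(axis=0), feats.std(axis=0)])
```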
Further, the step of obtaining an image feature vector for the lesion from the key frame image includes:
preprocessing the input ultrasound image, which is a two- or three-dimensional matrix;
the key frame nodule property determination step comprises a feature extraction and transformation part, an image feature vector part and a solver part;
the feature extraction and transformation part is a convolutional neural network model; after the feature extraction and transformation, the image feature vector part converts the ultrasound image matrix data into a one-dimensional vector, which is the feature vector of the original image;
and the solver part takes the image feature vector as input and combines it with the feature vector of the patient's structured data to obtain the final output result.
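The path from image matrix to one-dimensional feature vector can be illustrated with a toy numpy sketch: one convolutional layer with random placeholder kernels, then global average pooling. A real system would use a trained convolutional neural network; only the data flow is shown here.

```python
import numpy as np

def conv_relu(img, kernels):
    """Valid 2-D convolution of `img` with each 3x3 kernel, then ReLU."""
    h, w = img.shape
    maps = np.empty((len(kernels), h - 2, w - 2))
    for k, ker in enumerate(kernels):
        for i in range(h - 2):
            for j in range(w - 2):
                maps[k, i, j] = np.sum(img[i:i+3, j:j+3] * ker)
    return np.maximum(maps, 0.0)  # ReLU non-linearity

def image_feature_vector(img, kernels):
    """Feature maps -> one-dimensional vector via global average pooling."""
    return conv_relu(img, kernels).mean(axis=(1, 2))
```

One scalar per feature map gives a vector whose length is independent of the image size, which is what lets it later be concatenated with the structured-data vector.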
According to a second aspect of the embodiments of the present application, an apparatus is provided for lesion detection in ultrasound images using computer vision, configured to implement the above method, comprising: an ultrasound video signal acquisition module for directly acquiring a video signal from medical ultrasound equipment to obtain ultrasound images;
lesion detection equipment for detecting lesions in each frame of the ultrasound image in real time, treating lesions of all kinds as foreground and normal tissue as background, marking each detected lesion area as a suspected lesion area, and simultaneously returning and displaying the processed, marked video frames in real time;
comprehensive diagnosis equipment which, using the marks produced by the lesion detection equipment, automatically captures screenshots of all suspected lesions produced by the current scan from the original ultrasound images, arranged in order of scan time;
the comprehensive diagnosis equipment runs an image screening and classification module that secondarily confirms the suspected lesions found by the lesion detection module, distinguishing real lesions from false positives and classifying the confirmed lesions;
and a comprehensive judgment module for reading the scan information of the ultrasound equipment and the patient's background information and performing a comprehensive judgment over all images from the scan confirmed to contain lesions.
Furthermore, a nodule key frame extraction module and a key frame nodule property determination module are provided within the comprehensive judgment module, wherein
the nodule detection module processes each frame in real time, detects whether the frame contains nodules and, if so, records the coordinates of each nodule's bounding rectangle, recording the processing results of all video frames;
key frames are then extracted from the recorded results: first, the distances between the center points of consecutive bounding rectangles are computed to determine how many distinct nodules were detected in the video; then, among all bounding rectangles detected for each nodule, the frame with the largest rectangle diagonal is selected as the key frame;
the key frame nodule property determination module judges the properties of nodules in the extracted key frame images and extracts their image feature vectors; it is configured to:
input the processed key frame image into the key frame nodule property determination module to judge the nodule's properties;
and obtain an image feature vector for the lesion from the key frame image, wherein
several key frame images may be extracted for a single lesion, and an image feature vector is extracted from each image; the mean and standard deviation of the feature vectors over the key frame images are taken and concatenated in parallel to obtain the final image feature vector for the lesion.
Further, the apparatus also comprises a structured data module for obtaining a feature vector from the patient's structured data;
in the case of a single lesion, the structured-data feature vector is concatenated in series with the image feature vector extracted from the single image, yielding a comprehensive data analysis conclusion for the patient;
and in the case of multiple lesions, the structured-data feature vector is obtained, and the mean and standard-deviation vectors over all lesions are concatenated in series with the structured feature vector to obtain a comprehensive data analysis conclusion for the patient.
Further, the key frame nodule property determination step further comprises a feature extraction and transformation part, an image feature vector part and a solver part;
the preprocessed ultrasound image is input to the feature extraction and transformation part, which is a convolutional neural network model; after the feature extraction and transformation, the image feature vector part converts the ultrasound image matrix data into a one-dimensional vector, and the solver part takes the image feature vector as input, combined with the feature vector of the patient's structured data, to obtain the final output result.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
1) Existing methods and equipment are not adapted to the ultrasound diagnosis scenario, and in clinical practice a common cause of missed or mistaken diagnosis is failing to notice a lesion that appears in only a few frames of the scanned video. The present method processes the video signal in real time and returns the processed, marked frames for display: each frame is processed and every detected lesion area is framed with a rectangle in a conspicuous color to indicate a suspected lesion. While watching the returned video, the physician can clearly see the colored boxes suggesting possible lesions, so the method effectively assists the doctor in identifying lesions in the video frames.
2) In the invention, a nodule detection module processes each frame in real time; key frames are extracted, nodule properties are judged on them, and their image feature vectors are combined with the patient's structured data vectors to produce a comprehensive data analysis conclusion for the patient. Compared with the existing practice of relying on the doctor's experience, the detection method and apparatus of the invention effectively improve diagnostic efficiency and accuracy.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart of a method for detecting a lesion in an ultrasound image using computer vision technology according to the present invention;
FIG. 2 is a flow chart of a nodule detection method of the present invention;
FIG. 3 is a flow chart of a method for extracting key frames according to the present invention;
FIG. 4 is a flowchart of the keyframe nodule property determination of the present invention;
FIG. 5 is a schematic diagram of the present invention using structured data of a patient to obtain feature vectors;
FIG. 6 is a schematic diagram of obtaining image feature vectors for a lesion from a keyframe image according to the present invention;
FIG. 7 is a schematic view of a lesion detection apparatus according to the present invention;
fig. 8 is a schematic structural diagram of a lesion detection system according to the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating a lesion detection method according to an exemplary embodiment. As shown in fig. 1, a method for lesion detection in ultrasound images using computer vision includes the following steps.
S1, acquiring a video signal directly from the medical ultrasound equipment to obtain ultrasound images;
S2, detecting lesions in each frame of the ultrasound image in real time, separating foreground from background, treating lesions of all kinds as foreground and normal tissue as background; marking each detected lesion area by framing it with a rectangle in a conspicuous color, the marked area being a suspected lesion area;
S3, returning and displaying the processed video frames containing the marks (rectangular frames) in real time while the real-time video signal is processed;
S4, using the rectangular frames obtained by lesion detection to capture screenshots of all suspected lesions produced by the current scan from the original ultrasound images, arranged in order of scan time;
S5, judging whether each suspected lesion in the screenshots is a real lesion, and accurately classifying the confirmed lesions;
and S6, reading the scan information of the ultrasound equipment and the patient's background information, then performing a comprehensive judgment over all images from the scan confirmed to contain lesions.
In the detection method of this embodiment it should be added that the per-frame image processing for lesion detection is faster than the refresh rate of the real-time video, and that during real-time detection the maximum number of foreground regions detected in a single ultrasound image is capped at 5. Owing to the particularities of the ultrasound examination scenario, the number of possible lesions in a single ultrasound image is very limited; to increase speed, the maximum number of detectable foreground regions is therefore set to 5, i.e., in one ultrasound image the lesion detection model need only present at most 5 suspected lesion areas.
In the detection method of this embodiment, lesion images must first be confirmed, i.e., each image frame is tested for whether it contains a nodule; note that this step only detects the presence of nodules. Key frames are then extracted from the nodule-containing images, after which property determination and feature-vector extraction are performed on the key frames, as detailed below.
As shown in fig. 2, in this embodiment the ultrasound video signal is input to the nodule detection module as consecutive frames; the nodule detection module processes each frame in real time, detects whether the frame contains nodules and, if so, records the coordinates of each nodule's bounding rectangle; the processing results of all video frames are recorded.
As shown in fig. 3, in this embodiment key frames are extracted from all video frames according to the recorded results, as follows: first, the distances between the center points of consecutive bounding rectangles are computed to determine how many distinct nodules were detected in the video; then, among all bounding rectangles detected for each nodule, the frame in which the rectangle's diagonal is largest is selected as that nodule's key frame.
A supplementary explanation: the total number of nodules detected in the video is determined first, and each nodule is then processed further. A nodule is an irregular three-dimensional structure, i.e., sections of one nodule may be distributed over several video frames; to provide better assistance to the doctor, the maximum section of the nodule is selected, where "maximum" may be defined by the nodule's largest diameter, longest perimeter, and so on.
In this embodiment, the frame with the largest diagonal of the nodule's bounding rectangle is selected as the key frame, but the method is not limited to this choice.
As a preferred implementation, in the comprehensive judgment step of this embodiment, nodule property determination is performed on the extracted key frames; as shown in fig. 4, the steps are as follows:
the part of the key frame outside the nodule's bounding rectangle is reset to black;
the nodule's properties are judged using the processed key frame image;
an image feature vector for the lesion is obtained from the key frame image, wherein,
as shown in fig. 5, in the case of a single lesion, a feature vector is obtained from the patient's structured data and concatenated in series with the image feature vector extracted from the single image, yielding a comprehensive data analysis conclusion for the patient;
in the case of multiple lesions, each lesion is analyzed separately: lesion 1 is analyzed to obtain its image analysis result, and so on through lesion N. This embodiment also comprises a structured data module for obtaining a feature vector from the patient's structured data, which includes the patient's height, weight, sex, medical history, test results and so on. The structured-data feature vector is obtained, and the means and standard deviations of the image feature vectors over all lesions are concatenated in series with the structured feature vector to obtain a comprehensive data analysis conclusion for the patient.
As shown in fig. 6, the step of obtaining an image feature vector for the lesion from the key frame image further includes:
preprocessing the input ultrasound image, which is a two- or three-dimensional matrix: a grayscale image is two-dimensional (length x width); a color image is three-dimensional (length x width x color channels). The value of each pixel is an integer, namely the pixel's gray value or color value, such as an RGB value.
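The matrix shapes just described can be illustrated with a minimal preprocessing sketch; the grayscale conversion and [0, 1] scaling are assumptions, since the patent does not specify the preprocessing.

```python
import numpy as np

def preprocess(frame):
    """Accepts a (H, W) grayscale or (H, W, 3) color matrix of integer
    pixel values and returns a (H, W) float matrix scaled to [0, 1]."""
    arr = np.asarray(frame)
    if arr.ndim == 3:                 # color: length x width x channels
        arr = arr.mean(axis=2)        # collapse RGB channels to grayscale
    elif arr.ndim != 2:
        raise ValueError("expected a 2-D or 3-D image matrix")
    return arr.astype(float) / 255.0  # integer gray/color values -> [0, 1]
```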
After preprocessing, the image data are input to a model comprising a feature extraction and transformation part, an image feature vector part and a solver part;
the feature extraction and transformation part is a convolutional neural network model; after the feature extraction and transformation, the image feature vector part converts the ultrasound image matrix data into a one-dimensional vector, which is the feature vector of the original image;
and the solver part (a classification solver or a regression solver) takes the image feature vector as input to obtain the final output result. Image feature vectors are important intermediate variables produced by the model, and their quality directly determines the quality of the final output.
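A "classification solver" of the kind named above can be sketched as a linear layer plus softmax over the image feature vector concatenated with the patient's structured-data vector. The weights here are placeholders; the patent does not specify the solver's architecture, so this is only one plausible reading.

```python
import numpy as np

def classification_solver(image_vec, structured_vec, weights, bias):
    """Linear layer + softmax over the concatenated input vectors.
    weights: (C, D) for C classes and D = len(image + structured)."""
    x = np.concatenate([np.asarray(image_vec, dtype=float),
                        np.asarray(structured_vec, dtype=float)])
    logits = weights @ x + bias
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()                 # class probabilities
```

A regression solver would replace the softmax with a plain linear output; either way, the quality of the input feature vector dominates the quality of the result, as the passage above notes.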
While processing the real-time video signal, the method returns and displays the processed, marked video frames in real time: each frame is processed and every detected lesion area is framed with a rectangle in a conspicuous color to indicate a suspected lesion, so the physician watching the returned video can clearly see the colored boxes suggesting possible lesions. The method thus effectively assists the doctor in identifying lesions in the video frames. A nodule detection module processes each frame in real time; key frames are extracted, their nodule properties are judged, and their image feature vectors are combined with the patient's structured data vectors to obtain a comprehensive data analysis conclusion for the patient. Compared with the existing practice of relying on the doctor's experience, the detection method and apparatus of the invention effectively improve diagnostic efficiency and accuracy.
Fig. 7 illustrates an apparatus for lesion detection in ultrasound images using computer vision techniques, according to an exemplary embodiment. Referring to figs. 7 and 8, the apparatus implements the lesion detection method described above and comprises an ultrasound workstation and a background computer system in which a lesion detection device is disposed. The apparatus further includes:
an ultrasound video signal acquisition module, configured to acquire the video signal directly from the medical ultrasound equipment to obtain ultrasound images and to transmit the signal simultaneously to the ultrasound workstation and the background computer system. In this embodiment, a video splitter acquires the video signal directly from the medical ultrasound equipment; one path is transmitted to the ultrasound workstation for original video display, and the other path is transmitted to the lesion detection device.
The lesion detection device detects lesions in each frame of the ultrasound image in real time, treating the various lesions in the ultrasound image as foreground and normal tissue as background. Detected lesion areas are marked, each marked area being a suspected lesion area, and the processed video frames containing the marks are returned and displayed in real time. As shown in fig. 7, the auxiliary diagnosis result is displayed on the display together with the processed video frames containing the marks. In this embodiment a red rectangular frame is used for marking; the red rectangle is provided only as an example and is not limiting.
The comprehensive diagnosis device uses the marks (rectangular frames) obtained by lesion detection to automatically acquire, from the original ultrasound images in the lesion detection device, screenshots of all suspected lesions produced by the current scan, the screenshots being ordered by scan generation time;
the comprehensive diagnosis device is provided with an image screening and classification module, which secondarily confirms the suspected lesions detected by the lesion detection module, distinguishes whether each is a real lesion, and accurately performs a secondary classification of the confirmed lesions. It should be added that the secondary classification here is driven mainly by clinical needs, for example distinguishing benign from malignant nodules, or carcinoma in situ from infiltrating carcinoma.
A comprehensive judgment module reads the scanning information of the ultrasound equipment and additional background information about the patient (such as height, weight, sex, medical history, and test results) and, combining all images confirmed as lesions in the scan, performs a comprehensive judgment to obtain the lesion detection result.
It should be added that, in clinical practice, one cause of missed diagnoses by the sonographer is failing to notice a lesion that appears only briefly on screen. When only a few frames of the scanned video contain the lesion, the physician may fail to detect it. Therefore, a lesion detection model is constructed to process the ultrasound video data directly and to give the physician a special prompt whenever a lesion appears. This is the first problem the invention addresses.
To solve this first problem, a set of devices is designed in which the video signal is acquired directly from the medical ultrasound equipment and transmitted simultaneously to the ultrasound workstation and the background computer system. The apparatus further comprises equipment for running the lesion detection model.
Lesion detection must prioritize processing speed: the image processing speed of the lesion detection model is faster than the refresh rate of the real-time video data, and the model runs on a dedicated lesion detection device to guarantee real-time processing.
In the lesion detection step, the various lesions in the ultrasound image are taken as the foreground and normal tissue as the background; this step only needs to separate foreground from background. Owing to the particularities of the ultrasound examination scenario, the number of possible lesions in a single ultrasound image is very limited. Therefore, to increase speed, the maximum number of detectable foreground regions is set to 5; that is, in one ultrasound image the lesion detection model need only present at most 5 possible suspected lesion areas.
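The cap on foreground regions can be sketched as a simple post-processing step on the detector's output; the `(x0, y0, x1, y1, score)` tuple layout is an assumption for illustration, since the patent does not fix a detection format.

```python
def cap_detections(detections, max_foreground=5):
    """Keep at most `max_foreground` suspected-lesion regions per frame.
    Each detection is assumed to be an (x0, y0, x1, y1, score) tuple;
    the highest-scoring boxes are retained."""
    ranked = sorted(detections, key=lambda d: d[4], reverse=True)
    return ranked[:max_foreground]
```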
While performing real-time video processing, the ultrasound detection model must also return the processed result in real time as a video stream derived from the original video. The model processes each frame and frames any detected lesion area with a rectangle in a conspicuous color to indicate to the physician the presence of a suspected lesion; while viewing the returned video, the physician can readily notice the colored box suggesting a possible lesion.
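Overlaying the colored rectangle on a frame can be done with plain array slicing; this numpy-only sketch stands in for whatever drawing routine the device actually uses, and the red color and 2-pixel thickness are illustrative defaults.

```python
import numpy as np

def mark_lesion(frame, box, color=(255, 0, 0), thickness=2):
    """Overlay a colored rectangle (default red) on an H x W x 3 frame
    so the suspected lesion area is visible in the returned video."""
    out = frame.copy()
    x0, y0, x1, y1 = box
    t = thickness
    out[y0:y1, x0:x0 + t] = color      # left edge
    out[y0:y1, x1 - t:x1] = color      # right edge
    out[y0:y0 + t, x0:x1] = color      # top edge
    out[y1 - t:y1, x0:x1] = color      # bottom edge
    return out
```

Working on a copy keeps the original frame intact for the screenshot step downstream.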
As a preferred implementation, a nodule key frame extraction module and a key frame nodule property judgment module are further arranged in the comprehensive judgment module; wherein
the nodule detection module processes each frame of the image in real time, detects whether the frame contains a nodule and, if so, records the coordinates of the nodule's circumscribed rectangle; during this process, the processing results of all video frames are recorded;
key frames are then extracted from these results: first, the center-point distances computed from the coordinates of consecutive circumscribed rectangles are used to group the detections into the several nodules present in the video; then, among all circumscribed rectangles detected for each nodule, the frame whose rectangle has the largest diagonal is selected as that nodule's key frame.
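The grouping-and-selection procedure above can be sketched as follows; the greedy center-distance association and the 20-pixel threshold are illustrative assumptions, since the patent specifies neither the association rule nor a threshold value.

```python
import math

def extract_keyframes(frame_boxes, same_nodule_dist=20.0):
    """frame_boxes: list of (frame_idx, (x0, y0, x1, y1)) detections in
    temporal order. Consecutive boxes whose center-to-center distance
    stays within `same_nodule_dist` are treated as the same nodule; for
    each nodule, the frame whose circumscribed rectangle has the largest
    diagonal is returned as the key frame."""
    nodules = []  # each entry: list of (frame_idx, box) for one nodule
    for idx, box in frame_boxes:
        x0, y0, x1, y1 = box
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        if nodules:
            _, (px0, py0, px1, py1) = nodules[-1][-1]
            pcx, pcy = (px0 + px1) / 2, (py0 + py1) / 2
            if math.hypot(cx - pcx, cy - pcy) <= same_nodule_dist:
                nodules[-1].append((idx, box))
                continue
        nodules.append([(idx, box)])

    def diagonal(b):
        return math.hypot(b[2] - b[0], b[3] - b[1])

    return [max(track, key=lambda fb: diagonal(fb[1]))[0] for track in nodules]
```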
the key frame nodule property judgment module judges the properties of the nodules in the extracted key frame images and extracts their image feature vectors; the key frame nodule property judgment module is configured to:
input the processed key frame images into the key frame nodule property judgment module to judge the nodule properties; and
obtain an image feature vector for the lesion from the key frame images, wherein
a single lesion can yield several key frame images, and an image feature vector can be extracted from each image; the mean and the standard deviation of the feature vectors of the several key frame images are taken and concatenated to obtain the final image feature vector for the lesion.
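The mean-and-standard-deviation combination described above is a two-line operation in numpy; this sketch assumes only that all key frame feature vectors of a lesion have the same length.

```python
import numpy as np

def lesion_feature_vector(keyframe_vectors):
    """Combine the per-keyframe image feature vectors of one lesion:
    element-wise mean and standard deviation, concatenated into a
    single final image feature vector for the lesion."""
    stacked = np.stack(keyframe_vectors)
    return np.concatenate([stacked.mean(axis=0), stacked.std(axis=0)])
```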
In this embodiment, the key frame nodule property judgment step further comprises a feature extraction and transformation part, an image feature vector part, and a solver part;
the preprocessed ultrasound image is input into the feature extraction and transformation part, which is a convolutional neural network model; after transformation by this part, the image feature vector part converts the ultrasound image matrix data into a one-dimensional vector, and a classification solver or a regression solver takes the image feature vector as input and combines it with the feature vector of the patient's structured data to obtain the final output result.
Further details of the invention are developed below.
Image analysis with a single model can hardly guarantee both speed and accuracy, and the lesion detection process cannot classify lesions accurately while pursuing detection speed. Therefore, a separate process must be constructed for accurate lesion classification.
To solve this second problem, a separate comprehensive diagnosis device is required to run a dedicated image classification model. This computing device automatically obtains from the lesion detection device screenshots of all suspected lesions from the current scan (captured from the original images using the rectangular boxes obtained by lesion detection), the screenshots being ordered by scan generation time.
The image classification model first distinguishes whether each lesion is real, secondarily confirming the results of the lesion detection step, and then accurately classifies the confirmed lesions. For accurate classification it must read the scanning information provided by the ultrasound workstation, for example the scanning depth and the probe position.
The image classification model also obtains the patient's background information, such as age, sex, and medical history, from the ultrasound workstation and, combining all images confirmed as lesions in the scan, performs a comprehensive judgment to finally obtain a comprehensive diagnosis result for the scan.
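One way to digitize the patient's background information and append it after the image feature vector, as the comprehensive judgment requires, could look like this; the field set and the encoding (e.g. sex as 0/1) are assumptions for illustration, not part of the disclosure.

```python
import numpy as np

def comprehensive_vector(image_feature_vec, patient):
    """Digitize hypothetical structured patient fields and concatenate
    them after the image feature vector, producing the comprehensive
    feature vector fed to the final judgment."""
    structured = np.array([
        float(patient["age"]),
        1.0 if patient["sex"] == "F" else 0.0,   # assumed binary encoding
        float(patient["height_cm"]),
        float(patient["weight_kg"]),
    ])
    return np.concatenate([np.asarray(image_feature_vec, dtype=float),
                           structured])
```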
In this embodiment, the comprehensive judgment module is provided with a nodule detection module and a nodule key frame extraction module; the nodule detection module processes each frame of the image in real time, detects whether the frame contains a nodule and, if so, records the coordinates of the nodule's circumscribed rectangle; during this process, the processing results of all video frames are recorded;
the nodule key frame extraction module obtains the several nodules detected in the video from the nodule center-point distances computed from the coordinates of consecutive circumscribed rectangles, and, among all circumscribed rectangles detected for each nodule, selects the frame whose rectangle has the largest diagonal as the key frame.
On the other hand, the comprehensive judgment module is also provided with a nodule classification module, an image feature vector acquisition module, and a diagnosis conclusion output module; wherein
the nodule classification module inputs the processed key frame images into a trained nodule classification network and judges the nodule properties according to the specific purpose of the classification network;
the image feature vector acquisition module is configured such that the last layer of the nodule classification module is the output layer and the penultimate layer is a fully connected layer used to obtain a feature vector for the lesion image; while the network performs classification, the penultimate layer yields the feature vector for the lesion. Several key frames yield several feature vectors, and the mean and standard deviation of all the feature vectors are taken to represent the final image feature vector;
the diagnosis conclusion output module digitizes the patient's identity information, appends it after the feature vector obtained above, and processes the resulting comprehensive feature vector to finally obtain a comprehensive diagnosis conclusion.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments of the method and will not be elaborated here.
It is understood that the same or similar parts of the above embodiments may be cross-referenced, and content not detailed in one embodiment may be found in the related descriptions of other embodiments.
Although embodiments of the present application have been shown and described above, it is understood that they are exemplary and are not to be construed as limiting the present application; within the scope of the present application, those of ordinary skill in the art may make variations, modifications, substitutions, and alterations to the above embodiments.
Claims (10)
1. A method for lesion detection in ultrasound images using computer vision techniques, comprising the steps of:
acquiring a video signal directly from medical ultrasound equipment to obtain ultrasound images;
detecting lesions in each frame of the ultrasound image in real time by separating foreground from background, the various lesions in the ultrasound image being taken as the foreground and normal tissue as the background; and marking each detected lesion area, the marked area being a suspected lesion area;
returning and displaying, in real time, the processed video frames containing the marks while the video signal is processed in real time;
acquiring, using the marks obtained by lesion detection, screenshots of all suspected lesions produced by the current scan from the original ultrasound images, the screenshots being ordered by scan generation time;
judging whether the lesion in each suspected-lesion screenshot is a real lesion, and classifying the confirmed lesions; and
reading the scanning information of the ultrasound equipment and the patient's background information, and performing comprehensive judgment processing in combination with all images confirmed as lesions in the scan.
2. The method for lesion detection in ultrasound images using computer vision techniques according to claim 1, wherein:
the image processing speed of the real-time per-frame lesion detection is faster than the refresh rate of the real-time video data; and, during real-time per-frame lesion detection, the maximum number of foreground regions detected in a single ultrasound image is set to be at most 5.
3. The method according to claim 1, wherein the comprehensive judgment processing step comprises nodule key frame extraction, which comprises the steps of:
1) inputting the ultrasound video signal into a nodule detection module in the form of consecutive frames;
2) the nodule detection module processing each frame of the image in real time, detecting whether the frame contains a nodule and, if so, recording the coordinates of the nodule's circumscribed rectangle; during this process, recording the processing results of all video frames; and
3) extracting the key frames from the processing results:
first, obtaining the several nodules detected in the video from the nodule center-point distances computed from the coordinates of consecutive circumscribed rectangles; and, among all circumscribed rectangles detected for each nodule, selecting the frame whose rectangle has the largest diagonal as the key frame.
4. The method for lesion detection in ultrasound images using computer vision techniques according to claim 3, wherein, in the comprehensive judgment step, nodule property judgment is performed on the extracted key frames, the key frame nodule property judgment comprising:
resetting to black the part of the key frame outside the nodule's circumscribed rectangle;
judging the nodule properties using the processed key frame images; and
obtaining an image feature vector for the lesion from the key frame images, wherein
a single lesion can yield several key frame images and an image feature vector can be extracted from each image; the mean and the standard deviation of the feature vectors of the several key frame images are taken and concatenated to obtain the final image feature vector for the lesion.
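The masking step recited above (resetting everything outside the nodule's circumscribed rectangle to black) can be sketched with plain array slicing; the rectangle layout `(x0, y0, x1, y1)` is an illustrative assumption.

```python
import numpy as np

def mask_outside_rect(frame, rect):
    """Reset everything outside the nodule's circumscribed rectangle to
    black, keeping only the region of interest for property judgment."""
    x0, y0, x1, y1 = rect
    out = np.zeros_like(frame)
    out[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return out
```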
5. The method for lesion detection in ultrasound images using computer vision techniques according to claim 4, wherein: for a single lesion, after the feature vector of the structured data is obtained from the patient's structured data, it is concatenated with the image feature vector extracted from the single image to obtain a comprehensive data analysis conclusion for the patient;
and, for multiple lesions, the feature vector of the structured data is obtained from the patient's structured data, and the feature vectors and standard deviation vectors of all lesions are concatenated with the structured feature vector to obtain a comprehensive data analysis conclusion for the patient.
6. The method for lesion detection in ultrasound images using computer vision techniques according to claim 5, wherein the step of obtaining an image feature vector for the lesion from the key frame images further comprises:
preprocessing the input ultrasound image, the input ultrasound image being a two-dimensional or three-dimensional matrix;
the key frame nodule property judgment step being provided with a feature extraction and transformation part, an image feature vector part, and a solver part;
the feature extraction and transformation part being a convolutional neural network model and, after transformation by this part, the image feature vector part converting the ultrasound image matrix data into a one-dimensional vector, which is the feature vector of the original image; and
obtaining the final output result using the solver part with the image feature vector as input, combined with the feature vector of the patient's structured data.
7. An apparatus for lesion detection in ultrasound images using computer vision techniques, the apparatus being adapted to implement the lesion detection method of any one of claims 1 to 6 and comprising:
an ultrasound video signal acquisition module, configured to acquire a video signal directly from medical ultrasound equipment to obtain ultrasound images;
a lesion detection device, configured to detect lesions in each frame of the ultrasound image in real time, the various lesions in the ultrasound image being taken as the foreground and normal tissue as the background, to mark each detected lesion area, the marked area being a suspected lesion area, and to return and display, in real time, the processed video frames containing the marks;
a comprehensive diagnosis device, configured to automatically acquire, using the marks obtained by the lesion detection device, screenshots of all suspected lesions produced by the current scan from the original ultrasound images in the lesion detection device, the screenshots being ordered by scan generation time;
the comprehensive diagnosis device running an image screening and classification module, configured to secondarily confirm the suspected lesions detected by the lesion detection module, distinguish whether each is a real lesion, and secondarily classify the confirmed lesions; and
a comprehensive judgment module, configured to read the scanning information of the ultrasound equipment and the patient's background information and to perform comprehensive judgment processing in combination with all images confirmed as lesions in the scan.
8. The apparatus for lesion detection in ultrasound images using computer vision techniques according to claim 7, wherein a nodule key frame extraction module and a key frame nodule property judgment module are provided in the comprehensive judgment module; wherein
the nodule detection module processes each frame of the image in real time, detects whether the frame contains a nodule and, if so, records the coordinates of the nodule's circumscribed rectangle, the processing results of all video frames being recorded during this process;
key frames are extracted from the processing results: first, the nodule key frame extraction module computes the nodule center-point distances from the coordinates of consecutive circumscribed rectangles to obtain the several nodules detected in the video and, among all circumscribed rectangles detected for each nodule, selects the frame whose rectangle has the largest diagonal as the key frame;
the key frame nodule property judgment module judges the properties of the nodules in the extracted key frame images and extracts their image feature vectors, the module being configured to:
input the processed key frame images into the key frame nodule property judgment module to judge the nodule properties; and
obtain an image feature vector for the lesion from the key frame images, wherein a single lesion can yield several key frame images and an image feature vector can be extracted from each image; the mean and the standard deviation of the feature vectors of the several key frame images are taken and concatenated to obtain the final image feature vector for the lesion.
9. The apparatus for lesion detection in ultrasound images using computer vision techniques according to claim 8, further comprising a structured data module for obtaining the feature vector of the patient's structured data, wherein:
for a single lesion, after the feature vector of the structured data is obtained from the patient's structured data, it is concatenated with the image feature vector extracted from the single image to obtain a comprehensive data analysis conclusion for the patient; and
for multiple lesions, the feature vector of the structured data is obtained from the patient's structured data, and the feature vectors and standard deviation vectors of all lesions are concatenated with the structured feature vector to obtain a comprehensive data analysis conclusion for the patient.
10. The apparatus for lesion detection in ultrasound images using computer vision techniques according to claim 9, wherein the key frame nodule property judgment further includes a feature extraction and transformation part, an image feature vector part, and a solver part;
the preprocessed ultrasound image is input into the feature extraction and transformation part, which is a convolutional neural network model; after transformation by this part, the image feature vector part converts the ultrasound image matrix data into a one-dimensional vector, and the solver part takes the image feature vector as input and combines it with the feature vector of the patient's structured data to obtain the final output result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010029034.9A CN111227864B (en) | 2020-01-12 | 2020-01-12 | Device for detecting focus by using ultrasonic image and computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111227864A true CN111227864A (en) | 2020-06-05 |
CN111227864B CN111227864B (en) | 2023-06-09 |
Family
ID=70861705
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010029034.9A Active CN111227864B (en) | 2020-01-12 | 2020-01-12 | Device for detecting focus by using ultrasonic image and computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111227864B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112446862A (en) * | 2020-11-25 | 2021-03-05 | 北京医准智能科技有限公司 | Dynamic breast ultrasound video full-focus real-time detection and segmentation device and system based on artificial intelligence and image processing method |
CN112641466A (en) * | 2020-12-31 | 2021-04-13 | 北京小白世纪网络科技有限公司 | Ultrasonic artificial intelligence auxiliary diagnosis method and device |
CN112687387A (en) * | 2020-12-31 | 2021-04-20 | 北京小白世纪网络科技有限公司 | Artificial intelligence auxiliary diagnosis system and diagnosis method |
CN112766066A (en) * | 2020-12-31 | 2021-05-07 | 北京小白世纪网络科技有限公司 | Method and system for processing and displaying dynamic video stream and static image |
CN112863647A (en) * | 2020-12-31 | 2021-05-28 | 北京小白世纪网络科技有限公司 | Video stream processing and displaying method, system and storage medium |
CN112862752A (en) * | 2020-12-31 | 2021-05-28 | 北京小白世纪网络科技有限公司 | Image processing display method, system electronic equipment and storage medium |
CN113344854A (en) * | 2021-05-10 | 2021-09-03 | 深圳瀚维智能医疗科技有限公司 | Breast ultrasound video-based focus detection method, device, equipment and medium |
CN113344855A (en) * | 2021-05-10 | 2021-09-03 | 深圳瀚维智能医疗科技有限公司 | Method, device, equipment and medium for reducing false positive rate of breast ultrasonic lesion detection |
CN113379693A (en) * | 2021-06-01 | 2021-09-10 | 大连东软教育科技集团有限公司 | Capsule endoscopy key focus image detection method based on video abstraction technology |
CN113616945A (en) * | 2021-08-13 | 2021-11-09 | 湖北美睦恩医疗设备有限公司 | Detection method based on focused ultrasound image identification and beauty and body care device |
CN114664410A (en) * | 2022-03-11 | 2022-06-24 | 北京医准智能科技有限公司 | Video-based focus classification method and device, electronic equipment and medium |
CN117297554A (en) * | 2023-11-16 | 2023-12-29 | 哈尔滨海鸿基业科技发展有限公司 | Control system and method for lymphatic imaging device |
Patent Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050207630A1 (en) * | 2002-02-15 | 2005-09-22 | The Regents Of The University Of Michigan Technology Management Office | Lung nodule detection and classification |
CN2868229Y (en) * | 2004-12-09 | 2007-02-14 | 林礼务 | Ultrasonic anatomical positioning-marking device |
US20100142786A1 (en) * | 2007-05-17 | 2010-06-10 | Yeda Research & Development Co. Ltd. | Method and apparatus for computer-aided diagnosis of cancer and product |
US20090092300A1 (en) * | 2007-10-03 | 2009-04-09 | Siemens Medical Solutions Usa, Inc. | System and Method for Lesion Detection Using Locally Adjustable Priors |
CN104470443A (en) * | 2012-07-18 | 2015-03-25 | 皇家飞利浦有限公司 | Method and system for processing ultrasonic imaging data |
US20170236271A1 (en) * | 2015-08-06 | 2017-08-17 | Lunit Inc. | Classification apparatus for pathologic diagnosis of medical image, and pathologic diagnosis system using the same |
US20180276821A1 (en) * | 2015-12-03 | 2018-09-27 | Sun Yat-Sen University | Method for Automatically Recognizing Liver Tumor Types in Ultrasound Images |
CN105447872A (en) * | 2015-12-03 | 2016-03-30 | 中山大学 | Method for automatically identifying liver tumor type in ultrasonic image |
CN106780448A (en) * | 2016-12-05 | 2017-05-31 | 清华大学 | A kind of pernicious sorting technique of ultrasonic Benign Thyroid Nodules based on transfer learning Yu Fusion Features |
CN107274402A (en) * | 2017-06-27 | 2017-10-20 | 北京深睿博联科技有限责任公司 | A kind of Lung neoplasm automatic testing method and system based on chest CT image |
CN107680678A (en) * | 2017-10-18 | 2018-02-09 | 北京航空航天大学 | Based on multiple dimensioned convolutional neural networks Thyroid ultrasound image tubercle auto-check system |
CN108665456A (en) * | 2018-05-15 | 2018-10-16 | 广州尚医网信息技术有限公司 | The method and system that breast ultrasound focal area based on artificial intelligence marks in real time |
CN109685102A (en) * | 2018-11-13 | 2019-04-26 | 平安科技(深圳)有限公司 | Breast lesion image classification method, device, computer equipment and storage medium |
CN109727243A (en) * | 2018-12-29 | 2019-05-07 | 无锡祥生医疗科技股份有限公司 | Breast ultrasound image recognition analysis method and system |
CN110223287A (en) * | 2019-06-13 | 2019-09-10 | 首都医科大学北京友谊医院 | A method of early diagnosing mammary cancer rate can be improved |
CN110349141A (en) * | 2019-07-04 | 2019-10-18 | 复旦大学附属肿瘤医院 | A kind of breast lesion localization method and system |
CN110648344A (en) * | 2019-09-12 | 2020-01-03 | 电子科技大学 | Diabetic retinopathy classification device based on local focus characteristics |
Non-Patent Citations (6)
Title |
---|
Wang, Y; Jia, LQ; Wang, XM; Fu, LB; Liu, JB; Qian, LX: "Diagnostic performance of 2-D shear wave elastography for differentiation of hepatoblastoma and hepatic hemangioma in children under 3 years of age", Ultrasound in Medicine and Biology, 4 June 2019 (2019-06-04), pages 1397 - 1406 * |
卜云芸, 卢树强, 庞浩, 罗昶, 钱林学: "Feasibility study of automatic recognition technology for benign-malignant classification of breast nodules in ultrasound images", Chinese Journal of Medical Ultrasound (Electronic Edition), 15 June 2019 (2019-06-15), pages 779 - 782 * |
谭利, 李彬, 田联房, 王立非, 陈萍: "Recognition algorithm for tiny pulmonary nodules based on multi-feature fusion and tracking", Journal of Biomedical Engineering, 25 June 2011 (2011-06-25), pages 437 - 441 * |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112446862A (en) * | 2020-11-25 | 2021-03-05 | 北京医准智能科技有限公司 | Artificial-intelligence-based device and system for real-time detection and segmentation of all lesions in dynamic breast ultrasound video, and image processing method |
CN112446862B (en) * | 2020-11-25 | 2021-08-10 | 北京医准智能科技有限公司 | Artificial-intelligence-based device and system for real-time detection and segmentation of all lesions in dynamic breast ultrasound video, and image processing method |
CN112766066A (en) * | 2020-12-31 | 2021-05-07 | 北京小白世纪网络科技有限公司 | Method and system for processing and displaying dynamic video stream and static image |
CN112687387A (en) * | 2020-12-31 | 2021-04-20 | 北京小白世纪网络科技有限公司 | Artificial intelligence auxiliary diagnosis system and diagnosis method |
CN112863647A (en) * | 2020-12-31 | 2021-05-28 | 北京小白世纪网络科技有限公司 | Video stream processing and displaying method, system and storage medium |
CN112862752A (en) * | 2020-12-31 | 2021-05-28 | 北京小白世纪网络科技有限公司 | Image processing and display method, system, electronic equipment, and storage medium |
CN112641466A (en) * | 2020-12-31 | 2021-04-13 | 北京小白世纪网络科技有限公司 | Ultrasonic artificial intelligence auxiliary diagnosis method and device |
CN113344854A (en) * | 2021-05-10 | 2021-09-03 | 深圳瀚维智能医疗科技有限公司 | Breast ultrasound video-based lesion detection method, device, equipment, and medium |
CN113344855A (en) * | 2021-05-10 | 2021-09-03 | 深圳瀚维智能医疗科技有限公司 | Method, device, equipment, and medium for reducing the false-positive rate of breast ultrasound lesion detection |
CN113379693A (en) * | 2021-06-01 | 2021-09-10 | 大连东软教育科技集团有限公司 | Key-lesion image detection method for capsule endoscopy based on video summarization technology |
CN113379693B (en) * | 2021-06-01 | 2024-02-06 | 东软教育科技集团有限公司 | Key-lesion image detection method for capsule endoscopy based on video summarization technology |
CN113616945A (en) * | 2021-08-13 | 2021-11-09 | 湖北美睦恩医疗设备有限公司 | Detection method based on focused-ultrasound image recognition, and beauty and body-care device |
CN113616945B (en) * | 2021-08-13 | 2024-03-08 | 湖北美睦恩医疗设备有限公司 | Detection method based on focused-ultrasound image recognition, and beauty and body-care device |
CN114664410A (en) * | 2022-03-11 | 2022-06-24 | 北京医准智能科技有限公司 | Video-based lesion classification method and device, electronic equipment, and medium |
CN117297554A (en) * | 2023-11-16 | 2023-12-29 | 哈尔滨海鸿基业科技发展有限公司 | Control system and method for lymphatic imaging device |
Also Published As
Publication number | Publication date |
---|---|
CN111227864B (en) | 2023-06-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111227864B (en) | Apparatus for lesion detection using ultrasound images and computer vision | |
US11101033B2 (en) | Medical image aided diagnosis method and system combining image recognition and report editing | |
CN111214255B (en) | Computer-aided method for medical ultrasound images | |
US11633169B2 (en) | Apparatus for AI-based automatic ultrasound diagnosis of liver steatosis and remote medical diagnosis method using the same | |
US11937973B2 (en) | Systems and media for automatically diagnosing thyroid nodules | |
CN113781439B (en) | Ultrasound video lesion segmentation method and device | |
CN102056547B (en) | Medical image processing device and method for processing medical image | |
CN112086197B (en) | Breast nodule detection method and system based on medical ultrasound | |
KR20190061041A (en) | Image processing | |
KR102531400B1 (en) | Artificial intelligence-based colonoscopy diagnosis supporting system and method | |
CN114782307A (en) | Enhanced CT image colorectal cancer staging auxiliary diagnosis system based on deep learning | |
WO2020027228A1 (en) | Diagnostic support system and diagnostic support method | |
CN111862090B (en) | Method and system for esophageal cancer preoperative management based on artificial intelligence | |
CN113855079A (en) | Real-time detection and breast disease auxiliary analysis method based on breast ultrasound images | |
US20230206435A1 (en) | Artificial intelligence-based gastroscopy diagnosis supporting system and method for improving gastrointestinal disease detection rate | |
CN112950534A (en) | Portable ultrasonic pneumonia auxiliary diagnosis system based on artificial intelligence | |
CN111242921A (en) | Method and system for automatically updating medical ultrasonic image auxiliary diagnosis system | |
CN113298773A (en) | Heart view identification and left ventricle detection device and system based on deep learning | |
KR20220122312A (en) | Artificial intelligence-based gastroscopy diagnosis supporting system and method | |
US20190333399A1 (en) | System and method for virtual reality training using ultrasound image data | |
CN117575999B (en) | Lesion prediction system based on fluorescent labeling technology | |
KR102132564B1 (en) | Apparatus and method for diagnosing lesion | |
CN114391878B (en) | Ultrasonic imaging equipment | |
Leibetseder et al. | Post-surgical Endometriosis Segmentation in Laparoscopic Videos | |
CN117744026A (en) | Multi-modal information fusion method and tumor malignancy probability recognition system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||