CN117078545A - Tumor ultrasonic image enhancement method, equipment and medium - Google Patents


Info

Publication number
CN117078545A
CN117078545A
Authority
CN
China
Prior art keywords
tumor
image
detected
ultrasonic
ultrasonic images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202311044978.3A
Other languages
Chinese (zh)
Inventor
孔德兴
温建明
王晓林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haiyan Nanbei Lake Medical Artificial Intelligence Research Institute
Zhejiang Normal University CJNU
Original Assignee
Haiyan Nanbei Lake Medical Artificial Intelligence Research Institute
Zhejiang Normal University CJNU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haiyan Nanbei Lake Medical Artificial Intelligence Research Institute, Zhejiang Normal University CJNU filed Critical Haiyan Nanbei Lake Medical Artificial Intelligence Research Institute
Priority to CN202311044978.3A
Publication of CN117078545A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Abstract

The invention provides a tumor ultrasonic image enhancement method, equipment and medium, comprising the following steps: inputting an ultrasonic image to be detected into a pre-trained tumor detection model, wherein the tumor detection model acquires first image characteristic information of the ultrasonic image to be detected, and acquires a tumor area to be detected in the ultrasonic image to be detected according to the first image characteristic information; setting the gray values of the pixel points except the tumor region to be detected in the ultrasonic image to be detected to 0 to obtain a region detection image; performing edge detection extraction on the region detection image to obtain a tumor edge image; and superposing the tumor edge image and the ultrasonic image to be detected to obtain an enhanced image to be detected. With this technical scheme, the region where the tumor is located in the ultrasonic image can be automatically extracted and the edges within that region enhanced, making pathological tissues easier to distinguish and effectively assisting diagnosis.

Description

Tumor ultrasonic image enhancement method, equipment and medium
Technical Field
The invention relates to the field of medical imaging, in particular to a tumor ultrasonic image enhancement method, equipment and medium.
Background
Ultrasonic diagnosis is medical diagnosis based on reflected or scattered echo information arising from differences in the acoustic impedance of different human tissues. It is real-time, non-invasive, low-cost and painless, and is widely used in the diagnosis of the ovary, breast and other organs. However, limited by its imaging principle, ultrasonic imaging equipment resolves some tissues poorly, so the accuracy of current ultrasonic diagnosis is only about 70%. In addition, system noise inside the ultrasonic imaging apparatus and complex external electromagnetic interference leave parts of the microstructure insufficiently resolved in the image. Moreover, ultrasonic images must be interpreted by a doctor, yet the human eye has limited ability to resolve gray-scale images, so diagnosticians face difficulty and uncertainty in judging whether ovarian lesion tissue is benign or malignant and in estimating the probability of malignancy. In the early stage of tumor lesions, the difference in acoustic impedance between lesioned and normal tissue is small, so image edges are blurred, which hinders the identification of early lesion tissue by doctors.
Disclosure of Invention
In order to overcome the technical defects, the invention aims to provide a tumor ultrasonic image enhancement method, equipment and medium.
The invention discloses a tumor ultrasonic image enhancement method, which comprises the following steps:
s100, inputting an ultrasonic image to be detected into a pre-trained tumor detection model, wherein the tumor detection model acquires first image characteristic information of the ultrasonic image to be detected, and acquires a tumor area to be detected in the ultrasonic image to be detected according to the first image characteristic information;
s200, setting the gray value of the pixels except the tumor region to be detected in the ultrasonic image to be detected to 0, so as to obtain a region detection image;
s300, performing edge detection extraction on the region detection image to obtain a tumor edge image;
s400, superposing the tumor edge image and the ultrasonic image to be detected in S100 to obtain an enhanced image to be detected.
Further, the step S300 includes:
s310, after noise removal is carried out on the region detection image by Gaussian filtering, calculating and obtaining gradient values and gradient directions of all pixel points in the region detection image according to gray values of all pixel points in the region detection image to obtain a gradient image;
s320, adopting a non-maximum suppression algorithm to the gradient image to obtain a non-maximum suppression image;
S340, screening the non-maximum suppression image by adopting a double-threshold algorithm to obtain a tumor edge image.
Further, the tumor detection model is pre-trained by the following steps:
s110, acquiring a plurality of training ultrasonic images and a plurality of test ultrasonic images, wherein a training tumor area exists in any one of the plurality of training ultrasonic images, and a test tumor area exists in any one of the plurality of test ultrasonic images;
s120, inputting the plurality of training ultrasonic images and the plurality of training tumor areas into a tumor detection model to be trained in a one-to-one correspondence manner, so that the tumor detection model to be trained learns second image characteristic information corresponding to any one of the plurality of training ultrasonic images, and learns the relation between the plurality of second image characteristic information and the plurality of training tumor areas;
s130, inputting the plurality of test ultrasonic images into the tumor detection model to be trained, predicting any one of the plurality of test ultrasonic images by the tumor detection model to be trained, and outputting a predicted tumor area corresponding to any one of the plurality of test ultrasonic images;
s140, respectively comparing the predicted tumor area corresponding to any one of the plurality of test ultrasonic images with the test tumor area, obtaining a loss function of the tumor detection model to be trained, and carrying out back propagation so as to carry out parameter optimization on the tumor detection model to be trained, and recording the accuracy of the tumor detection model to be trained;
repeating the steps S120-S140 until the tumor detection model to be trained converges, and obtaining the tumor detection model.
Further, the step S110 includes:
s111, acquiring a plurality of training ultrasonic images and a plurality of testing ultrasonic images, and resampling the plurality of training ultrasonic images and the plurality of testing ultrasonic images to unify pixel pitches of the plurality of training ultrasonic images and the plurality of testing ultrasonic images;
s112, marking the training tumor area in any one of the plurality of training ultrasonic images, and marking the testing tumor area in any one of the plurality of testing ultrasonic images.
Preferably, the tumor detection model is a Swin Transformer model.
The invention also discloses an electronic device comprising a memory and a processor, wherein the memory stores computer executable instructions, and when the instructions are executed by the processor, the electronic device is caused to implement the tumor ultrasound image enhancement method.
The invention also discloses a computer readable storage medium, on which a computer program is stored which, when run on a computer, causes the computer to perform the aforementioned tumor ultrasound image enhancement method.
After the technical scheme is adopted, compared with the prior art, the method has the following beneficial effects: the tumor area is automatically extracted, the edges of the image in that area are enhanced, pathological tissues are easier to distinguish, and diagnosis is effectively assisted.
Drawings
FIG. 1 is a schematic flow chart of a method for enhancing tumor ultrasound images disclosed by the invention;
fig. 2 is a schematic structural diagram of a tumor detection model in the method for enhancing tumor ultrasound images.
Detailed Description
Advantages of the invention are further illustrated in the following description, taken in conjunction with the accompanying drawings and detailed description.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when" or "upon" or "in response to a determination", depending on the context.
In the description of the present invention, it should be understood that the terms "longitudinal," "transverse," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like indicate orientations or positional relationships based on the orientation or positional relationships shown in the drawings, merely to facilitate describing the present invention and simplify the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present invention.
In the description of the present invention, unless otherwise expressly specified and limited, it should be noted that the terms "mounted," "connected," and "coupled" are to be construed broadly: a connection may, for example, be mechanical or electrical, direct or indirect through an intermediary, or an internal communication between two elements. The specific meanings of these terms in the present invention can be understood by those skilled in the art according to the specific circumstances.
In the following description, suffixes such as "module", "component", or "unit" for representing elements are used only for facilitating the description of the present invention, and are not of specific significance per se. Thus, "module" and "component" may be used in combination.
As shown in fig. 1, the invention discloses a tumor ultrasonic image enhancement method, which comprises the following steps:
s100, inputting an ultrasonic image to be detected into a pre-trained tumor detection model, wherein the tumor detection model acquires first image characteristic information of the ultrasonic image to be detected, and acquires a tumor area to be detected in the ultrasonic image to be detected according to the first image characteristic information;
s200, setting the gray value of the pixels except the tumor region to be detected in the ultrasonic image to be detected to 0, so as to obtain a region detection image;
s300, performing edge detection extraction on the region detection image to obtain a tumor edge image;
s400, superposing the tumor edge image and the ultrasonic image to be detected in S100 to obtain an enhanced image to be detected.
Specifically, the tumor detection model learns the first image characteristic information of the ultrasonic image to be detected and automatically acquires the tumor area to be detected; that is, the tumor detection model locates positions where tumor lesion tissue may exist according to the first image characteristic information of the ultrasonic image to be detected. Edge detection is then performed on an image containing only the tumor area to be detected, extracting the edges of the tissues within that area, and the extracted edges are superposed onto the unprocessed ultrasonic image to be detected, so that the edges within the tumor area to be detected are highlighted in the ultrasonic image, assisting the differential diagnosis of clinicians.
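A minimal NumPy sketch of the S100 to S400 flow described above (an illustration only: the `enhance` helper and its `box` argument are hypothetical stand-ins for the tumor detection model's output in S100, and the edge step is reduced to a simple gradient threshold rather than the full S300 procedure):

```python
import numpy as np

def enhance(image: np.ndarray, box: tuple) -> np.ndarray:
    """Sketch of S100-S400: mask everything outside the detected tumor
    region (S200), extract edges inside it (simplified S300), and
    superpose the edge map on the untouched input (S400).
    `box` = (row0, row1, col0, col1) stands in for the detector output."""
    r0, r1, c0, c1 = box
    region = np.zeros_like(image)               # S200: gray value 0 outside
    region[r0:r1, c0:c1] = image[r0:r1, c0:c1]
    gy, gx = np.gradient(region.astype(float))  # simplified S300
    edges = np.hypot(gx, gy)
    if edges.max() > 0:
        edges = (edges > edges.max() * 0.5) * 255.0
    # S400: superpose the edge map on the unprocessed input image
    return np.maximum(image.astype(float), edges).astype(image.dtype)
```

In the patent the tumor region comes from the pre-trained detection model of S100; here it must be supplied by the caller.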
Further, the step S300 includes:
s310, after noise removal is carried out on the region detection image by Gaussian filtering, calculating and obtaining gradient values and gradient directions of all pixel points in the region detection image according to gray values of all pixel points in the region detection image to obtain a gradient image;
s320, adopting a non-maximum suppression algorithm to the gradient image to obtain a non-maximum suppression image;
S340, screening the non-maximum suppression image by adopting a double-threshold algorithm to obtain a tumor edge image.
Specifically, when performing edge detection extraction, noise that may exist in the region detection image is filtered out first. A filtering operator is generated from the Gaussian formula, and the gray values of each pixel to be processed and its neighborhood pixels are convolved with this operator, realizing a weighted average that effectively removes high-frequency noise; the closer a tap of the filtering operator is to the center point, the larger its weight. Gradient computation is then performed on the Gaussian-filtered region detection image: the gradient magnitude and direction of each pixel are calculated to obtain a gradient image. Because the gradient image suffers from uneven edge widths, blurring and misidentification, non-maximum suppression must be applied to eliminate non-edge pixels: points that are local maxima along the gradient direction are kept as edge points, and pixels that are not local maxima have their gray values set to 0, forming thin, accurate single-pixel edges and yielding the non-maximum suppression image. However, this image still contains many false edges, so a double-threshold algorithm is used to detect and connect edges. The idea is to select two thresholds: pixels below the low threshold are judged to be false edges, and pixels above the high threshold are judged to be strong edges; the resulting double-threshold image is taken as the tumor edge image.
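The S310 to S340 procedure described above is essentially the Canny edge-detection pipeline. A compact NumPy sketch (the thresholds `low` and `high` are illustrative, and the final step of connecting weak edges to strong ones is omitted for brevity):

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    return g / g.sum()

def smooth(img, k):
    # S310 part 1: separable Gaussian filtering, rows then columns
    pad = len(k) // 2
    p = np.pad(img, pad, mode='edge')
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, 'valid'), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, k, 'valid'), 0, tmp)

def canny_like(img, low=20.0, high=60.0):
    sm = smooth(img.astype(float), gaussian_kernel())
    gy, gx = np.gradient(sm)
    mag = np.hypot(gx, gy)                      # S310: gradient value
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # S310: gradient direction
    nms = np.zeros_like(mag)                    # S320: non-maximum suppression
    H, W = mag.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:  n1, n2 = mag[i, j-1], mag[i, j+1]
            elif a < 67.5:              n1, n2 = mag[i-1, j+1], mag[i+1, j-1]
            elif a < 112.5:             n1, n2 = mag[i-1, j], mag[i+1, j]
            else:                       n1, n2 = mag[i-1, j-1], mag[i+1, j+1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]           # keep local maxima only
    # S340: double-threshold screening (weak-to-strong edge linking omitted)
    return np.where(nms >= high, 255, np.where(nms >= low, 128, 0))
```

A vertical step edge in a test image produces a thin line of strong-edge pixels along the step, as expected from the suppression step.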
Further, the tumor detection model is pre-trained by the following steps:
s110, acquiring a plurality of training ultrasonic images and a plurality of test ultrasonic images, wherein a training tumor area exists in any one of the plurality of training ultrasonic images, and a test tumor area exists in any one of the plurality of test ultrasonic images;
specifically, based on the part to be used for auxiliary diagnosis of the tumor detection model to be trained, the same part of a plurality of diagnosed patients with tumor is subjected to ultrasonic scanning to obtain a plurality of diagnosed ultrasonic images. For example, the tumor detection model to be trained is to be used for assisting in diagnosis of ultrasound image tumor determination of the ovary after training is completed, so that abdominal cavities of a plurality of diagnosed patients with the ovarian tumor are scanned to obtain a plurality of diagnosed ultrasound images. The plurality of diagnosed ultrasound images is divided into a set of training ultrasound images and a set of test ultrasound images. For example, all diagnosed ultrasound images are divided into a set of training ultrasound images and a set of test ultrasound images in a 4:1 ratio. Marking software is used by a practitioner to delineate the training tumor region within each training ultrasound image, and to delineate the test tumor region within each test ultrasound image, and thereby obtain coordinates of the training tumor region and the test tumor region.
S120, inputting the plurality of training ultrasonic images and the plurality of training tumor areas into a tumor detection model to be trained in a one-to-one correspondence manner, so that the tumor detection model to be trained learns second image characteristic information corresponding to any one of the plurality of training ultrasonic images, and learns the relation between the plurality of second image characteristic information and the plurality of training tumor areas;
s130, inputting the plurality of test ultrasonic images into the tumor detection model to be trained, predicting any one of the plurality of test ultrasonic images by the tumor detection model to be trained, and outputting a predicted tumor area corresponding to any one of the plurality of test ultrasonic images;
s140, respectively comparing the predicted tumor area corresponding to any one of the plurality of test ultrasonic images with the test tumor area, obtaining a loss function of the tumor detection model to be trained, and carrying out back propagation so as to carry out parameter optimization on the tumor detection model to be trained, and recording the accuracy of the tumor detection model to be trained;
repeating the steps S120-S140 until the tumor detection model to be trained converges, and obtaining the tumor detection model.
Specifically, a mean square error can be obtained by comparing the coordinates of the predicted tumor region and the test tumor region of the same test ultrasonic image, and this mean square error is used as the loss function of the tumor detection model to be trained. When repeating steps S120-S140, the accuracy and accumulated loss of each prediction by the tumor detection model to be trained are recorded.
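The coordinate mean squared error used as the loss in S140 can be written directly (a sketch; the exact coordinate format output by the model, here assumed to be corner points such as (x0, y0, x1, y1), is not specified in the patent):

```python
import numpy as np

def box_mse(pred_boxes, true_boxes):
    """Mean squared error between predicted and annotated tumor-region
    coordinates, one box per test ultrasound image (the S140 loss)."""
    pred = np.asarray(pred_boxes, dtype=float)
    true = np.asarray(true_boxes, dtype=float)
    return float(np.mean((pred - true) ** 2))
```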
Further, the step S110 includes:
s111, acquiring a plurality of training ultrasonic images and a plurality of testing ultrasonic images, and resampling the plurality of training ultrasonic images and the plurality of testing ultrasonic images to unify pixel pitches of the plurality of training ultrasonic images and the plurality of testing ultrasonic images;
s112, marking the training tumor area in any one of the plurality of training ultrasonic images, and marking the testing tumor area in any one of the plurality of testing ultrasonic images.
Preferably, the tumor detection model is a Swin Transformer model.
Specifically, referring to fig. 2, the Swin Transformer model is a Transformer model applied to the image field, designed to address the long sequence lengths and high computational complexity caused by large-size image inputs. The Swin Transformer Block module divides the input image into a plurality of non-overlapping windows and applies the self-attention mechanism only within each window. This reduces the amount of computation while still preserving global information and improving the model's effectiveness.
The whole model adopts a hierarchical design with 4 stages in total; each stage gradually reduces the resolution of the input image while enlarging the receptive field layer by layer. At the input, the Patch Partition module segments the training tumor image into a plurality of non-overlapping patches. In Stage 1, each patch is converted into an embedding vector by a Linear Embedding module and then input to a Swin Transformer Block module. Stage 2, Stage 3 and Stage 4 each consist of a Patch Merging module and a plurality of Swin Transformer Block modules. Each Patch Merging module of the invention performs downsampling by convolution, reducing the training tumor image resolution (H×W, where H represents the height of the image and W represents the width) and increasing the number of channels C. Each Swin Transformer Block module computes self-attention within each patch window of the training tumor image and, through the shift operation, computes attention between adjacent windows, finally enabling the model to learn the global feature information of the training tumor image.
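The shape arithmetic of patch partition and patch merging can be illustrated with plain array reshapes (a shape-level sketch only: the real Swin Transformer follows patch merging with a learned linear projection, and this patent's Patch Merging modules downsample by convolution):

```python
import numpy as np

def patch_partition(img, p=4):
    """Split an HxW single-channel image into non-overlapping pxp patches,
    each flattened into a token vector."""
    H, W = img.shape
    assert H % p == 0 and W % p == 0
    return img.reshape(H // p, p, W // p, p).transpose(0, 2, 1, 3).reshape(-1, p * p)

def patch_merging(tokens, grid_h, grid_w):
    """Concatenate each 2x2 neighbourhood of tokens: the token grid halves
    in each direction while the channel dimension C grows to 4C."""
    C = tokens.shape[1]
    t = tokens.reshape(grid_h, grid_w, C)
    merged = np.concatenate(
        [t[0::2, 0::2], t[1::2, 0::2], t[0::2, 1::2], t[1::2, 1::2]], axis=-1)
    return merged.reshape(-1, 4 * C)
```

For a 16×16 image with 4×4 patches, partition yields a 4×4 grid of 16-dimensional tokens, and one merging step yields a 2×2 grid of 64-dimensional tokens, matching the "halve resolution, grow channels" behaviour described above.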
The invention also discloses an electronic device comprising a memory and a processor, wherein the memory stores computer executable instructions, and when the instructions are executed by the processor, the electronic device is caused to implement the tumor ultrasound image enhancement method.
The invention also discloses a computer readable storage medium, on which a computer program is stored which, when run on a computer, causes the computer to perform the aforementioned tumor ultrasound image enhancement method.
It should be noted that the embodiments of the present invention are preferred and not limited in any way, and any person skilled in the art may make use of the above-disclosed technical content to change or modify the same into equivalent effective embodiments without departing from the technical scope of the present invention, and any modification or equivalent change and modification of the above-described embodiments according to the technical substance of the present invention still falls within the scope of the technical scope of the present invention.

Claims (7)

1. A method for enhancing an ultrasound image of a tumor, comprising the steps of:
s100, inputting an ultrasonic image to be detected into a pre-trained tumor detection model, wherein the tumor detection model acquires first image characteristic information of the ultrasonic image to be detected, and acquires a tumor area to be detected in the ultrasonic image to be detected according to the first image characteristic information;
s200, setting the gray values of pixel points except the tumor region to be detected in the ultrasonic image to be detected to 0, so as to obtain a region detection image;
s300, performing edge detection extraction on the region detection image to obtain a tumor edge image;
s400, superposing the tumor edge image and the ultrasonic image to be detected in S100 to obtain an enhanced image to be detected.
2. The method of enhancing tumor ultrasound images according to claim 1, wherein said step S300 comprises:
s310, after noise removal is carried out on the region detection image by Gaussian filtering, calculating and obtaining gradient values and gradient directions of all pixel points in the region detection image according to gray values of all pixel points in the region detection image to obtain a gradient image;
s320, adopting a non-maximum suppression algorithm to the gradient image to obtain a non-maximum suppression image;
S340, screening the non-maximum suppression image by adopting a double-threshold algorithm to obtain a tumor edge image.
3. The method of claim 1, wherein the tumor detection model is pre-trained by:
s110, acquiring a plurality of training ultrasonic images and a plurality of test ultrasonic images, wherein a training tumor area exists in any one of the plurality of training ultrasonic images, and a test tumor area exists in any one of the plurality of test ultrasonic images;
s120, inputting the plurality of training ultrasonic images and the plurality of training tumor areas into a tumor detection model to be trained in a one-to-one correspondence manner, so that the tumor detection model to be trained learns second image characteristic information corresponding to any one of the plurality of training ultrasonic images, and learns the relation between the plurality of second image characteristic information and the plurality of training tumor areas;
s130, inputting the plurality of test ultrasonic images into the tumor detection model to be trained, predicting any one of the plurality of test ultrasonic images by the tumor detection model to be trained, and outputting a predicted tumor area corresponding to any one of the plurality of test ultrasonic images;
s140, respectively comparing the predicted tumor area corresponding to any one of the plurality of test ultrasonic images with the test tumor area, obtaining a loss function of the tumor detection model to be trained, and carrying out back propagation so as to carry out parameter optimization on the tumor detection model to be trained, and recording the accuracy of the tumor detection model to be trained;
repeating the steps S120-S140 until the tumor detection model to be trained converges, and obtaining the tumor detection model.
4. The method of enhancing tumor ultrasound images according to claim 3, wherein said step S110 comprises:
s111, acquiring a plurality of training ultrasonic images and a plurality of testing ultrasonic images, and resampling the plurality of training ultrasonic images and the plurality of testing ultrasonic images to unify pixel pitches of the plurality of training ultrasonic images and the plurality of testing ultrasonic images;
s112, marking the training tumor area in any one of the plurality of training ultrasonic images, and marking the testing tumor area in any one of the plurality of testing ultrasonic images.
5. The method of enhancing tumor ultrasound images according to any one of claims 1 to 4, wherein the tumor detection model is a Swin Transformer model.
6. An electronic device comprising a memory storing computer executable instructions and a processor, which when executed by the processor, cause the electronic device to implement the tumor ultrasound image enhancement method according to any one of claims 1-5.
7. A computer readable storage medium having stored thereon a computer program, characterized in that the program, when run on a computer, causes the computer to perform the tumor ultrasound image enhancement method according to any one of claims 1 to 5.
CN202311044978.3A 2023-08-18 2023-08-18 Tumor ultrasonic image enhancement method, equipment and medium Withdrawn CN117078545A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311044978.3A CN117078545A (en) 2023-08-18 2023-08-18 Tumor ultrasonic image enhancement method, equipment and medium


Publications (1)

Publication Number Publication Date
CN117078545A true CN117078545A (en) 2023-11-17

Family

ID=88709229

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311044978.3A Withdrawn CN117078545A (en) 2023-08-18 2023-08-18 Tumor ultrasonic image enhancement method, equipment and medium

Country Status (1)

Country Link
CN (1) CN117078545A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20231117