WO2021179491A1 - Image processing method, apparatus, computer device and storage medium - Google Patents

Image processing method, apparatus, computer device and storage medium

Info

Publication number
WO2021179491A1
WO2021179491A1 PCT/CN2020/099474 CN2020099474W WO2021179491A1 WO 2021179491 A1 WO2021179491 A1 WO 2021179491A1 CN 2020099474 W CN2020099474 W CN 2020099474W WO 2021179491 A1 WO2021179491 A1 WO 2021179491A1
Authority
WO
WIPO (PCT)
Prior art keywords
breast
image
target
breast image
lesion area
Prior art date
Application number
PCT/CN2020/099474
Other languages
English (en)
French (fr)
Inventor
伍世宾
甘伟焜
张砚博
马捷
黄凌云
刘玉宇
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021179491A1 publication Critical patent/WO2021179491A1/zh

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30068Mammography; Breast

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to an image processing method, device, computer equipment, and storage medium.
  • The medical imaging modalities commonly used for breast cancer diagnosis include X-ray, ultrasound, and magnetic resonance imaging; each modality has its own advantages and disadvantages, and the signs of breast cancer lesions appear differently in each.
  • X-ray images are highly sensitive to calcification and microcalcification and are more suitable for early or very early prediction of breast cancer.
  • Bilateral images taken at different view positions can be used to observe breast asymmetry and architectural distortion, improving the accuracy of benign/malignant judgment; however, the specificity of X-ray imaging for breast masses is not high, especially for heterogeneously dense and extremely dense breasts.
  • Breast magnetic resonance images involve multiple sequences and large data volumes, so fatigue-related misdiagnosis or missed diagnosis may occur during manual reading; moreover, MRI scanning has low efficiency and high cost.
  • the purpose of this application is to provide an image processing method, device, computer equipment, and storage medium to improve the positioning accuracy of the breast lesion area.
  • this application provides an image processing method, including: receiving a target breast image, and detecting whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality or a magnetic resonance imaging modality;
  • when the imaging modality of the target breast image is an X-ray imaging modality, first preliminarily determining whether the target breast image contains a breast lesion area, and if so, obtaining a reference breast image corresponding to the target breast image, and then obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
  • when the imaging modality of the target breast image is an ultrasound imaging modality, first processing the target breast image with a preset fully convolutional network to obtain a pre-segmentation feature map corresponding to the target breast image, and then processing the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
  • when the imaging modality of the target breast image is a magnetic resonance imaging modality, the target breast image is preprocessed first, and then the preprocessed target breast image is segmented with a preset U-Net segmentation model to obtain the position information of the breast lesion area in the target breast image.
  • this application also provides an image processing device, including:
  • the image receiving module is used to receive the target breast image
  • the modality detection module is used to detect whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality or a magnetic resonance imaging modality;
  • the X-ray image processing module is used to preliminarily determine whether the target breast image contains a breast lesion area when the imaging modality of the target breast image is an X-ray imaging modality, and if so, to obtain a reference breast image corresponding to the target breast image, and then to obtain position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
  • the ultrasound image processing module is used to process the target breast image with a preset fully convolutional network when the imaging modality of the target breast image is an ultrasound imaging modality, to obtain a pre-segmentation feature map corresponding to the target breast image, and then to process the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
  • the magnetic resonance image processing module is used to preprocess the target breast image when the imaging modality of the target breast image is a magnetic resonance imaging modality, and then to segment the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
  • this application also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the following steps of the image processing method are implemented:
  • receiving a target breast image, and detecting whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality or a magnetic resonance imaging modality;
  • when the imaging modality of the target breast image is an X-ray imaging modality, preliminarily determining whether the target breast image contains a breast lesion area and, if so, obtaining a reference breast image and then the position information of the breast lesion area from the target and reference breast images;
  • when the imaging modality of the target breast image is an ultrasound imaging modality, obtaining a pre-segmentation feature map with a preset fully convolutional network and then the position information of the breast lesion area with a preset RPN model;
  • when the imaging modality of the target breast image is a magnetic resonance imaging modality, the target breast image is preprocessed first, and then the preprocessed target breast image is segmented with a preset U-Net segmentation model to obtain the position information of the breast lesion area in the target breast image.
  • the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the following steps of the image processing method are realized:
  • receiving a target breast image, and detecting whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality or a magnetic resonance imaging modality;
  • when the imaging modality of the target breast image is an X-ray imaging modality, preliminarily determining whether the target breast image contains a breast lesion area and, if so, obtaining a reference breast image and then the position information of the breast lesion area from the target and reference breast images;
  • when the imaging modality of the target breast image is an ultrasound imaging modality, obtaining a pre-segmentation feature map with a preset fully convolutional network and then the position information of the breast lesion area with a preset RPN model;
  • when the imaging modality of the target breast image is a magnetic resonance imaging modality, the target breast image is preprocessed first, and then the preprocessed target breast image is segmented with a preset U-Net segmentation model to obtain the position information of the breast lesion area in the target breast image.
  • the present application can locate breast lesions in breast images of different modalities; compared with the prior art, which can only process single-modality breast images, the accuracy of lesion localization is improved. In addition, the present application designs different lesion localization procedures for the characteristics of breast images of different imaging modalities, ensuring that the breast lesion area can be accurately located.
  • FIG. 1 is a flowchart of an embodiment of the image processing method of this application
  • FIG. 2 is a structural block diagram of an embodiment of an image processing device according to the present application.
  • FIG. 3 is a hardware architecture diagram of an embodiment of the computer device of this application.
  • This embodiment provides an image processing method, which is suitable for smart medical care, disease risk assessment and other fields. As shown in Figure 1, the method specifically includes the following steps:
  • the target breast image is captured by one of a plurality of preset imaging modalities.
  • the preset imaging modalities may include X-ray imaging modalities, ultrasound imaging modalities, and magnetic resonance imaging modalities.
  • the source of the target breast image may be the hospital's Picture Archiving and Communication System (PACS), Radiology Information System (RIS), or Hospital Information System (HIS). This embodiment can receive breast images from PACS, RIS, and HIS in real time.
  • step S2: Detect whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality or a magnetic resonance imaging modality; execute step S3 when the imaging modality of the target breast image is an X-ray imaging modality, step S4 when it is an ultrasound imaging modality, and step S5 when it is a magnetic resonance imaging modality.
  • the image names of breast images of different imaging modalities may carry different markers, so that the imaging modality of the target breast image can be determined from the marker.
  • the image name of an X-ray imaging modality image is marked with "X-ray";
  • the image name of an ultrasound imaging modality image is marked with "US";
  • the image name of a magnetic resonance imaging modality image is marked with "NMR".
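  • As an illustrative sketch (not part of the original text), the marker-based modality routing described above might look like the following function; the marker strings "X-ray", "US" and "NMR" come from this embodiment, while the function name and return values are hypothetical.

```python
def detect_modality(image_name: str) -> str:
    """Return the imaging modality implied by the marker in the image name.

    Markers follow this embodiment's convention: "X-ray" for the X-ray
    imaging modality, "US" for ultrasound, "NMR" for magnetic resonance.
    """
    if "X-ray" in image_name:
        return "xray"        # route to step S3
    if "US" in image_name:
        return "ultrasound"  # route to step S4
    if "NMR" in image_name:
        return "mri"         # route to step S5
    raise ValueError(f"Unknown imaging modality for image name: {image_name!r}")


# Example: a received image name is routed to the matching processing branch.
print(detect_modality("patient_001_left_CC_X-ray.dcm"))  # -> "xray"
```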
  • step S31 Preliminarily judge whether the target breast image contains a breast lesion area, if so, obtain a reference breast image corresponding to the target breast image, and execute step S32; otherwise, the process ends.
  • the steps of preliminarily determining whether the target breast image contains a breast lesion area are as follows: first, the target breast image is processed with a preset breast gland classification model to obtain the gland type of the breast in the target breast image, for example fatty type, few-gland type, many-gland type, or dense type, where the gland density of the four types in descending order is: dense type > many-gland type > few-gland type > fatty type; then, the corresponding lesion determination threshold is determined according to the obtained gland type.
  • in this embodiment, the lesion determination threshold corresponding to each gland type is set in advance; finally, the target breast image is processed with a preset breast abnormality recognition model to obtain the probability of a breast abnormality in the target breast image.
  • when the probability of a breast abnormality is greater than the aforementioned lesion determination threshold, it is preliminarily determined that the target breast image contains a breast lesion area; otherwise, it is preliminarily determined that it does not.
  • taking a lesion determination threshold of 40% as an example, when the breast abnormality probability output by the breast abnormality recognition model is 45%, since 45% is greater than 40%, it is preliminarily determined that the target breast image contains the breast lesion area.
  • the breast gland classification model adopted in this embodiment is preferably a Pyramidal Residual Network (PyramidNet) model, and the breast abnormality recognition model adopted is preferably a DenseNet (Dense Convolutional Network) model.
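  • A minimal sketch of this preliminary-judgment logic is shown below; the threshold values and the two model callables are placeholders, since the patent only states that each gland type has its own preset threshold.

```python
# Hypothetical per-gland-type lesion-determination thresholds; the patent only
# says each gland type has a preset threshold, so these numbers are made up.
LESION_THRESHOLDS = {
    "fatty": 0.50,
    "few_glands": 0.45,
    "many_glands": 0.40,
    "dense": 0.35,
}

def contains_lesion(image, gland_classifier, abnormality_model) -> bool:
    """Classify the gland type, look up its threshold, then compare the
    predicted breast-abnormality probability against that threshold."""
    gland_type = gland_classifier(image)      # e.g. a PyramidNet-style classifier
    threshold = LESION_THRESHOLDS[gland_type]
    p_abnormal = abnormality_model(image)     # e.g. a DenseNet-style recognizer
    return p_abnormal > threshold
```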
  • when the target breast image is a cranio-caudal (CC) view mammogram, the mediolateral-oblique (MLO) view mammogram corresponding to the target breast image can be obtained as the reference breast image; when the target breast image is an MLO view mammogram, the CC view mammogram corresponding to the target breast image can be obtained as the reference breast image.
  • the target breast image and the reference breast image can also be mammograms of the contralateral breast taken at the same view position.
  • a mammogram (molybdenum-target image) here refers to the image obtained by projecting the two-dimensional image of the breast onto X-ray film or a digital detector, exploiting the physical properties of X-rays and the different density values of human breast tissue.
  • S32 Acquire position information and a benign and malignant recognition result of a breast lesion area in the target breast image according to the target breast image and the reference breast image. Specifically, it can be achieved through the following steps:
  • this step can be implemented with any existing edge detection method, for example the active-contour breast boundary detection method described in [Breast Boundary Detection with Active Contours, I. Balic, P. Goyal, O. Roy, N. Duric].
  • a preset FPN (Feature Pyramid Network) model is used to process the first breast region to obtain the breast feature map of the first breast region, recorded as the first breast feature map; at the same time, the feature pyramid network is used to process the second breast region to obtain the breast feature map of the second breast region, recorded as the second breast feature map.
  • FPN is composed of two paths, bottom-up and top-down.
  • the bottom-up path is the usual feature-extracting convolutional network; here a ResNet is used. This network consists of many convolutional layers: layers producing feature maps of the same size are grouped into one stage, and the spatial size is halved between adjacent stages.
  • the top-down path reconstructs higher-resolution layers from the semantically richer layers. Although the reconstructed layers are semantically rich enough, the target positions are no longer accurate after the down-sampling and up-sampling process. FPN therefore adds lateral connections between the reconstructed layers and the corresponding feature maps to help the detector predict positions better; these lateral connections also act as skip connections (similar to the practice in residual networks).
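  • As a minimal, self-contained sketch of the lateral-connection idea described above (not the patent's exact network), a feature-pyramid head in PyTorch could look like the following; the channel counts mirror typical ResNet stage outputs and are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniFPN(nn.Module):
    """Minimal feature-pyramid head: 1x1 lateral convs plus a top-down path.

    `in_channels` lists the channel counts of the bottom-up (e.g. ResNet)
    stages, ordered from the highest-resolution stage to the lowest one.
    """
    def __init__(self, in_channels=(256, 512, 1024, 2048), out_channels=256):
        super().__init__()
        self.lateral = nn.ModuleList(nn.Conv2d(c, out_channels, 1) for c in in_channels)
        self.smooth = nn.ModuleList(nn.Conv2d(out_channels, out_channels, 3, padding=1)
                                    for _ in in_channels)

    def forward(self, feats):
        # feats: bottom-up feature maps, highest resolution first.
        laterals = [lat(f) for lat, f in zip(self.lateral, feats)]
        # Top-down path: start from the semantically richest (smallest) map
        # and add each upsampled map to the lateral connection below it.
        for i in range(len(laterals) - 2, -1, -1):
            laterals[i] = laterals[i] + F.interpolate(
                laterals[i + 1], size=laterals[i].shape[-2:], mode="nearest")
        return [s(p) for s, p in zip(self.smooth, laterals)]

# Example with dummy ResNet-like stage outputs for a 256x256 breast region.
feats = [torch.randn(1, c, 256 // s, 256 // s)
         for c, s in zip((256, 512, 1024, 2048), (4, 8, 16, 32))]
pyramid = MiniFPN()(feats)   # four maps, all with 256 channels
```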
  • a preset multi-instance learning (MIL) network model is used to process the first breast feature map and the second breast feature map to obtain the position information and the benign/malignant probabilities of the breast lesion area in the target breast image.
  • the multi-instance learning network is an existing weakly supervised learning network.
  • in multi-instance learning, a training sample is a bag composed of multiple instances; the bag carries a concept label, but the instances themselves do not. If a bag contains at least one positive instance, it is a positive bag, otherwise it is a negative bag.
  • compared with supervised learning, the training instances in MIL have no concept labels, unlike supervised learning where all training instances are labelled; compared with unsupervised learning, the training bags in MIL do carry concept labels, unlike unsupervised training samples which carry none.
  • in MIL, one sample (i.e., one bag) contains multiple instances, so samples and instances are in a one-to-many correspondence. Each patch (image block) of the first feature map and the second feature map is taken as an instance, and the first and second feature maps are fed into the MIL network as a bag containing multiple instances, yielding the position information and benign/malignant probabilities of the breast lesion area in the target breast image.
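  • The bag/instance relationship can be illustrated with the toy max-pooling MIL head below; it is a generic sketch of multi-instance scoring, not the specific MIL network of this embodiment, and the instance dimension is an assumption.

```python
import torch
import torch.nn as nn

class MaxPoolingMIL(nn.Module):
    """Toy multi-instance learning head: each patch (instance) receives a
    lesion score, and the bag (the pair of breast feature maps) is labelled
    by its most suspicious instance."""
    def __init__(self, instance_dim=256):
        super().__init__()
        self.instance_scorer = nn.Linear(instance_dim, 1)

    def forward(self, instances):
        # instances: (num_patches, instance_dim) pooled patch features drawn
        # from the first and second breast feature maps.
        scores = torch.sigmoid(self.instance_scorer(instances)).squeeze(-1)
        bag_prob, top_idx = scores.max(dim=0)   # bag label follows the top instance
        return bag_prob, top_idx                # malignancy probability + rough location

# Example: 32 patch embeddings from the two feature maps form one bag.
bag_prob, top_idx = MaxPoolingMIL()(torch.randn(32, 256))
```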
  • step S32 can also be implemented through the following steps:
  • the Faster R-CNN model mainly includes four parts: Conv layers (convolutional layers), an RPN (Region Proposal Network, candidate region selection network), an ROI Pooling (region-of-interest pooling) layer, and a Classifier.
  • the Conv layers are used to extract feature maps: Faster R-CNN first uses a set of basic conv+relu+pooling (convolution + rectified linear unit + pooling) layers to extract the feature maps of the input image, which are then used by the subsequent RPN layer and fully connected layers;
  • the RPN network is mainly used to generate region proposals (candidate regions): it first generates a set of anchor boxes, filters them with non-maximum suppression, and then uses Softmax to judge whether each anchor belongs to the target (foreground) or the background, i.e., whether it is the target object or not, which is a binary classification; at the same time, another branch, bounding box regression, refines the anchor boxes to form more accurate proposals (candidate boxes) (note: "more accurate" here is relative to the subsequent box regression in the fully connected layers);
  • the ROI Pooling layer uses the proposals (candidate boxes) generated by the RPN together with the feature map of the last Conv layer to obtain fixed-size proposal feature maps, after which fully connected operations can be used for target recognition and localization; the Classifier performs fully connected operations on these fixed-size feature maps, uses Softmax to classify the specific lesion category, and uses an L1 loss to complete the bounding box regression and obtain the accurate lesion position.
  • after the preliminary positions and preliminary recognition results of the breast lesion area in the target breast image and the reference breast image are obtained through the two-branch Faster R-CNN model, a pre-trained SENet (Squeeze-and-Excitation Networks) model is used to process these preliminary position information and preliminary recognition results, fusing the positions and recognition results of the two images through the SE-Block in the SENet model to obtain the final position information and benign/malignant recognition result of the breast lesion area in the target breast image.
  • thereby, the accuracy of localization and recognition of the breast lesion area is effectively improved, and the false-positive rate is reduced.
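  • The SE-Block mentioned above is a standard squeeze-and-excitation gate; a minimal PyTorch version is sketched below, with the channel count chosen arbitrarily. How the two views' features are combined before this gate is not detailed here, so the sketch only shows the channel-reweighting step itself.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: a global-average 'squeeze' followed by a
    two-layer 'excitation' gate that reweights channels.  In a fusion step,
    such channel reweighting lets the network emphasise the view (target vs.
    reference) whose features carry the stronger lesion evidence."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (batch, channels, H, W)
        w = x.mean(dim=(2, 3))                   # squeeze: global average pooling
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                             # excitation: channel-wise reweighting

# Example: reweight a fused 512-channel feature map.
fused = SEBlock(512)(torch.randn(2, 512, 16, 16))
```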
  • this application combines the target breast image and the reference breast image to locate and identify the lesion, which more realistically simulates the clinician's actual reading process and thereby improves the accuracy of localization and recognition of breast lesions.
  • The fully convolutional network (FCN) includes multiple fully convolutional layers and is an extension of the convolutional neural network (CNN) to the segmentation field, a form of semantic image segmentation. Whereas a CNN classifies the whole image, a fully convolutional network classifies each pixel of an image, which enables classification of specific parts of the image and is better suited to segmentation.
  • unlike a CNN, which uses fully connected layers after the convolutional layers to obtain a fixed-length feature vector for classification, the FCN can accept input images of arbitrary size and then upsamples the feature map of the last convolutional layer with a deconvolution layer to restore it to the same size as the input image, so that a prediction is produced for every pixel while the spatial information of the original input image is preserved; finally, each pixel is classified on a feature map of the same size as the input image.
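  • A toy fully convolutional network illustrating the per-pixel prediction and deconvolution-based upsampling described above is sketched below; the layer sizes and the two-class (lesion/background) head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network: a small conv encoder followed by a
    transposed-convolution upsampling back to the input resolution, so every
    pixel receives a class prediction (here: lesion vs. background)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.classifier = nn.Conv2d(32, num_classes, 1)
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=4)

    def forward(self, x):
        return self.upsample(self.classifier(self.encoder(x)))   # (B, C, H, W)

# Example: a 1-channel 128x128 ultrasound image yields a 128x128 per-pixel map.
out = TinyFCN()(torch.randn(1, 1, 128, 128))
```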
  • RPN (Region Proposal Network, candidate region screening network)
  • the RPN first performs multi-layer convolution on the input pre-segmentation feature map to extract its feature maps, then applies a sliding-window convolution over the feature maps, and then uses two branches, a classification loss function and a bounding box regression loss function, to compute region classification and region regression, obtaining the position information of the breast lesion area in the target breast image.
  • region classification here judges the probability that a predicted region belongs to the lesion foreground or the background.
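  • The sliding-window classification and box-regression branches of the RPN can be sketched as follows; the channel counts and the anchor count per position are assumptions, not values given in the patent.

```python
import torch
import torch.nn as nn

class TinyRPN(nn.Module):
    """Minimal region-proposal head: a 3x3 sliding-window conv over the
    pre-segmentation feature map, with one branch scoring each anchor as
    lesion foreground/background and another regressing its box offsets."""
    def __init__(self, in_channels=256, num_anchors=9):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 256, 3, padding=1)
        self.cls = nn.Conv2d(256, num_anchors * 2, 1)   # foreground / background
        self.reg = nn.Conv2d(256, num_anchors * 4, 1)   # (dx, dy, dw, dh) per anchor

    def forward(self, feature_map):
        h = torch.relu(self.conv(feature_map))
        return self.cls(h), self.reg(h)

# Example: objectness scores and box deltas for every anchor position.
scores, deltas = TinyRPN()(torch.randn(1, 256, 32, 32))
```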
  • S43: Normalize the breast lesion area with a preset region-of-interest pooling layer (ROI Pooling) to obtain a fixed-size feature vector.
  • S44: Process the normalized feature vector with a preset classification network, such as the commonly used DenseNet (Dense Convolutional Network), to obtain the benign/malignant recognition result of the breast lesion area in the target breast image.
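  • Step S43's normalization to a fixed-size vector can be illustrated with torchvision's ROI pooling operator, as in the sketch below; the feature-map size, the proposal box, and the 7x7 output grid are illustrative assumptions.

```python
import torch
from torchvision.ops import roi_pool

# One proposal box (x1, y1, x2, y2) in feature-map coordinates for image 0.
feature_map = torch.randn(1, 256, 32, 32)
boxes = [torch.tensor([[4.0, 6.0, 20.0, 28.0]])]

# Normalise the lesion region to a fixed 7x7 grid, then flatten it into the
# fixed-size feature vector expected by the downstream classification network.
pooled = roi_pool(feature_map, boxes, output_size=(7, 7))   # (1, 256, 7, 7)
feature_vector = pooled.flatten(start_dim=1)                # (1, 256 * 7 * 7)
```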
  • when the imaging modality of the target breast image is an ultrasound imaging modality, the above steps can be used to accurately locate and identify the breast lesion area, thereby reducing missed diagnoses and misdiagnoses.
  • the preprocessing in this embodiment mainly includes thoracic cavity removal and breast effective-area extraction.
  • the thoracic cavity removal is mainly used to remove the thoracic cavity portion of the target breast image;
  • the breast effective-area extraction is mainly used to extract the effective breast area, which can be achieved by processing methods known in the art, for example the method disclosed in [Automatic 3D segmentation of the breast in MRI, Cristina Gallego Ortiz].
  • the U-Net in this embodiment is a segmentation network model; the whole network is "U"-shaped, which is also the origin of the name U-Net.
  • the descending arm of the "U" is the encoder, and the ascending arm is the decoder.
  • the U-Net network is a supervised deep learning network.
  • supervised learning here refers to the process of using a set of samples with known correct answers to adjust the parameters of a classifier until the required performance is reached.
  • a supervised deep learning network is a network that learns from labelled data.
  • the initialized network continuously adjusts its parameters according to the difference between the predicted values and the labels, so that the network's predictions get closer and closer to the labels, achieving the purpose of learning.
  • an accurate segmentation model can thus be trained with a small number of labelled samples, enabling precise segmentation of the lesion area.
  • each layer of the U-Net encoder convolves and pools the input breast effective area for feature extraction.
  • each layer of the decoder uses deconvolution to decode the extracted features into a mapping layer, which is output.
  • the mapping layer has the same size as the input image and indicates the meaning of each part of the breast effective area, i.e., the segmentation result, so that the U-Net segmentation model can identify which part of the breast effective area is the breast lesion area.
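  • A one-level U-Net sketch showing the encoder, the decoder with a skip connection, and the mapping layer of the same size as the input is given below; the channel counts and input size are illustrative assumptions, not the patent's configuration.

```python
import torch
import torch.nn as nn

def double_conv(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """One-level U-Net: an encoder step (conv + pool), a bottleneck, and a
    decoder step (transposed conv + skip connection), ending in a mapping
    layer the same size as the input that scores each pixel as lesion or
    background."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.enc = double_conv(1, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = double_conv(64, 32)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        e = self.enc(x)                                   # encoder features
        b = self.bottleneck(self.pool(e))                 # bottleneck
        d = self.dec(torch.cat([self.up(b), e], dim=1))   # decoder + skip connection
        return self.head(d)                               # per-pixel segmentation map

# Example: segment a 1-channel 96x96 crop of the breast effective area.
mask_logits = TinyUNet()(torch.randn(1, 1, 96, 96))   # (1, 2, 96, 96)
```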
  • a preset classification network, such as the commonly used ResNet (residual network) or DenseNet (dense convolutional network), is then used to process the breast lesion area to obtain its benign/malignant recognition result.
  • when the imaging modality of the target breast image is a magnetic resonance imaging modality, the above steps can be used to accurately identify the breast lesion area and reduce missed diagnoses and misdiagnoses.
  • this embodiment can locate and identify the lesion area in breast images of different modalities; compared with the prior art, which can only locate and identify lesions in single-modality breast images, it improves the accuracy of lesion localization and recognition.
  • when the recognition result of the breast lesion area in the target breast image obtained in step S3, S4 or S5 is not conclusive, that is, when the difference between the benign and malignant probabilities of the breast lesion area in the target breast image falls within a predetermined non-diagnostic range,
  • a prompt to switch imaging modality can also be output, suggesting a breast imaging examination of another imaging modality for the breast corresponding to the target breast image; for example, if the imaging modality of the target breast image is an X-ray imaging modality, the other imaging modality may be an ultrasound imaging modality and/or a magnetic resonance imaging modality.
  • the method of this embodiment may further include: measuring the size of the breast lesion area, and generating a structured report based on the size of the breast lesion area, the recognition result, and other information, so that doctors and patients can refer to it.
  • the method of this embodiment may further include: performing knowledge inference on the size and recognition result of the breast lesion area according to a preset breast cancer knowledge map, so as to obtain a recommended treatment plan for the doctor's reference.
  • the breast cancer knowledge graph includes multiple entities and the relationships between the entities; the entities include the size of the breast lesion area, the benign/malignant recognition result, and the treatment plan, and may also include the corresponding patient's age, marital and childbearing status, and/or family history of breast cancer.
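  • A toy illustration of knowledge inference over such an entity-relation graph is sketched below; the entities, relations, and plans are invented placeholders and carry no clinical meaning.

```python
# A toy breast-cancer knowledge graph: entities are nodes, relations are edges.
# All triples below are illustrative placeholders, not clinical guidance.
KNOWLEDGE_GRAPH = [
    ("lesion", "size_over_20mm", "plan_A"),
    ("lesion", "malignant", "plan_A"),
    ("lesion", "benign", "plan_B"),
]

def recommend_plans(size_mm: float, malignant: bool) -> set:
    """Match the measured lesion facts against the graph's relations and
    collect the treatment-plan entities those relations point to."""
    facts = {"size_over_20mm"} if size_mm > 20 else set()
    facts.add("malignant" if malignant else "benign")
    return {plan for (_, relation, plan) in KNOWLEDGE_GRAPH if relation in facts}

print(recommend_plans(size_mm=23.5, malignant=True))   # {'plan_A'}
```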
  • This embodiment provides an image processing device 10, as shown in FIG. 2, the device includes:
  • the image receiving module 11 is used to receive the target breast image
  • the modality detection module 12 is used to detect whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality or a magnetic resonance imaging modality;
  • the X-ray image processing module 13 is configured to preliminarily determine whether the target breast image contains a breast lesion area when the imaging modality of the target breast image is an X-ray imaging modality, and if so, to obtain a reference breast image corresponding to the target breast image, and then to obtain position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
  • the ultrasound image processing module 14 is used to process the target breast image with a preset fully convolutional network when the imaging modality of the target breast image is an ultrasound imaging modality, to obtain the pre-segmentation feature map corresponding to the target breast image, and then to process the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
  • the magnetic resonance image processing module 15 is used to preprocess the target breast image when the imaging modality of the target breast image is the magnetic resonance imaging modality, and then to segment the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
  • the X-ray image processing module is further configured to obtain a benign and malignant recognition result of a breast lesion area in the target breast image according to the target breast image and the reference breast image.
  • the ultrasound image processing module is further used for:
  • the feature vector is processed by using a preset classification network to obtain a benign and malignant recognition result of the breast lesion area in the target breast image.
  • the magnetic resonance image processing module is further used for:
  • a preset classification network is used to process the breast lesion area to obtain a benign and malignant recognition result of the breast lesion area.
  • the steps by which the X-ray image processing module preliminarily determines whether the target breast image contains a breast lesion area are: processing the target breast image with a preset breast gland classification model to obtain the gland type; determining the lesion determination threshold according to the gland type; and processing the target breast image with a preset breast abnormality recognition model to obtain the probability of a breast abnormality, the breast lesion area being preliminarily determined to be present when that probability exceeds the threshold.
  • when the target breast image is a cranio-caudal view mammogram, the X-ray image processing module obtains the mediolateral-oblique view mammogram corresponding to the target breast image as the reference breast image;
  • when the target breast image is a mediolateral-oblique view mammogram, the X-ray image processing module obtains the cranio-caudal view mammogram corresponding to the target breast image as the reference breast image.
  • the steps by which the X-ray image processing module obtains the position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image are as follows:
  • performing edge detection on the two images to obtain the first and second breast regions, processing them with a preset feature pyramid network model to obtain the first and second breast feature maps, and then processing the first breast feature map and the second breast feature map with a preset multi-instance learning network model to obtain the position information of the breast lesion area in the target breast image.
  • alternatively, the X-ray image processing module obtains the position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image as follows:
  • processing the two images with a preset two-branch Faster R-CNN model to obtain the preliminary positions of the breast lesion area, and then processing these preliminary positions with a preset SENet model to obtain the position information of the breast lesion area in the target breast image.
  • This embodiment provides a computer device that can execute programs, such as a smartphone, tablet computer, notebook computer, desktop computer, rack server, blade server, tower server, or cabinet server (including an independent server, or a server cluster composed of multiple servers), and so on.
  • the computer device 20 in this embodiment at least includes but is not limited to: a memory 21 and a processor 22 that can be communicatively connected to each other through a system bus, as shown in FIG. 3. It should be pointed out that FIG. 3 only shows the computer device 20 with components 21-22, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the memory 21 (ie, readable storage medium) includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), Read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, etc.
  • the memory 21 may be an internal storage unit of the computer device 20, such as a hard disk or a memory of the computer device 20.
  • the memory 21 may also be an external storage device of the computer device 20, such as a plug-in hard disk equipped on the computer device 20, or a smart memory card (Smart Media Card, SMC), Secure Digital (SD) card, Flash Card, etc.
  • the memory 21 may also include both an internal storage unit of the computer device 20 and an external storage device thereof.
  • the memory 21 is generally used to store an operating system and various application software installed in the computer device 20, such as the program code of the image processing apparatus 10 in the second embodiment, and so on.
  • the memory 21 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 22 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips in some embodiments.
  • the processor 22 is generally used to control the overall operation of the computer device 20.
  • the processor 22 is used to run the program code or process data stored in the memory 21, for example, to run the image processing device 10, so as to implement the image processing method of the first embodiment.
  • This embodiment provides a computer-readable storage medium, such as flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, server, or app store, on which a computer program is stored that realizes the corresponding function when executed by a processor.
  • the computer-readable storage medium of this embodiment is used to store the image processing device 10, and when executed by a processor, it implements the image processing method of the first embodiment.
  • the computer-readable storage medium may be non-volatile or volatile.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Ultrasonic Diagnosis Equipment (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

This application provides an image processing method, apparatus, computer device, and storage medium. The method includes: receiving a target breast image; detecting whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality, or a magnetic resonance imaging modality; and processing the target breast image according to its imaging modality to obtain position information of the breast lesion area in the target breast image. This application can locate lesion areas in breast images of different modalities; compared with the prior art, which can only locate lesions in single-modality breast images, the accuracy of lesion localization is improved.

Description

Image processing method, apparatus, computer device and storage medium
This application claims priority to the Chinese patent application No. CN 202010174819.5, filed on March 13, 2020 and entitled "Image processing method, apparatus, computer device and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence technology, and in particular to an image processing method, apparatus, computer device, and storage medium.
Background
The medical imaging modalities commonly used in clinical breast cancer diagnosis include X-ray, ultrasound, and magnetic resonance imaging. Images of the different modalities have their own advantages and disadvantages, and the signs of breast cancer lesions appear differently in each. For example, X-ray images are highly sensitive to calcification and microcalcification and are more suitable for early or very early prediction of breast cancer; bilateral images taken at different view positions can be used to observe breast asymmetry and architectural distortion, improving the accuracy of benign/malignant judgment. However, the specificity of X-ray mammography for breast masses is not high, especially for heterogeneously dense and extremely dense breasts, where the false-positive rate of mass diagnosis is high, and the examination involves radiation. Ultrasound is safe and radiation-free, fast, and inexpensive, and breast ultrasound has high sensitivity and specificity for detecting masses; however, because ultrasound produces 2D transverse, longitudinal, and oblique sections at relatively low resolution, calcifications are hard to find and microcalcifications are almost impossible to detect, and the information in ultrasound images is complex in detail, so diagnostic accuracy depends heavily on the clinician's experience and missed diagnoses or misdiagnoses may occur. Breast magnetic resonance imaging is a 3D modality that is unaffected by gland density, offers good visual quality, allows three-dimensional observation of lesions and distinction of normal breast glands from lesions, has high sensitivity, is suitable for staging breast cancer patients, and can confirm occult lesions in the contralateral breast and chest-wall invasion; however, compared with X-ray images its resolution makes small calcifications difficult to find, breast MR images involve multiple sequences and large data volumes, fatigue-related misdiagnosis or missed diagnosis may occur during manual reading, and MRI scanning is inefficient and costly.
With the rapid development of medical imaging big data and high-performance computing technology, medical image analysis and automatic lesion identification and judgment are current focuses of research at the intersection of medicine and engineering. Automatic breast cancer identification using deep learning is also a hot topic in research and clinical application.
Technical Problem
The inventors found that existing breast image processing methods can only process single-modality breast images of one of X-ray, ultrasound, and magnetic resonance; because of the limitations of single-modality images themselves, the localization accuracy of the breast lesion area is not high.
Technical Solution
In view of the above shortcomings of the prior art, the purpose of this application is to provide an image processing method, apparatus, computer device, and storage medium to improve the localization accuracy of the breast lesion area.
To achieve the above purpose, this application provides an image processing method, including:
receiving a target breast image;
detecting whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality, or a magnetic resonance imaging modality;
when the imaging modality of the target breast image is an X-ray imaging modality, first preliminarily determining whether the target breast image contains a breast lesion area, and if so, obtaining a reference breast image corresponding to the target breast image, and then obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
when the imaging modality of the target breast image is an ultrasound imaging modality, first processing the target breast image with a preset fully convolutional network to obtain a pre-segmentation feature map corresponding to the target breast image, and then processing the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
when the imaging modality of the target breast image is a magnetic resonance imaging modality, first preprocessing the target breast image, and then performing segmentation on the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
To achieve the above purpose, this application also provides an image processing apparatus, including:
an image receiving module, configured to receive a target breast image;
a modality detection module, configured to detect whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality, or a magnetic resonance imaging modality;
an X-ray image processing module, configured to, when the imaging modality of the target breast image is an X-ray imaging modality, preliminarily determine whether the target breast image contains a breast lesion area, and if so, obtain a reference breast image corresponding to the target breast image, and then obtain position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
an ultrasound image processing module, configured to, when the imaging modality of the target breast image is an ultrasound imaging modality, process the target breast image with a preset fully convolutional network to obtain a pre-segmentation feature map corresponding to the target breast image, and then process the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
a magnetic resonance image processing module, configured to, when the imaging modality of the target breast image is a magnetic resonance imaging modality, preprocess the target breast image, and then perform segmentation on the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
To achieve the above purpose, this application also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the following steps of the image processing method are implemented:
receiving a target breast image;
detecting whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality, or a magnetic resonance imaging modality;
when the imaging modality of the target breast image is an X-ray imaging modality, first preliminarily determining whether the target breast image contains a breast lesion area, and if so, obtaining a reference breast image corresponding to the target breast image, and then obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
when the imaging modality of the target breast image is an ultrasound imaging modality, first processing the target breast image with a preset fully convolutional network to obtain a pre-segmentation feature map corresponding to the target breast image, and then processing the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
when the imaging modality of the target breast image is a magnetic resonance imaging modality, first preprocessing the target breast image, and then performing segmentation on the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
To achieve the above purpose, this application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the following steps of the image processing method are implemented:
receiving a target breast image;
detecting whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality, or a magnetic resonance imaging modality;
when the imaging modality of the target breast image is an X-ray imaging modality, first preliminarily determining whether the target breast image contains a breast lesion area, and if so, obtaining a reference breast image corresponding to the target breast image, and then obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
when the imaging modality of the target breast image is an ultrasound imaging modality, first processing the target breast image with a preset fully convolutional network to obtain a pre-segmentation feature map corresponding to the target breast image, and then processing the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
when the imaging modality of the target breast image is a magnetic resonance imaging modality, first preprocessing the target breast image, and then performing segmentation on the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
Beneficial Effects
This application can locate the breast lesion area in breast images of different modalities. Compared with the prior art, which can only process single-modality breast images, the accuracy of lesion localization is improved. Moreover, this application designs different lesion localization procedures for the characteristics of breast images of different imaging modalities, ensuring that the breast lesion area can be accurately located.
Brief Description of the Drawings
FIG. 1 is a flowchart of an embodiment of the image processing method of this application;
FIG. 2 is a structural block diagram of an embodiment of the image processing apparatus of this application;
FIG. 3 is a hardware architecture diagram of an embodiment of the computer device of this application.
Embodiments of the Invention
In order to make the purpose, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application and are not intended to limit it. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of this application.
The terms used in this application are for the purpose of describing particular embodiments only and are not intended to limit the disclosure. The singular forms "a", "said", and "the" used in this disclosure and the appended claims are also intended to include the plural forms, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and includes any or all possible combinations of one or more of the associated listed items.
Embodiment 1
This embodiment provides an image processing method suitable for smart healthcare, disease risk assessment, and other fields. As shown in FIG. 1, the method specifically includes the following steps:
S1: Receive a target breast image, the target breast image being captured with one of a plurality of preset imaging modalities. The preset imaging modalities may include an X-ray imaging modality, an ultrasound imaging modality, a magnetic resonance imaging modality, and so on. In this embodiment, the source of the target breast image may be the hospital's Picture Archiving and Communication System (PACS), Radiology Information System (RIS), or Hospital Information System (HIS); this embodiment can receive breast images from PACS, RIS, and HIS in real time.
S2: Detect whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality, or a magnetic resonance imaging modality; execute step S3 when the imaging modality is X-ray imaging, step S4 when it is ultrasound imaging, and step S5 when it is magnetic resonance imaging. In this embodiment, the image names of breast images of different imaging modalities may carry different markers, so the imaging modality of the target breast image can be determined from the marker. For example, the image name of an X-ray modality image is marked with "X-ray", the image name of an ultrasound modality image is marked with "US", and the image name of a magnetic resonance modality image is marked with "NMR". Therefore, when the marker "X-ray" is detected in the image name of the target breast image, its imaging modality is determined to be X-ray imaging; when "US" is detected, ultrasound imaging; and when "NMR" is detected, magnetic resonance imaging.
S3: When the imaging modality of the target breast image is an X-ray imaging modality, obtain the position information of the breast lesion area in the target breast image through the following steps:
S31: Preliminarily determine whether the target breast image contains a breast lesion area; if so, obtain a reference breast image corresponding to the target breast image and execute step S32; otherwise, end the process. Specifically, the preliminary determination is made as follows. First, the target breast image is processed with a preset breast gland classification model to obtain the gland type of the breast in the target breast image, for example any one of fatty type, few-gland type, many-gland type, and dense type, where the gland density of the four gland types in descending order is: dense type > many-gland type > few-gland type > fatty type. Then, the corresponding lesion determination threshold is determined according to the obtained gland type; in this embodiment, the lesion determination thresholds corresponding to the fatty, few-gland, many-gland, and dense gland types are set in advance. Finally, the target breast image is processed with a preset breast abnormality recognition model to obtain the probability of a breast abnormality in the target breast image; when this probability is greater than the aforementioned lesion determination threshold, it is preliminarily determined that the target breast image contains a breast lesion area, otherwise that it does not. Taking a lesion determination threshold of 40% as an example, when the breast abnormality probability output by the breast abnormality recognition model is 45%, since 45% is greater than 40%, it is preliminarily determined that the target breast image contains a breast lesion area; when the output probability is 35%, since 35% is less than 40%, it is preliminarily determined that it does not. The breast gland classification model used in this embodiment is preferably a Pyramidal Residual Network (PyramidNet) model, and the breast abnormality recognition model used is preferably a DenseNet (Dense Convolutional Network) model.
In this embodiment, when the target breast image is a cranio-caudal (CC) view mammogram, the mediolateral-oblique (MLO) view mammogram corresponding to the target breast image can be obtained as the reference breast image; when the target breast image is an MLO view mammogram, the CC view mammogram corresponding to the target breast image can be obtained as the reference breast image. In addition, the target breast image and the reference breast image can also be mammograms of the contralateral breast at the same view position. A mammogram (molybdenum-target image) here refers to an image obtained by projecting a two-dimensional image of the breast onto X-ray film or a digital detector, exploiting the physical properties of X-rays and the different density values of human breast tissue.
S32: Obtain the position information and the benign/malignant recognition result of the breast lesion area in the target breast image according to the target breast image and the reference breast image. Specifically, this can be achieved through the following steps:
First, edge detection is performed on the target breast image to obtain the breast region in the target breast image, denoted the first breast region; at the same time, edge detection is performed on the reference breast image to obtain the breast region in the reference breast image, denoted the second breast region. X-ray images usually contain a large black background area, which favors using edge detection to extract the breast region from the target breast image. This step can be implemented with any existing edge detection method, for example the active-contour breast boundary detection method disclosed in [Breast Boundary Detection with Active Contours, I. Balic, P. Goyal, O. Roy, N. Duric.].
Then, a preset FPN (Feature Pyramid Network) model is used to process the first breast region to obtain the breast feature map of the first breast region, denoted the first breast feature map; at the same time, the feature pyramid network is used to process the second breast region to obtain the breast feature map of the second breast region, denoted the second breast feature map. The FPN consists of a bottom-up path and a top-down path. The bottom-up path is the usual feature-extracting convolutional network, here a ResNet, which consists of many convolutional layers; layers producing feature maps of the same size are grouped into one stage, and the spatial size is halved between adjacent stages. Going bottom-up, the spatial resolution decreases, more high-level structures are detected, and the semantic value of the layers increases accordingly. The top-down path reconstructs higher-resolution layers from the semantically richer layers. Although the reconstructed layers are semantically rich enough, the target positions are no longer accurate after the down-sampling and up-sampling process; FPN therefore adds lateral connections between the reconstructed layers and the corresponding feature maps to help the detector predict positions better. These lateral connections also act as skip connections (similar to the practice in residual networks).
Finally, a preset multi-instance learning (MIL) network model is used to process the first breast feature map and the second breast feature map to obtain the position information and the benign/malignant probabilities of the breast lesion area in the target breast image. The multi-instance learning network is an existing weakly supervised learning network. In multi-instance learning, a training sample is a bag composed of multiple instances; the bag carries a concept label, but the instances themselves do not. If a bag contains at least one positive instance, it is a positive bag; otherwise it is a negative bag. Compared with supervised learning, the training instances in MIL have no concept labels, unlike supervised learning where all training instances are labelled; compared with unsupervised learning, the training bags in MIL do carry concept labels, unlike unsupervised training samples which carry none. In MIL, one sample (i.e., one bag) contains multiple instances, so samples and instances are in a one-to-many correspondence. Each patch (image block) of the first feature map and the second feature map is taken as an instance, and the first and second feature maps are fed into the MIL network as a bag containing multiple instances, yielding the position information and benign/malignant probabilities of the breast lesion area in the target breast image.
In addition to the above method, step S32 can also be implemented through the following steps:
A preset two-branch Faster R-CNN model is used to process the target breast image and the reference breast image respectively to obtain preliminary position information and preliminary recognition results of the breast lesion area in both images, the preliminary recognition result being expressed as the benign/malignant probabilities of the breast lesion area. In this embodiment, the Faster R-CNN model mainly includes four parts: Conv layers (convolutional layers), an RPN (Region Proposal Network, candidate region selection network), an ROI Pooling (region-of-interest pooling) layer, and a Classifier. The Conv layers are used to extract feature maps: as a CNN-based object detection method, Faster R-CNN first uses a set of basic conv+relu+pooling (convolution + rectified linear unit + pooling) layers to extract the feature maps of the input image, which are then used by the subsequent RPN layer and fully connected layers. The RPN network is mainly used to generate region proposals (candidate regions): it first generates a set of anchor boxes, filters them with non-maximum suppression, and then uses Softmax (the normalized exponential function) to judge whether each anchor belongs to the target (foreground) or the background, i.e., whether it is the target object or not, which is a binary classification; at the same time, another branch, bounding box regression, refines the anchor boxes to form more accurate proposals (candidate boxes) (note: "more accurate" here is relative to the subsequent box regression in the fully connected layers). The ROI Pooling layer uses the proposals generated by the RPN and the feature map from the last Conv layer to obtain fixed-size proposal feature maps, after which fully connected operations can be used for target recognition and localization. The Classifier performs fully connected operations on the fixed-size feature maps from the ROI Pooling layer, uses Softmax to classify the specific lesion category, and uses an L1 loss to complete the bounding box regression and obtain the accurate lesion position.
After the preliminary positions and preliminary recognition results of the breast lesion area in the target breast image and the reference breast image are obtained through the two-branch Faster R-CNN model, a pre-trained SENet (Squeeze-and-Excitation Networks) model is used to process these preliminary position information and preliminary recognition results, fusing the positions and recognition results of the two images through the SE-Block in the SENet model to obtain the final position information and benign/malignant recognition result of the breast lesion area in the target breast image. This effectively improves the accuracy of localization and recognition of the breast lesion area and reduces the false-positive rate.
It can be seen that when the imaging modality of the target breast image is an X-ray imaging modality, this application combines the target breast image and the reference breast image for lesion localization and recognition, which more realistically simulates the clinician's actual reading process, thereby improving the accuracy of localization and recognition of the breast lesion area.
S4: When the imaging modality of the target breast image is an ultrasound imaging modality, obtain the position information and benign/malignant recognition result of the breast lesion area in the target breast image through the following steps:
S41: Process the target breast image with a preset fully convolutional network (FCN) to obtain the pre-segmentation feature map corresponding to the target breast image. The fully convolutional network FCN includes multiple fully convolutional layers and is an extension of convolutional neural networks (CNN) to the segmentation field, a form of semantic image segmentation. Whereas a CNN classifies the whole image, a fully convolutional network classifies each pixel of an image, which enables classification of specific parts of the image and is better suited to segmentation. Unlike a CNN, which uses fully connected layers after the convolutional layers to obtain a fixed-length feature vector for classification, an FCN can accept input images of arbitrary size and then upsamples the feature map of the last convolutional layer with a deconvolution layer to restore it to the same size as the input image, so that a prediction is produced for every pixel while the spatial information of the original input image is preserved; finally, each pixel is classified on a feature map of the same size as the input image.
S42: Process the pre-segmentation feature map with a preset RPN (Region Proposal Network, candidate region screening network) to obtain the position information of the breast lesion area in the target breast image. Specifically, the RPN first performs multi-layer convolution on the input pre-segmentation feature map to extract its feature maps, then applies a sliding-window convolution over the feature maps, and then uses two branches, a classification loss function and a bounding box regression loss function, to compute region classification and region regression, obtaining the position information of the breast lesion area in the target breast image. Region classification here judges the probability that a predicted region belongs to the lesion foreground or the background.
S43: Normalize the breast lesion area with a preset region-of-interest pooling layer (ROI Pooling) to obtain a fixed-size feature vector.
S44: Process the normalized feature vector with a preset classification network, such as the commonly used DenseNet (dense convolutional network), to accurately obtain the benign/malignant recognition result of the breast lesion area in the target breast image. For the structure and principle of DenseNet, see [Densely Connected Convolutional Networks, Gao Huang, Zhuang Liu, Laurens van der Maaten.].
When the imaging modality of the target breast image is an ultrasound imaging modality, the above steps enable accurate localization and recognition of the breast lesion area, reducing missed diagnoses and misdiagnoses.
S5: When the imaging modality of the target breast image is a magnetic resonance imaging modality, obtain the position information and benign/malignant recognition result of the breast lesion area in the target breast image through the following steps:
S51: Preprocess the target breast image. Since a breast MR image usually contains the thoracic cavity, the breast, and other parts, the preprocessing in this embodiment mainly includes thoracic cavity removal and breast effective-area extraction. Thoracic cavity removal is mainly used to remove the thoracic cavity portion of the target breast image, and breast effective-area extraction is mainly used to extract the effective breast area; these can be implemented with processing methods known in the art, for example the method disclosed in [Automatic 3D segmentation of the breast in MRI, Cristina Gallego Ortiz].
S52: Perform segmentation on the breast effective area obtained by preprocessing with a preset U-Net segmentation model to obtain the position information of the breast lesion area in the target breast image. The U-Net of this embodiment is a segmentation network model; the whole network is "U"-shaped, which is also the origin of the name U-Net. The descending arm of the "U" is the encoder and the ascending arm is the decoder. The U-Net network is a supervised deep learning network; supervised learning refers to the process of using a set of samples with known correct answers to adjust the parameters of a classifier until the required performance is reached. A supervised deep learning network learns from labelled data: the initialized network continuously adjusts its parameters according to the difference between the predicted values and the labels, so that the predictions get closer and closer to the labels, achieving the purpose of learning. An accurate segmentation model can thus be trained with a small number of labelled samples, enabling precise segmentation of the lesion area. Each layer of the U-Net encoder convolves and pools the input breast effective area to extract features; each layer of the decoder uses deconvolution to decode the extracted features into a mapping layer, which is output. The mapping layer has the same size as the input image and indicates the meaning of each part of the breast effective area, i.e., the segmentation result, so that the U-Net segmentation model can identify which part of the breast effective area is the breast lesion area.
S53: Process the breast lesion area with a preset classification network, such as the commonly used ResNet (residual network) or DenseNet (dense convolutional network), to accurately obtain the benign/malignant recognition result of the breast lesion area in the target breast image.
When the imaging modality of the target breast image is a magnetic resonance imaging modality, the above steps enable accurate recognition of the breast lesion area, reducing missed diagnoses and misdiagnoses.
It can be seen that, through the above steps, this embodiment can locate and identify the lesion area in breast images of different modalities; compared with the prior art, which can only locate and identify lesions in single-modality breast images, the accuracy of lesion localization and recognition is improved. Meanwhile, when the recognition result of the breast lesion area obtained in step S3, S4, or S5 is not conclusive, that is, when the difference between the benign and malignant probabilities of the breast lesion area in the target breast image falls within a predetermined non-diagnostic range (for example 15%), a prompt to switch imaging modality can also be output, suggesting that the breast corresponding to the target breast image be examined with breast imaging of another modality; for example, if the imaging modality of the target breast image is X-ray, the other modality may be ultrasound and/or magnetic resonance. After the breast imaging examination of the other modality is completed, the breast image captured with that modality is received and taken as a new target breast image, and steps S1-S5 are repeated to obtain the corresponding recognition result of the breast lesion area for the clinician to compare and reference, improving diagnostic efficiency and accuracy.
Further, the method of this embodiment may also include: measuring the size of the breast lesion area and generating a structured report based on the size of the breast lesion area, the recognition result, and other information, for doctors and patients to consult. In addition, the method of this embodiment may also include: performing knowledge inference on the size and recognition result of the breast lesion area according to a preset breast cancer knowledge graph to obtain a recommended treatment plan for the doctor's reference, where the breast cancer knowledge graph includes multiple entities and the relationships between the entities, the entities including the size of the breast lesion area, the benign/malignant recognition result, and the treatment plan, and possibly also the corresponding patient's age, marital and childbearing status, and/or family history of breast cancer.
It should be noted that, for simplicity of description, this embodiment is expressed as a series of action combinations, but those skilled in the art should know that this application is not limited by the described order of actions, because according to this application some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions involved are not necessarily required by this application.
Embodiment 2
This embodiment provides an image processing apparatus 10. As shown in FIG. 2, the apparatus includes:
an image receiving module 11, configured to receive a target breast image;
a modality detection module 12, configured to detect whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality, or a magnetic resonance imaging modality;
an X-ray image processing module 13, configured to, when the imaging modality of the target breast image is an X-ray imaging modality, preliminarily determine whether the target breast image contains a breast lesion area, and if so, obtain a reference breast image corresponding to the target breast image, and then obtain position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
an ultrasound image processing module 14, configured to, when the imaging modality of the target breast image is an ultrasound imaging modality, process the target breast image with a preset fully convolutional network to obtain a pre-segmentation feature map corresponding to the target breast image, and then process the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
a magnetic resonance image processing module 15, configured to, when the imaging modality of the target breast image is a magnetic resonance imaging modality, preprocess the target breast image, and then perform segmentation on the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
In an embodiment of this application, the X-ray image processing module is further configured to obtain the benign/malignant recognition result of the breast lesion area in the target breast image according to the target breast image and the reference breast image.
In an embodiment of this application, the ultrasound image processing module is further configured to:
normalize the breast lesion area with a preset region-of-interest pooling layer to obtain a fixed-size feature vector;
process the feature vector with a preset classification network to obtain the benign/malignant recognition result of the breast lesion area in the target breast image.
In an embodiment of this application, the magnetic resonance image processing module is further configured to:
process the breast lesion area with a preset classification network to obtain the benign/malignant recognition result of the breast lesion area.
In an embodiment of this application, the steps by which the X-ray image processing module preliminarily determines whether the target breast image contains a breast lesion area are as follows:
processing the target breast image with a preset breast gland classification model to obtain the gland type of the breast in the target breast image;
determining the lesion determination threshold of the breast lesion area according to the gland type;
processing the target breast image with a preset breast abnormality recognition model to obtain the probability of a breast abnormality in the target breast image, and when the probability is greater than the lesion determination threshold, preliminarily determining that the target breast image contains the breast lesion area.
In an embodiment of this application, when the target breast image is a cranio-caudal view mammogram, the X-ray image processing module obtains the mediolateral-oblique view mammogram corresponding to the target breast image as the reference breast image; when the target breast image is a mediolateral-oblique view mammogram, the X-ray image processing module obtains the cranio-caudal view mammogram corresponding to the target breast image as the reference breast image.
In an embodiment of this application, the steps by which the X-ray image processing module obtains the position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image are as follows:
performing edge detection on the target breast image to obtain the breast region in the target breast image, denoted the first breast region;
performing edge detection on the reference breast image to obtain the breast region in the reference breast image, denoted the second breast region;
processing the first breast region with a preset feature pyramid network model to obtain the breast feature map of the first breast region;
processing the second breast region with the feature pyramid network model to obtain the breast feature map of the second breast region;
processing the first breast feature map and the second breast feature map with a preset multi-instance learning network model to obtain the position information of the breast lesion area in the target breast image.
In an embodiment of this application, the steps by which the X-ray image processing module obtains the position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image are as follows:
processing the target breast image and the reference breast image respectively with a preset two-branch Faster R-CNN model to obtain the preliminary positions of the breast lesion area in the target breast image and the reference breast image;
processing the preliminary positions of the breast lesion area in the target breast image and the reference breast image with a preset SENet model to obtain the position information of the breast lesion area in the target breast image.
As this apparatus embodiment is substantially similar to the method embodiment of Embodiment 1, it is described relatively simply; for related details, refer to the description of the method embodiment. Likewise, those skilled in the art should know that the embodiments described in the specification are preferred embodiments, and the modules involved are not necessarily required by this application.
Embodiment 3
This embodiment provides a computer device that can execute programs, such as a smartphone, tablet computer, notebook computer, desktop computer, rack server, blade server, tower server, or cabinet server (including an independent server, or a server cluster composed of multiple servers), and so on. The computer device 20 of this embodiment at least includes, but is not limited to, a memory 21 and a processor 22 that can be communicatively connected to each other through a system bus, as shown in FIG. 3. It should be pointed out that FIG. 3 only shows the computer device 20 with components 21-22, but it should be understood that not all of the illustrated components are required to be implemented, and more or fewer components may be implemented instead.
In this embodiment, the memory 21 (i.e., a readable storage medium) includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like. In some embodiments, the memory 21 may be an internal storage unit of the computer device 20, such as the hard disk or memory of the computer device 20. In other embodiments, the memory 21 may also be an external storage device of the computer device 20, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, or Flash Card equipped on the computer device 20. Of course, the memory 21 may also include both the internal storage unit of the computer device 20 and its external storage device. In this embodiment, the memory 21 is generally used to store the operating system and various application software installed on the computer device 20, such as the program code of the image processing apparatus 10 of Embodiment 2. In addition, the memory 21 can also be used to temporarily store various types of data that have been output or are to be output.
The processor 22 may, in some embodiments, be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip. The processor 22 is generally used to control the overall operation of the computer device 20. In this embodiment, the processor 22 is used to run the program code stored in the memory 21 or to process data, for example to run the image processing apparatus 10, so as to implement the image processing method of Embodiment 1.
Embodiment 4
This embodiment provides a computer-readable storage medium, such as flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, server, or app store, on which a computer program is stored that realizes the corresponding function when executed by a processor. The computer-readable storage medium of this embodiment is used to store the image processing apparatus 10, and when executed by a processor it implements the image processing method of Embodiment 1. The computer-readable storage medium may be non-volatile or volatile.
From the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
The above are only preferred embodiments of this application and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of this application, or any direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of this application.

Claims (20)

  1. An image processing method, comprising:
    receiving a target breast image;
    detecting whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality or a magnetic resonance imaging modality;
    when the imaging modality of the target breast image is an X-ray imaging modality, first preliminarily determining whether the target breast image contains a breast lesion area, and if so, obtaining a reference breast image corresponding to the target breast image, and then obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
    when the imaging modality of the target breast image is an ultrasound imaging modality, first processing the target breast image with a preset fully convolutional network to obtain a pre-segmentation feature map corresponding to the target breast image, and then processing the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
    when the imaging modality of the target breast image is a magnetic resonance imaging modality, first preprocessing the target breast image, and then performing segmentation on the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
  2. The image processing method according to claim 1, wherein the preliminarily determining whether the target breast image contains a breast lesion area comprises:
    processing the target breast image with a preset breast gland classification model to obtain the gland type of the breast in the target breast image;
    determining the lesion determination threshold of the breast lesion area according to the gland type;
    processing the target breast image with a preset breast abnormality recognition model to obtain the probability of a breast abnormality in the target breast image, and when the probability is greater than the lesion determination threshold, preliminarily determining that the target breast image contains the breast lesion area.
  3. The image processing method according to claim 1, wherein the obtaining a reference breast image corresponding to the target breast image comprises:
    when the target breast image is a cranio-caudal view mammogram, obtaining the mediolateral-oblique view mammogram corresponding to the target breast image as the reference breast image;
    when the target breast image is a mediolateral-oblique view mammogram, obtaining the cranio-caudal view mammogram corresponding to the target breast image as the reference breast image.
  4. The image processing method according to claim 1, wherein the obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image comprises:
    performing edge detection on the target breast image to obtain the breast region in the target breast image, denoted the first breast region;
    performing edge detection on the reference breast image to obtain the breast region in the reference breast image, denoted the second breast region;
    processing the first breast region with a preset feature pyramid network model to obtain the breast feature map of the first breast region;
    processing the second breast region with the feature pyramid network model to obtain the breast feature map of the second breast region;
    processing the first breast feature map and the second breast feature map with a preset multi-instance learning network model to obtain the position information of the breast lesion area in the target breast image.
  5. The image processing method according to claim 1, wherein the obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image comprises:
    processing the target breast image and the reference breast image respectively with a preset two-branch Faster R-CNN model to obtain the preliminary positions of the breast lesion area in the target breast image and the reference breast image;
    processing the preliminary positions of the breast lesion area in the target breast image and the reference breast image with a preset SENet model to obtain the position information of the breast lesion area in the target breast image.
  6. An image processing apparatus, comprising:
    an image receiving module, configured to receive a target breast image;
    a modality detection module, configured to detect whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality or a magnetic resonance imaging modality;
    an X-ray image processing module, configured to, when the imaging modality of the target breast image is an X-ray imaging modality, preliminarily determine whether the target breast image contains a breast lesion area, and if so, obtain a reference breast image corresponding to the target breast image, and then obtain position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
    an ultrasound image processing module, configured to, when the imaging modality of the target breast image is an ultrasound imaging modality, process the target breast image with a preset fully convolutional network to obtain a pre-segmentation feature map corresponding to the target breast image, and then process the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
    a magnetic resonance image processing module, configured to, when the imaging modality of the target breast image is a magnetic resonance imaging modality, preprocess the target breast image, and then perform segmentation on the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
  7. The image processing apparatus according to claim 6, wherein the X-ray image processing module is further configured to:
    obtain the benign/malignant recognition result of the breast lesion area according to the target breast image and the reference breast image.
  8. The image processing apparatus according to claim 6, wherein the ultrasound image processing module is further configured to:
    normalize the breast lesion area with a preset region-of-interest pooling layer to obtain a fixed-size feature vector;
    process the feature vector with a preset classification network to obtain the benign/malignant recognition result of the breast lesion area.
  9. The image processing apparatus according to claim 6, wherein the magnetic resonance image processing module is further configured to:
    process the breast lesion area with a preset classification network to obtain the benign/malignant recognition result of the breast lesion area.
  10. The image processing apparatus according to claim 6, wherein the steps by which the X-ray image processing module preliminarily determines whether the target breast image contains a breast lesion area are as follows:
    processing the target breast image with a preset breast gland classification model to obtain the gland type of the breast in the target breast image;
    determining the lesion determination threshold of the breast lesion area according to the gland type;
    processing the target breast image with a preset breast abnormality recognition model to obtain the probability of a breast abnormality in the target breast image, and when the probability is greater than the lesion determination threshold, preliminarily determining that the target breast image contains the breast lesion area.
  11. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein, when the processor executes the computer program, the following steps of an image processing method are implemented:
    receiving a target breast image;
    detecting whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality or a magnetic resonance imaging modality;
    when the imaging modality of the target breast image is an X-ray imaging modality, first preliminarily determining whether the target breast image contains a breast lesion area, and if so, obtaining a reference breast image corresponding to the target breast image, and then obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
    when the imaging modality of the target breast image is an ultrasound imaging modality, first processing the target breast image with a preset fully convolutional network to obtain a pre-segmentation feature map corresponding to the target breast image, and then processing the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
    when the imaging modality of the target breast image is a magnetic resonance imaging modality, first preprocessing the target breast image, and then performing segmentation on the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
  12. The computer device according to claim 11, wherein the preliminarily determining whether the target breast image contains a breast lesion area comprises:
    processing the target breast image with a preset breast gland classification model to obtain the gland type of the breast in the target breast image;
    determining the lesion determination threshold of the breast lesion area according to the gland type;
    processing the target breast image with a preset breast abnormality recognition model to obtain the probability of a breast abnormality in the target breast image, and when the probability is greater than the lesion determination threshold, preliminarily determining that the target breast image contains the breast lesion area.
  13. The computer device according to claim 11, wherein the obtaining a reference breast image corresponding to the target breast image comprises:
    when the target breast image is a cranio-caudal view mammogram, obtaining the mediolateral-oblique view mammogram corresponding to the target breast image as the reference breast image;
    when the target breast image is a mediolateral-oblique view mammogram, obtaining the cranio-caudal view mammogram corresponding to the target breast image as the reference breast image.
  14. The computer device according to claim 11, wherein the obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image comprises:
    performing edge detection on the target breast image to obtain the breast region in the target breast image, denoted the first breast region;
    performing edge detection on the reference breast image to obtain the breast region in the reference breast image, denoted the second breast region;
    processing the first breast region with a preset feature pyramid network model to obtain the breast feature map of the first breast region;
    processing the second breast region with the feature pyramid network model to obtain the breast feature map of the second breast region;
    processing the first breast feature map and the second breast feature map with a preset multi-instance learning network model to obtain the position information of the breast lesion area in the target breast image.
  15. The computer device according to claim 11, wherein the obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image comprises:
    processing the target breast image and the reference breast image respectively with a preset two-branch Faster R-CNN model to obtain the preliminary positions of the breast lesion area in the target breast image and the reference breast image;
    processing the preliminary positions of the breast lesion area in the target breast image and the reference breast image with a preset SENet model to obtain the position information of the breast lesion area in the target breast image.
  16. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the following steps of an image processing method are implemented:
    receiving a target breast image;
    detecting whether the imaging modality of the target breast image is an X-ray imaging modality, an ultrasound imaging modality or a magnetic resonance imaging modality;
    when the imaging modality of the target breast image is an X-ray imaging modality, first preliminarily determining whether the target breast image contains a breast lesion area, and if so, obtaining a reference breast image corresponding to the target breast image, and then obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image;
    when the imaging modality of the target breast image is an ultrasound imaging modality, first processing the target breast image with a preset fully convolutional network to obtain a pre-segmentation feature map corresponding to the target breast image, and then processing the pre-segmentation feature map with a preset RPN model to obtain position information of the breast lesion area in the target breast image;
    when the imaging modality of the target breast image is a magnetic resonance imaging modality, first preprocessing the target breast image, and then performing segmentation on the preprocessed target breast image with a preset U-Net segmentation model to obtain position information of the breast lesion area in the target breast image.
  17. The computer-readable storage medium according to claim 16, wherein the preliminarily determining whether the target breast image contains a breast lesion area comprises:
    processing the target breast image with a preset breast gland classification model to obtain the gland type of the breast in the target breast image;
    determining the lesion determination threshold of the breast lesion area according to the gland type;
    processing the target breast image with a preset breast abnormality recognition model to obtain the probability of a breast abnormality in the target breast image, and when the probability is greater than the lesion determination threshold, preliminarily determining that the target breast image contains the breast lesion area.
  18. The computer-readable storage medium according to claim 16, wherein the obtaining a reference breast image corresponding to the target breast image comprises:
    when the target breast image is a cranio-caudal view mammogram, obtaining the mediolateral-oblique view mammogram corresponding to the target breast image as the reference breast image;
    when the target breast image is a mediolateral-oblique view mammogram, obtaining the cranio-caudal view mammogram corresponding to the target breast image as the reference breast image.
  19. The computer-readable storage medium according to claim 16, wherein the obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image comprises:
    performing edge detection on the target breast image to obtain the breast region in the target breast image, denoted the first breast region;
    performing edge detection on the reference breast image to obtain the breast region in the reference breast image, denoted the second breast region;
    processing the first breast region with a preset feature pyramid network model to obtain the breast feature map of the first breast region;
    processing the second breast region with the feature pyramid network model to obtain the breast feature map of the second breast region;
    processing the first breast feature map and the second breast feature map with a preset multi-instance learning network model to obtain the position information of the breast lesion area in the target breast image.
  20. The computer-readable storage medium according to claim 16, wherein the obtaining position information of the breast lesion area in the target breast image according to the target breast image and the reference breast image comprises:
    processing the target breast image and the reference breast image respectively with a preset two-branch Faster R-CNN model to obtain the preliminary positions of the breast lesion area in the target breast image and the reference breast image;
    processing the preliminary positions of the breast lesion area in the target breast image and the reference breast image with a preset SENet model to obtain the position information of the breast lesion area in the target breast image.
PCT/CN2020/099474 2020-03-13 2020-06-30 Image processing method, apparatus, computer device and storage medium WO2021179491A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010174819.5A CN111428709B (zh) 2020-03-13 2020-03-13 Image processing method, apparatus, computer device and storage medium
CN202010174819.5 2020-03-13

Publications (1)

Publication Number Publication Date
WO2021179491A1 true WO2021179491A1 (zh) 2021-09-16

Family

ID=71553673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/099474 WO2021179491A1 (zh) 2020-03-13 2020-06-30 Image processing method, apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN111428709B (zh)
WO (1) WO2021179491A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937169A (zh) * 2022-12-23 2023-04-07 广东创新科技职业学院 Shrimp fry counting method and system based on high resolution and object detection
CN116416235A (zh) * 2023-04-12 2023-07-11 北京建筑大学 Feature region prediction method and apparatus based on multi-modal ultrasound data
CN118039087B (zh) * 2024-04-15 2024-06-07 青岛山大齐鲁医院(山东大学齐鲁医院(青岛)) Breast cancer prognosis data processing method and system based on multi-dimensional information

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111986165B (zh) * 2020-07-31 2024-04-09 北京深睿博联科技有限责任公司 Calcification detection method and apparatus in breast images
CN112308853A (zh) * 2020-10-20 2021-02-02 平安科技(深圳)有限公司 Electronic device, medical image index generation method and apparatus, and storage medium
CN112348082B (zh) * 2020-11-06 2021-11-09 上海依智医疗技术有限公司 Deep learning model construction method, image processing method, and readable storage medium
CN112200161B (zh) * 2020-12-03 2021-03-02 北京电信易通信息技术股份有限公司 Face recognition and detection method based on a hybrid attention mechanism
CN112529900B (zh) * 2020-12-29 2024-03-29 广州华端科技有限公司 Method, apparatus, terminal and storage medium for matching ROIs in breast images
CN112712093B (zh) * 2021-01-11 2024-04-05 中国铁道科学研究院集团有限公司电子计算技术研究所 Security inspection image recognition method and apparatus, electronic device, and storage medium
CN113239951B (zh) * 2021-03-26 2024-01-30 无锡祥生医疗科技股份有限公司 Classification method, apparatus and storage medium for ultrasound breast lesions
CN113191392B (zh) * 2021-04-07 2023-01-24 山东师范大学 Information-bottleneck multi-task classification and segmentation method and system for breast cancer images
CN113662573B (zh) * 2021-09-10 2023-06-30 上海联影医疗科技股份有限公司 Breast lesion localization method, apparatus, computer device, and storage medium
CN114723670A (zh) * 2022-03-10 2022-07-08 苏州鸿熙融合智能医疗科技有限公司 Intelligent processing method for breast cancer lesion images

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945168A (zh) * 2017-11-30 2018-04-20 上海联影医疗科技有限公司 Medical image processing method and medical image processing system
CN109146848A (zh) * 2018-07-23 2019-01-04 东北大学 Computer-aided reference system and method fusing multi-modal breast images
CN110807788A (zh) * 2019-10-21 2020-02-18 腾讯科技(深圳)有限公司 Medical image processing method and apparatus, electronic device, and computer storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9911199B2 (en) * 2012-03-05 2018-03-06 Brainlab Ag Using different indicators for determining positional changes of a radiotherapy target
US10769791B2 (en) * 2017-10-13 2020-09-08 Beijing Keya Medical Technology Co., Ltd. Systems and methods for cross-modality image segmentation
CN108364006B (zh) * 2018-01-17 2022-03-08 超凡影像科技股份有限公司 Medical image classification apparatus based on multi-modal deep learning and construction method thereof
CN110738633B (zh) * 2019-09-09 2023-06-20 西安电子科技大学 Three-dimensional image processing method for body tissue and related device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945168A (zh) * 2017-11-30 2018-04-20 上海联影医疗科技有限公司 Medical image processing method and medical image processing system
CN109146848A (zh) * 2018-07-23 2019-01-04 东北大学 Computer-aided reference system and method fusing multi-modal breast images
CN110807788A (zh) * 2019-10-21 2020-02-18 腾讯科技(深圳)有限公司 Medical image processing method and apparatus, electronic device, and computer storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115937169A (zh) * 2022-12-23 2023-04-07 广东创新科技职业学院 Shrimp fry counting method and system based on high resolution and object detection
CN116416235A (zh) * 2023-04-12 2023-07-11 北京建筑大学 Feature region prediction method and apparatus based on multi-modal ultrasound data
CN116416235B (zh) * 2023-04-12 2023-12-05 北京建筑大学 Feature region prediction method and apparatus based on multi-modal ultrasound data
CN118039087B (zh) * 2024-04-15 2024-06-07 青岛山大齐鲁医院(山东大学齐鲁医院(青岛)) Breast cancer prognosis data processing method and system based on multi-dimensional information

Also Published As

Publication number Publication date
CN111428709A (zh) 2020-07-17
CN111428709B (zh) 2023-10-24

Similar Documents

Publication Publication Date Title
WO2021179491A1 (zh) Image processing method, apparatus, computer device and storage medium
US10580137B2 (en) Systems and methods for detecting an indication of malignancy in a sequence of anatomical images
Qian et al. M $^ 3$ Lung-Sys: A deep learning system for multi-class lung pneumonia screening from CT imaging
WO2021030629A1 (en) Three dimensional object segmentation of medical images localized with object detection
CN111553892B (zh) Deep-learning-based lung nodule segmentation and computation method, apparatus and system
US20110026791A1 (en) Systems, computer-readable media, and methods for classifying and displaying breast density
EP3814984B1 (en) Systems and methods for automated detection of visual objects in medical images
KR20230059799A (ko) Connected machine learning models using joint training for lesion detection
CN111325266B (zh) Method, apparatus and electronic device for detecting microcalcification clusters in mammography images
US10878564B2 (en) Systems and methods for processing 3D anatomical volumes based on localization of 2D slices thereof
WO2022164374A1 (en) Automated measurement of morphometric and geometric parameters of large vessels in computed tomography pulmonary angiography
Kaliyugarasan et al. Pulmonary nodule classification in lung cancer from 3D thoracic CT scans using fastai and MONAI
Hu et al. A multi-instance networks with multiple views for classification of mammograms
CN110738633A (zh) Three-dimensional image processing method for body tissue and related device
Lucassen et al. Deep learning for detection and localization of B-lines in lung ultrasound
Harrison et al. State-of-the-art of breast cancer diagnosis in medical images via convolutional neural networks (cnns)
WO2023198166A1 (zh) Image detection method, system, apparatus, and storage medium
Zhou et al. Improved breast lesion detection in mammogram images using a deep neural network
WO2022033598A1 (zh) Mammography image acquisition method and apparatus, computer device, and storage medium
Zhang et al. CAMS-Net: An attention-guided feature selection network for rib segmentation in chest X-rays
Park et al. 3D Breast Cancer Segmentation in DCE‐MRI Using Deep Learning With Weak Annotation
Ansar et al. Breast cancer segmentation in mammogram using artificial intelligence and image processing: a systematic review
Liu et al. Identification and diagnosis of mammographic malignant architectural distortion using a deep learning based mask regional convolutional neural network
Li et al. Automatic detection of pituitary microadenoma from magnetic resonance imaging using deep learning algorithms
Zhang et al. Pneumothorax segmentation of chest X-rays using improved UNet++

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20924181

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20924181

Country of ref document: EP

Kind code of ref document: A1