WO2018120942A1 - System and method for multi-model fusion automatic detection of lesions in medical images - Google Patents

System and method for multi-model fusion automatic detection of lesions in medical images

Info

Publication number
WO2018120942A1
WO2018120942A1 (PCT/CN2017/103529)
Authority
WO
WIPO (PCT)
Prior art keywords
detection
lesion
image
model
fusion
Prior art date
Application number
PCT/CN2017/103529
Other languages
English (en)
French (fr)
Inventor
周明
劳志强
张雪英
Original Assignee
西安百利信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 西安百利信息科技有限公司 filed Critical 西安百利信息科技有限公司
Publication of WO2018120942A1 publication Critical patent/WO2018120942A1/zh

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • The present invention relates to a system and method for automatic detection in medical images in conjunction with deep learning techniques, and in particular to detecting and evaluating suspicious lesions in medical images (such as mammography images) using a single detection model or a fusion of detection models, including deep-learning-based models.
  • breast cancer is the most common type of cancer that threatens women's health.
  • the key to prevention and treatment of breast cancer is early detection, early diagnosis and early treatment.
  • Common methods of breast health screening include X-ray, ultrasound, and magnetic resonance imaging (MRI).
  • Mammography is considered the most accurate detection method because it can reveal early, subtle suspicious lesions of various kinds (such as masses, microcalcifications, and architectural distortions).
  • Mammography diagnosis is mainly performed by doctors through visual inspection, so the quality of diagnosis depends on the doctor's experience and careful observation. When a doctor is inexperienced, pressed for time, inattentive, or fatigued, diagnostic quality suffers, causing missed diagnoses and misdiagnoses of breast lesions.
  • A breast computer-aided detection and diagnosis system (CADe/CADx) can help doctors improve detection sensitivity and reduce workload in clinical practice.
  • Traditional breast computer-aided detection and diagnosis systems typically include three main steps: feature extraction, feature selection, and lesion classification. These three steps must be handled separately and then integrated to tune the performance of the overall system.
  • Among these steps, effective feature extraction for each condition is the most important link;
  • the quality of this part of the work determines the effect of the subsequent feature selection and lesion classification.
  • Feature selection usually uses weak classifiers as criteria to select effective features from the full set of extracted features.
  • Then, at the lesion classification step, the ability to discriminate between different lesions and normal tissue is further enhanced using machine-learning-based classifiers such as artificial neural networks (ANN) and support vector machines (SVM).
  • Since the classifier used in feature selection generally differs from the classifier used in lesion classification, the "effective" features selected at the feature selection step may not be features that are truly effective for lesion classification;
  • moreover, the quality of feature extraction depends on the quality of each intermediate result of image preprocessing (including image enhancement, image segmentation, etc.), and manual intervention is required for parameter adjustment, manual optimization, and scheme selection, with careful design and trial and error needed to find satisfactory intermediate results. All of these factors affect the final performance of the diagnostic system, making traditional breast computer-aided diagnosis systems difficult to design and optimize.
  • The technique of deep learning can change the design paradigm of the traditional breast computer-aided diagnosis system, with the following three clear advantages:
  • First, deep learning can discover effective features directly from large amounts of training data, significantly reducing the targeted hand-engineering previously required during feature extraction; it can complement and even surpass the feature-recognition capability of traditional feature-extraction methods.
  • Second, the deep neural network architecture provided by deep learning can easily implement a hierarchical structure of feature interaction and inheritance, which greatly simplifies the process of feature selection.
  • Third, the previously separate steps of feature extraction, feature selection, and lesion classification can now be implemented within the same deep learning architecture, allowing overall performance optimization to proceed in a systematic and more convenient manner.
  • the present invention adopts the following technical solutions, taking the detection of breast lesions as an example:
  • the system for intelligent lesion detection of breast medical images includes the following five parts:
  • an image input module for acquiring digital or digitized breast images, which realizes segmentation of the breast region of interest by recognizing the nipple, skin, and chest wall muscles during image input;
  • An intelligent diagnosis module comprising a breast lesion detection processor and one or more configuration files; the breast lesion detection processor performs spatial transformation, contrast normalization, and appearance normalization on the breast image (i.e., the breast image after segmentation of the breast region of interest and downsampling), and performs feature extraction, feature selection, and lesion classification by calling the breast detection models;
  • breast detection models built from deep learning models, traditional CAD models, expert decision systems, and various other pattern recognition and machine learning techniques;
  • a medical record archive and a pathology database: the medical record archive is used to understand the patient's medical history so that current status and future development can be evaluated, while the pathology database helps find similar lesions from existing pathological feature information and provide early warning;
  • an image display module for displaying the breast image and lesion-related features.
  • A method for detecting and diagnosing lesions based on breast medical imaging data comprises the following steps: 1) analyzing the image data to identify breast tissue, the nipple, and the pectoral muscle; 2) transforming the original image data into aligned image data according to a standard space; 3) applying contrast normalization to the aligned image data; 4) applying appearance normalization to breast images from different devices or vendors; 5) detecting suspected breast lesion regions of interest (ROI) in the breast image; 6) building and applying a deep neural network model integrating convolutional and fully connected layers to reduce false positives; 7) building a model library of breast detection models and providing intelligent detection services on demand; and 8) annotating and displaying the location and contour of breast lesions on the breast image.
  • The above system and method both involve a new system architecture for breast health diagnosis, including:
  • a breast detection model library built with various pattern recognition and machine learning techniques;
  • a mapping method from the detection scores of an independent algorithm's detection space to the standard detection space;
  • and the ability to select an optimal set of algorithms such that the fused detection scores reflect the optimal performance of the system.
  • The invention overcomes the deficiencies of the traditional computer-aided diagnosis system by introducing deep learning technology, linking the previously separate feature extraction, feature selection, and lesion classification within an integrated convolutional neural network (CNN) model.
  • FIG. 1 is a flow chart of the operation of a conventional breast computer-aided diagnosis system.
  • FIG. 2 is a flow chart showing the operation of a deep-learning-based breast diagnosis system according to an embodiment of the present invention.
  • FIG. 2A is a schematic diagram of the spatial transformation of the breast region of interest (ROI) of FIG. 2 in accordance with an embodiment of the present invention.
  • FIG. 2B is a schematic diagram of the appearance normalization of the breast tissue of FIG. 2 according to an embodiment of the present invention.
  • FIG. 2C is a schematic diagram of detecting and extracting a suspicious lesion ROI according to an embodiment of the present invention.
  • FIG. 3 is a block diagram of a typical deep learning network in accordance with an embodiment of the present invention.
  • FIG. 3A is a flowchart showing the operation of a convolutional layer for generating feature vectors according to an embodiment of the present invention.
  • FIG. 3B is a schematic diagram of the convolutional layer of FIG. 3A generating multi-band feature images using a filter bank.
  • FIG. 3B1 is a schematic diagram illustrating the extraction of sample features from the convolutional layer of FIG. 3A, in accordance with an embodiment of the present invention.
  • FIG. 3C is a schematic diagram illustrating the pooling operation of the convolutional layer of FIG. 3A, in accordance with an embodiment of the present invention.
  • FIG. 3D is a schematic diagram illustrating methods for implementing various feature-level fusions in deep learning models according to an embodiment of the invention.
  • FIG. 3E is a schematic diagram showing feature-level fusion between the deep learning model and the traditional CAD model according to an embodiment of the present invention.
  • FIG. 3F is a schematic diagram illustrating a method of implementing score-level fusion among deep learning models, conventional CAD models, and other models such as expert decision systems, in accordance with an embodiment of the present invention.
  • FIG. 3G is a schematic diagram illustrating a method for implementing score normalization when fusing at the score level according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram showing the various components of a breast medical image diagnosis system in accordance with an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of an interface for inputting various image processing parameters to implement human-computer interaction according to an embodiment of the invention.
  • the workflow of the existing breast CAD diagnostic system is shown in Figure 1.
  • Each of the steps listed in the figure is, in most cases, optimized separately. Each step passes its result as an input parameter to the subsequent steps with almost no feedback; if an earlier step errs, the error still propagates to the next step until the final result is reached.
  • The mammography image 101 must first undergo breast contour segmentation 102, breast region-of-interest preprocessing 103, and detection of suspicious lesion candidates 104; the processing after this, for example feature extraction and selection 105, plays the most important role in the performance (sensitivity and specificity) of the overall system. It requires extensive computation, so it usually needs the aid of optimization assumptions and weak classifiers (which are easy to compute).
  • The selected subset of optimal features is then imported into more powerful machine learning models, such as artificial neural networks (ANN), to remove false positives 107 and improve the ability to discriminate between different target classes. However, because the weak classifiers used in feature extraction and selection 105 differ from the strong classifiers used, for instance, in ANN model training 106, it is hard to guarantee that the optimal feature subset derived by the weak classifiers performs best in machine learning with the strong classifiers.
  • The workflow of the deep-learning-based breast diagnosis system in an embodiment of the present invention is shown in FIG. 2.
  • Breast image data can be obtained by scanning X-ray film or from a CR or DR system.
  • The breast images include craniocaudal (CC) and mediolateral oblique (MLO) views, all of which are processed in the same manner.
  • The image segmentation step is used to determine the locations of the breast contour, nipple, and pectoral muscle.
  • Taking the mammography image 201 as an example, there are various methods for segmenting the breast tissue, nipple, and chest-wall muscle 202.
  • One implementation determines the breast contour in the CC view by estimating the position of the skin line, and in the MLO view from the pectoral muscle plus the breast tissue.
  • The area enclosed by the skin line and the pectoral muscle is the breast region of interest (ROI).
  • By spatially transforming the breast region of interest 203, breast image data from different equipment vendors can be mapped into a standardized breast space. There are several ways to perform the spatial transformation.
  • One implementation determines the transformation from an internal axis formed by the nipple and the chest wall or pectoral muscle.
  • Figure 2A shows a method of spatially transforming an input breast image (left CC view).
  • The input image 21 is transformed against the reference image 22 according to the corresponding landmark positions (the nipple and the midpoint of the chest wall) to obtain the aligned image 23. Notably, the breast sizes shown in the input image 21 and the reference image 22 differ markedly, and the aligned image 23 can reveal more structural detail than the original input image 21.
  • Breast region-of-interest contrast normalization 204 is performed on the aligned image 23 to improve the contrast of the input image in a normalized manner. There are several ways to normalize contrast.
  • One implementation uses a tone-curve transformation to convert the input's raw tissue-attenuation linear space into a nonlinear gray space aimed at enhancing the breast region of interest.
  • Another implementation uses a global density transformation based on histogram matching to enhance the contrast of the input image so that breast tissue of similar density across all input images has similar density values.
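  • For illustration, the histogram-matching variant can be sketched as follows; this is a minimal sketch in which scikit-image's `match_histograms` stands in for the global density transformation described above, and the choice of reference image is an assumption, not part of the patent:

```python
import numpy as np
from skimage.exposure import match_histograms

def normalize_contrast(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match the image's gray-level histogram to a reference mammogram so that
    breast tissue of similar density gets similar values across all inputs."""
    return match_histograms(image, reference)
```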
  • The breast tissue appearance normalization 205 then further corrects for differences in the appearance of mammograms provided by different vendors.
  • There are many appearance normalization methods in image processing, computer vision, and related fields.
  • One implementation uses a deep machine-learning appearance model based on image patches to nonlinearly transform each vendor's contrast-normalized images into a standard appearance space 37. For example, referring to FIG. 2B, separate appearance transformation models (appearance normalization models 1, 2, 3) are built for the contrast-normalized images provided by Vendor 1, Vendor 2, and Vendor 3.
  • The step of detecting suspicious lesion candidates 206 is used to discover potential lesions in the breast region of interest. There are several ways to detect suspicious lesions.
  • One implementation is shown in FIG. 2C.
  • The input image is enhanced by four band-pass filters, and one averaging filter creates a fifth band image.
  • Candidates (peaks) are obtained by selecting maxima from the band images of the four band-pass filters, searching over different angles to estimate peak size; a peak is then selected from the fifth band image. The peaks generated by all five band images are merged and then pruned according to a predefined limit on the number of candidates.
  • The size and location of some peaks must be corrected against the region of interest (ROI) segmented from the original image: peaks outside the ROI are deleted, and peak size and position are updated based on the peaks of the fifth band image.
  • MammoNet mainly uses convolutional neural network (CNN) technology, whose principle is partly inspired by the human visual processing mechanism: learning through multiple levels of filter kernels, each layer creating a representation of the data more abstract than the one before.
  • The term "depth" usually refers to the multi-level nesting of nonlinear functions.
  • Its role is similar to that of a virtual radiologist: by learning the knowledge and experience accumulated in large collections of breast cases, i.e., training the CNN model 207, it determines whether a breast lesion is present in the current image and identifies the location and contour of the lesion.
  • The architecture of MammoNet generally includes convolutional layers, pooling layers, and fully connected layers.
  • Each convolutional layer produces a feature map, which is then down-sampled in the pooling layer.
  • The pooling layer generally adopts max pooling, i.e., the maximum value is selected within neighboring regions of the feature map.
  • The disadvantage of pooling is that errors may be introduced during the data transformation, and as data shrinks during downsampling, localization accuracy may drop.
  • a fully connected layer can increase the performance of the entire system architecture.
  • The k-th feature map of layer L, Y_k^L, is computed from the convolution kernel W_k^L according to: Y_k^L = f( W_k^L * Y^{L-1} + b_k^L ), where * denotes the convolution operator, f denotes a nonlinear activation function, b is a bias term, and Y^{L-1} is the feature map of layer L-1.
  • To overcome vanishing gradients, the rectified linear unit (ReLU) function is used instead of the traditional sigmoid as the activation function of a: f(a) = max(0, a); practice shows this activation function is easier to train.
  • The CNN parameters Θ are usually estimated by maximum likelihood; taking the negative logarithm of the likelihood h(X|Θ) yields the entropy loss to be minimized, where y stands for the class label. This allows gradient-descent optimization to be used.
  • Mini-batch stochastic gradient descent (SGD) is often used when memory is insufficient or the data contain many redundant samples: instead of computing the gradient over the entire dataset, gradients are computed over several small batches. The standard backpropagation method is then used to adjust the weight coefficients of all layers.
  • A typical deep learning network structure according to an embodiment of the present invention (the "breast neural network") is shown in FIG. 3.
  • This CNN-based network structure includes five convolutional layers (convolution layers 1-5) 301, 303, 305, 306, 307; three pooling layers (pooling layers 1, 2, 5) 302, 304, 308; and three fully connected layers (fully connected layers 6-8) 309, 310, 311, containing approximately 60 million free parameters.
  • Some important training parameters, such as the number of kernels, stride size, and interval size, are also shown in the figure.
  • A convolutional-layer workflow for generating feature vectors in accordance with an embodiment of the present invention is shown in FIG. 3A (convolution layers 1, 2, and 5 in FIG. 3).
  • The feature vector generated by the pooling layer is passed to the subsequent fully connected layers.
  • The process by which the convolutional layer of FIG. 3A uses a filter bank to generate multi-band feature images is shown in FIG. 3B.
  • Filter banks are used to capture signals with different properties.
  • Thresholding and activation are used to eliminate noise and useless signals.
  • Features with different attributes extracted from the convolutional layer of FIG. 3A are shown in FIG. 3B1.
  • The pooling process for the convolutional layer of FIG. 3A is shown in FIG. 3C. Pooling and normalization are used to generate meaningful low-resolution feature maps. After such convolutional-layer processing, a set of simple and effective features can be extracted; in the subsequent fully connected layers, a better classification effect is obtained by further enhancing discriminative ability.
  • The kernel elements of all convolution filters are trained in a supervised manner by learning from labeled samples.
  • This is a major advantage over traditional computer-aided detection (CADe) methods, which require manual selection of features and depend on human design experience.
  • MammoNet has a better chance than traditionally trained systems of capturing the "core" of the image.
  • MammoNet-like systems can be trained without manual intervention from random initial models or pretrained model parameters, and the resulting models can detect a variety of different types of lesions or cancers. Such operation allows MammoNet to learn features whose spatial positions in the image remain unchanged.
  • Each lesion region of interest can be translated N_t times along a random vector in two-dimensional space, rotated N_r times about its center by a random angle α = [0, ..., 180], and computed at N_s different scales, generating N = N_s × N_t × N_r random observation dimensions per lesion ROI.
  • The training and test datasets can thus be extended by orders of magnitude, which enhances the generality and trainability of the system.
  • According to the MammoNet model, the candidate probability for the N random observation dimensions {P_1(x), ..., P_N(x)} computed for each lesion region of interest can be simply predicted as: P(x) = (1/N) × Σ_{i=1}^{N} P_i(x),
  • where P_i(x) is the classification probability value that MammoNet calculates for each individual image patch.
  • In theory, more complex calculation methods could be used, such as panning and mirroring the image patches, but in practice a simple average proves effective.
  • This random sampling method simply and effectively increases the amount of training data.
  • Averaging over the random observation dimensions further increases the robustness and stability of the MammoNet system.
  • The lesion regions of interest containing the candidates may have different shapes and sizes, but the size of the lesion ROI is fixed during CNN training. If the lesion ROI is too small, the image analysis lacks sufficient information; if it is too large, the computational cost increases and localization accuracy may drop. Therefore, in deep CNN training, non-uniform sampling performs better than uniform sampling.
  • Here α is a control quantity expressing how far the lesion ROI extends: α = 0 indicates a uniformly sampled lesion ROI.
  • Moving away from the patch center (as the absolute values of a and b increase), the x-axis and y-axis offsets (l and m) of the pixels to be sampled grow exponentially. This means dense sampling at the center and reduced sampling density toward the periphery.
  • the breast model library optimization 208 mainly includes fusion of convolutional neural networks, fusion of deep learning with other detection models, and fusion of scoring results of each detection model.
  • A method for implementing feature-level fusion across different deep learning models (i.e., CNN networks) according to an embodiment of the present invention is shown in FIG. 3D.
  • Suppose there are two CNN networks, and matrices A and B are the feature sets extracted from the last convolutional layer of each CNN network,
  • where M and N are the numbers of feature maps,
  • d is the size of a feature map,
  • and a_i and b_i are the i-th column elements of matrices A and B, each corresponding to one feature map.
  • The output of the fusion C is then given by: concatenation (baseline A in FIG. 3D), C = A ∪ B, where ∪ is the union (concatenation) operator; combination (baseline B), which joins the feature maps with per-map weight coefficients α and β and offsets γ and δ; or multi-dimensional fusion (baseline C), built from the element-wise product of the weighted feature maps, with α and β as learnable parameters. The method also extends to fully connected layers, for which d = 1, so A and B have dimensions 1×M and 1×N, respectively.
  • K is the only hyper-parameter; its size represents the capacity of the fusion network. Note that this method supports expanding the number of networks, because the size of the fusion network depends on K, not on the number of networks.
  • The fusion layer is trained using standard backpropagation and stochastic gradient descent.
  • The results of the fusion layer can easily be connected to many popular CNN software platforms, such as Caffe.
  • FIG. 3E shows the features generated by deep learning with the CNN model (CNN features 31) fused with the features of the traditional breast CAD model (manually selected features 32).
  • The feature fusion 33 can be a simple concatenation, or a concatenation with weight coefficients, followed by PCA and LDA.
  • PCA reduces the dimensionality of the concatenated feature vector, and LDA enhances feature discriminability while further reducing dimensionality.
  • The fused features are imported into a traditional artificial neural network (ANN 34), which thus benefits from both CNN and human-guided experience.
  • A method of implementing score-level fusion across a variety of deep learning models, traditional CAD models, and other models such as expert decision systems is shown in FIG. 3F.
  • Scores derived from different detection algorithms, such as several CNNs (CNN1 score 331, CNN2 score 332, etc.) and several ANNs (ANN1 score 334, ANN2 score 335, etc.),
  • are first converted to the standard detection space (target curve 323) shown in FIG. 3G; various fusion functions, linear or nonlinear, with or without weights, with or without score compensation,
  • then perform score fusion 337 to generate a final detection score, from which the classification result 338 (lesion or non-lesion tissue) is obtained.
  • Score normalization is implemented for detection score-level fusion, see FIG. 3G, which provides fusion of the detection scores obtained from the various detection algorithms (whether based on CNNs, ANNs, or other machine learning models).
  • It provides a high-level way of correcting the detection results of each detection algorithm: assuming the detection scores from the individual algorithms are complementary, the final optimal detection result can be obtained. Since the detection scores derived from the various algorithms have different meanings, they must first be transformed into a normalized space so that they can be compared with each other.
  • The normalizing transformation is usually performed using the false positive rate (FAR) curve 321; in the pattern recognition field, the FAR curve 322 in −log10 space is more meaningful than the raw FAR curve.
  • Suppose the FAR curve is expressed in −log10 space by the points (x_i, y_i), i = 1, ..., n, where x_i is the score on the FAR curve 322 in −log10 space, y_i is −log10(FAR) on the FAR curve 322 in −log10 space, and n is the total number of points on the curve.
  • Let y′_i be the first derivative of y_i; the spline coefficients y1_i, y2_i, and y3_i can be computed from the x_i, y_i, and y′_i.
  • Score normalization based on spline interpolation can be derived using Horner's rule: mappedScore_i = y_i + dx × (y1_i + dx × (y2_i + dx × y3_i)), where x_i ≤ rawScore_i ≤ x_{i+1}, dx = rawScore_i − x_i, and rawScore_i is the initial score.
  • The diagonal line is the target curve 323 obtained from a detection algorithm in −log10 space through normalization.
  • Using score fusion helps build a scalable intelligent diagnosis system: it makes the most of the current detection-algorithm library and achieves optimal detection performance. Moreover, if new technologies yield better algorithms in the future, they can be seamlessly integrated into the system, further improving the performance of the breast intelligent diagnosis system. This approach raises the design and extension of the breast intelligent diagnosis system to a higher level: the focus is on building an optimal algorithm library rather than on improving any specific detection algorithm.
  • The structure of the breast medical image intelligent diagnosis system according to an embodiment of the present invention is shown in FIG. 4.
  • the system for implementing intelligent diagnosis of breast medical images of the present invention comprises the following five parts: an image input module 44, an intelligent diagnosis module 40, a breast detection model library 41, a medical record archive and a pathology database 38, and an image display module 46.
  • The digital or digitized image 42 is transmitted to the intelligent diagnosis module 40 via the image input module 44.
  • The module includes a breast lesion detection processor that provides control logic, data processing, and data storage functions; it performs spatial transformation, contrast normalization, and appearance normalization on the breast image,
  • carries out feature extraction, selection, and classification by calling the breast detection models, and outputs the automatic detection results to the image display module 46.
  • The breast detection model library 41 includes deep learning models, traditional CAD models, expert decision systems, and various other breast detection models built with pattern recognition and machine learning techniques; the digital images include images acquired from film scanning, CR, or DR devices.
  • The intelligent diagnosis module 40 includes one or more configuration files for storing parameter values used under different conditions, and thereby provides further image processing and analysis functions to execute the workflow of the deep-learning-based breast diagnosis system shown in FIG. 2.
  • the medical record archive and the pathology database 38 can store and query the patient's medical records (such as age, family history, and medical history) as well as pathological characteristics of various lesions in order to assess and alert the patient's risk and future development.
  • the user can input commands, configure and adjust parameters through the operation interface of the image display module 46 at the console 39.
  • An interface for inputting parameters to implement human-computer interaction according to an embodiment of the present invention is shown in FIG. 5.
  • Common image processing parameters include adjustment of image data, definition of initial data, and generation parameters of feature maps.
  • Tab 30 is used to select a suitable set of parameters for input or display.
  • The typical parameters listed in the example of FIG. 5 include the initial values of the high-pass filter, such as adjusting the σ, width, and height values of the blur filter; the parameters used to generate localization regions, including the σ, τ, width, and height values of the Gabor filters and the size of the filter bank; and parameters for image smoothing, such as smoothing of the localization regions and smoothing of the feature maps.
  • the user can also use tab 30 to view intermediate results (characteristic maps) and final results.
  • The breast lesion detection and diagnosis 209 implemented by the above embodiments of the present invention, with the system constructed as above, completes the marking/visualization/diagnosis report 210 of the detection results. The diagnostic results include a relative risk indicator showing one or more identified breast lesions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Pathology (AREA)
  • Epidemiology (AREA)
  • Databases & Information Systems (AREA)
  • Primary Health Care (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A multi-model fusion system and method for automatically detecting lesions in medical images, comprising analyzing and detecting suspicious lesions in medical images, such as mammography images, using a single detection model or a fusion of detection models, including deep-learning-based models. The method achieves automatic extraction of lesion features and can be used to detect and annotate one or more types of lesions.

Description

System and method for multi-model fusion automatic detection of lesions in medical images
Technical Field
The present invention relates to a system and method for automatic detection in medical images in conjunction with deep learning techniques, and in particular to a system and method for detecting and evaluating suspicious lesions in medical images (such as mammography images) using a single detection model or a fusion of detection models, including deep-learning-based models.
Background Art
Breast cancer is the most common cancer threatening women's health. The key to its prevention and treatment is early detection, early diagnosis, and early treatment. Common breast screening modalities include X-ray, ultrasound, and magnetic resonance imaging (MRI). Among these, mammography is considered the most accurate detection method because it can reveal early, subtle suspicious lesions of various kinds (such as masses, microcalcifications, and architectural distortions). At present, mammography diagnosis is performed mainly by doctors through visual inspection, and its quality depends on the doctor's experience and careful observation. When a doctor is inexperienced, pressed for time, inattentive, or fatigued, diagnostic quality suffers, causing missed diagnoses and misdiagnoses of breast lesions.
Breast computer-aided detection and diagnosis (CADe/CADx) systems can help doctors improve detection sensitivity and reduce workload in clinical practice. Traditional breast computer-aided detection and diagnosis systems typically include three main steps: feature extraction, feature selection, and lesion classification. These three steps must be handled separately and then integrated to tune the performance of the overall system. Among them, effective feature extraction for each condition is the most important link: its quality determines the effectiveness of the subsequent feature selection and lesion classification. Feature selection usually uses weak classifiers as criteria to select effective features from the full set of extracted features. Then, at the lesion classification step, machine-learning-based classifiers such as artificial neural networks (ANN) and support vector machines (SVM) further strengthen the ability to discriminate between different lesions and normal tissue. However, since the classifier used in feature selection usually differs from the one used in lesion classification, the "effective" features selected at the feature selection step may not be truly effective for lesion classification. Moreover, the quality of feature extraction depends on the quality of every intermediate result of image preprocessing (including image enhancement, image segmentation, etc.), requiring manual intervention for parameter adjustment, hand optimization, and scheme selection, with careful design and repeated trial and error to find satisfactory intermediate results. All of these factors affect the final performance of the diagnostic system, making traditional breast computer-aided diagnosis systems difficult to design and optimize.
Deep learning can change the design paradigm of the traditional breast computer-aided diagnosis system, with the following three clear advantages. First, deep learning can discover effective features directly from large amounts of training data, significantly reducing the targeted hand-engineering previously required during feature extraction; it can complement and even surpass the feature-recognition capability of traditional feature-extraction methods. Second, the deep neural network architecture provided by deep learning can conveniently implement a hierarchical structure of feature interaction and inheritance, which greatly simplifies the process of feature selection. Third, the three formerly separate steps of feature extraction, feature selection, and lesion classification can now be implemented within the same deep learning architecture, so that overall performance optimization can proceed in a systematic and more convenient manner.
However, completely replacing traditional computer-aided detection techniques with deep learning also has shortcomings. A computer-aided diagnosis built on a single deep learning strategy lacks a comprehensive treatment of the various traditional computer-aided detection models and of the joint use of multiple detection models, and is not necessarily the optimal detection-model scheme.
Summary of the Invention
The object of the present invention is to provide a system and method, combining deep learning techniques, for multi-model fusion automatic detection of lesions in medical images.
To achieve the above object, the present invention adopts the following technical solutions, taking breast lesion detection as an example:
The system for intelligent lesion detection in breast medical images comprises the following five parts:
1) an image input module for acquiring digital or digitized breast images, which segments the breast region of interest during image input by recognizing the nipple, skin, and chest-wall muscle;
2) an intelligent diagnosis module comprising a breast lesion detection processor and one or more configuration files; the breast lesion detection processor performs spatial transformation, contrast normalization, and appearance normalization on the breast image (i.e., the breast image after segmentation of the breast region of interest and downsampling), and performs feature extraction, feature selection, and lesion classification by invoking the breast detection models;
3) breast detection models built from deep learning models, traditional CAD models, expert decision systems, and various other pattern recognition and machine learning techniques;
4) a medical record archive for storing and querying patient records, and a pathology database containing the pathological features of various lesions; the medical record archive is used to understand the patient's history so that current status and future development can be evaluated, while the pathology database helps find similar lesions from existing pathological feature information and issue early warnings;
5) an image display module for displaying the breast image and lesion-related features.
The method for lesion detection and diagnosis based on breast medical imaging data comprises the following steps:
1) analyzing the image data to identify breast tissue, the nipple, and the pectoral muscle;
2) transforming the original image data into aligned image data according to a standard space;
3) applying contrast normalization to the aligned image data;
4) applying appearance normalization to breast images from different devices or vendors, such as film, GE, Siemens, Hologic, and Kodak CR/DR;
5) detecting suspicious breast-lesion regions of interest (ROI) in the breast image;
6) building and applying a deep neural network model integrating convolutional layers and fully connected layers to reduce false positives;
7) building a model library containing deep learning models, traditional CAD models, expert decision systems, and breast detection models built with various other pattern recognition and machine learning techniques, and providing intelligent detection services on demand, such as lesion-type selection (mass detection, microcalcification detection, architectural distortion detection) and clinical-service selection (rapid screening or precise detection, rapid screening trading some accuracy for speed);
8) annotating and displaying the location and contour of breast lesions on the breast image.
Both the system and the method above involve a new system architecture for breast health diagnosis, comprising:
1) a breast detection model library built with various pattern recognition and machine learning techniques;
2) a method for mapping detection scores obtained in an independent algorithm's detection space into a standard detection space;
3) the ability to select an optimal set of algorithms such that the fused detection scores reflect the optimal performance of the system.
The beneficial effects of the present invention are as follows:
On the one hand, by introducing deep learning, the invention overcomes the shortcomings of traditional computer-aided diagnosis systems: the formerly separate feature extraction, feature selection, and lesion classification are linked and handled within one integrated convolutional neural network (CNN) model, so the overall system runs efficiently and intelligently and is easier to debug and optimize. On the other hand, by fusing traditional CAD models, expert decision systems, and various other pattern recognition and machine learning techniques into a detection model library, detection is performed with an optimal detection-model scheme. The invention thus improves the accuracy of finding and detecting lesions in medical images and helps doctors improve diagnostic results, offering considerable theoretical value and economic benefit.
Brief Description of the Drawings
FIG. 1 is a workflow diagram of a traditional breast computer-aided diagnosis system.
FIG. 2 is a workflow diagram of a deep-learning-based breast diagnosis system according to an embodiment of the present invention.
FIG. 2A is a schematic diagram of the spatial transformation of the breast region of interest (ROI) of FIG. 2 according to an embodiment of the present invention.
FIG. 2B is a schematic diagram of the appearance normalization of the breast tissue of FIG. 2 according to an embodiment of the present invention.
FIG. 2C is a schematic diagram of detecting and extracting suspicious lesion ROIs according to an embodiment of the present invention.
FIG. 3 is a diagram of a typical deep learning network structure according to an embodiment of the present invention.
FIG. 3A is a workflow diagram of a convolutional layer for generating feature vectors according to an embodiment of the present invention.
FIG. 3B is a schematic diagram of the convolutional layer of FIG. 3A generating multi-band feature images using a filter bank.
FIG. 3B1 is a schematic diagram illustrating the extraction of sample features from the convolutional layer of FIG. 3A according to an embodiment of the present invention.
FIG. 3C is a schematic diagram illustrating the pooling operation of the convolutional layer of FIG. 3A according to an embodiment of the present invention.
FIG. 3D is a schematic diagram illustrating methods for implementing various feature-level fusions in deep learning models according to an embodiment of the invention.
FIG. 3E is a schematic diagram illustrating feature-level fusion between a deep learning model and a traditional CAD model according to an embodiment of the present invention.
FIG. 3F is a schematic diagram illustrating a method of implementing score-level fusion among deep learning models, traditional CAD models, and other models such as expert decision systems, according to an embodiment of the present invention.
FIG. 3G is a schematic diagram illustrating a method for implementing score normalization when fusing at the score level according to an embodiment of the present invention.
FIG. 4 is a schematic diagram showing the components of a breast medical image diagnosis system according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of an interface for inputting various image processing parameters to implement human-computer interaction according to an embodiment of the invention.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the drawings and embodiments; the embodiments explain the invention and do not limit it.
The workflow of an existing breast CAD diagnosis system is shown in FIG. 1. In most cases each listed step is optimized separately; every step passes its result as an input parameter to the subsequent steps with almost no feedback. If an earlier step errs, the error still propagates to subsequent steps until the final result. Generally, the mammography image 101 must first undergo breast contour segmentation 102 and breast region-of-interest preprocessing 103, and suspicious lesion candidates 104 are detected; the processing after that, e.g., feature extraction and selection 105, plays the most important role in the performance (sensitivity and specificity) of the whole system. It requires extensive computation and therefore usually relies on optimization assumptions and weak classifiers (easy to compute) for assistance. Afterwards, the selected optimal feature subset is fed into more powerful machine learning models, such as artificial neural networks (ANN), to remove false positives 107 and improve the ability to discriminate between different target classes. However, because the weak classifiers used in feature extraction and selection 105 differ from the strong classifiers used, for instance, in ANN model training 106, it is hard to guarantee that the optimal feature subset derived by the weak classifiers achieves the best results in machine learning with the strong classifiers.
The workflow of the deep-learning-based breast diagnosis system in an embodiment of the present invention is shown in FIG. 2. Breast image data can be obtained by scanning X-ray film or from a CR or DR system. The breast images include craniocaudal (CC) and mediolateral oblique (MLO) views, all processed in the same way. The image segmentation step determines the positions of the breast contour, nipple, and pectoral muscle. Taking the mammography image 201 as an example, there are various methods for segmenting breast tissue, nipple, and chest-wall muscle 202. One implementation determines the breast contour in the CC view by estimating the position of the skin line, and in the MLO view from the pectoral muscle plus the breast tissue. The area enclosed by the skin line and the pectoral muscle is the breast region of interest (ROI). Spatially transforming the breast ROI 203 maps breast image data originating from different equipment vendors into a standardized breast space. There are several ways to perform the spatial transformation; one implementation determines the transformation from an internal axis formed by the nipple and the chest wall or pectoral muscle. For example, FIG. 2A shows the spatial transformation of an input breast image (left CC view): the input image 21 is transformed against the reference image 22 according to corresponding landmark positions (the nipple and the midpoint of the chest wall), yielding the aligned image 23. Notably, the breast sizes shown in the input image 21 and the reference image 22 differ markedly, and the aligned image 23 can reveal more structural detail than the original input image 21. Breast ROI contrast normalization 204 is applied to the aligned image 23 to raise the contrast of the input image in a normalized manner. There are several contrast normalization methods. One implementation uses a tone-curve transformation to convert the raw tissue-attenuation linear space of the input into a nonlinear gray space aimed at enhancing the breast ROI. Another uses a global density transformation based on histogram matching to enhance the input image's contrast so that breast tissue of similar density across all input images has similar density values. Breast tissue appearance normalization 205 then further corrects differences among mammograms provided by different vendors. There are many appearance normalization methods in image processing, computer vision, and related fields. One implementation uses a deep machine-learning appearance model based on image patches to nonlinearly transform each vendor's contrast-normalized images into a standard appearance space 37. For example, referring to FIG. 2B, separate appearance transformation models (appearance normalization models 1, 2, 3) are built for the contrast-normalized images provided by Vendor 1, Vendor 2, and Vendor 3. In the present invention, using appearance transformation models as driver modules supports images from different vendors, and the vendor list can easily be extended. The suspicious lesion candidate detection step 206 finds potential lesions within the breast ROI. There are various suspicious lesion detection techniques. In one implementation, shown in FIG. 2C, the input image is enhanced by four band-pass filters and one averaging filter that creates a fifth band image. Candidates (peaks) are obtained by taking maxima from the band images of the four band-pass filters, searching over different angles to estimate peak size; a peak is then selected from the fifth band image. The peaks produced by all five band images are merged and then pruned according to a predefined limit on the number of candidates. The size and position of some peaks must be corrected against the ROI segmented from the original image; peaks outside the ROI are deleted, and peak size and position are updated according to the peaks of the fifth band image.
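For illustration, the band-image candidate search just described can be sketched as follows. This is a minimal sketch, assuming difference-of-Gaussian band-pass filters and scikit-image's peak picker in place of the patent's specific four band-pass filters and averaging filter; the σ scales, `min_distance`, and the candidate limit are illustrative assumptions, not values from the patent:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter
from skimage.feature import peak_local_max

def detect_candidates(roi: np.ndarray, num_limit: int = 50) -> np.ndarray:
    """Return (row, col) candidate peaks from 4 band-pass images plus 1 averaged image."""
    sigmas = [1, 2, 4, 8, 16]                       # illustrative scales
    # Four band-pass images via differences of Gaussians at increasing scales.
    bands = [gaussian_filter(roi, s0) - gaussian_filter(roi, s1)
             for s0, s1 in zip(sigmas[:-1], sigmas[1:])]
    bands.append(uniform_filter(roi, size=31))      # fifth, averaged band image
    # Select local maxima (candidate peaks) from every band image, then merge.
    peaks = np.vstack([peak_local_max(b, min_distance=10, num_peaks=num_limit)
                       for b in bands])
    # Merge duplicates and prune to the predefined candidate limit.
    return np.unique(peaks, axis=0)[:num_limit]
```

As the description states, peaks falling outside the segmented breast ROI would then be deleted and the surviving peaks re-sized and re-positioned against the fifth band image.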
Next comes the deep-learning-based machine learning step, which we call the "breast neural network" (MammoNet). MammoNet mainly uses convolutional neural network (CNN) technology, whose principle is partly inspired by the human visual processing mechanism: learning through multiple levels of filter kernels, each layer creating a representation of the data more abstract than the layer below. The term "depth" usually refers to the multi-level nesting of nonlinear functions. Practice has shown that deep CNN technology performs outstandingly in image analysis. Its role resembles that of a virtual radiologist: by learning the knowledge and experience accumulated in large collections of breast cases, i.e., training the CNN model 207, it judges whether a breast lesion is present in the current image and marks the location of the lesion and the contour of the lesion region.
The MammoNet architecture roughly comprises convolutional layers, pooling layers, and fully connected layers. Each convolutional layer produces a feature map, which is then down-sampled in a pooling layer. Pooling generally uses max pooling, i.e., taking the maximum value within neighboring regions of the feature map. Its drawback is that the data transformation may introduce errors, and as data shrinks during downsampling, localization accuracy may drop. Fully connected layers can increase the performance of the overall architecture.
The k-th feature map of layer L, Y_k^L, is computed from the convolution kernel W_k^L according to:

Y_k^L = f( W_k^L * Y^{L-1} + b_k^L )

where * denotes the convolution operator, f represents a nonlinear activation function, b is a bias term, and Y^{L-1} is the feature map of layer L-1. To overcome vanishing gradients, the rectified linear unit (ReLU) function is used in place of the traditional sigmoid as the activation function of a:

f(a) = max(0, a)

Practice shows that this activation function is easier to train. The parameters Θ of the CNN model are usually estimated by maximum likelihood, where h(X|Θ) is the posterior probability function of sample X and N is the total number of layers. For ease of computation, taking the negative logarithm of the likelihood turns this into the following minimization, i.e., the entropy loss:

J(Θ) = −log h(X|Θ)
Here y denotes the class label. This allows gradient-descent optimization to be used. For large datasets, when memory is insufficient or the data contain many redundant samples, mini-batch stochastic gradient descent (SGD) is typically used: instead of computing gradients over the entire dataset, gradients are computed over several small batches. The standard backpropagation method is then used to adjust the weight coefficients of all layers.
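For illustration, the ReLU activation, entropy loss, and mini-batch SGD with backpropagation described above can be sketched in PyTorch as follows; the model shape, batch handling, and learning rate are illustrative assumptions, not values from the patent:

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(256, 128), nn.ReLU(),  # f(a) = max(0, a)
                      nn.Linear(128, 2))               # lesion vs. non-lesion tissue
loss_fn = nn.CrossEntropyLoss()                        # entropy loss: -log h(X|Θ)
opt = torch.optim.SGD(model.parameters(), lr=0.01)     # mini-batch SGD

def train_step(x_batch: torch.Tensor, y_batch: torch.Tensor) -> float:
    """One mini-batch gradient step using standard backpropagation."""
    opt.zero_grad()
    loss = loss_fn(model(x_batch), y_batch)  # y_batch holds the class labels y
    loss.backward()                          # backpropagate through all layers
    opt.step()                               # adjust the weight coefficients
    return loss.item()
```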
A typical deep learning network structure according to an embodiment of the present invention (the "breast neural network" described above) is shown in FIG. 3. This CNN-based network structure includes five convolutional layers (convolution layers 1-5) 301, 303, 305, 306, 307; three pooling layers (pooling layers 1, 2, 5) 302, 304, 308; and three fully connected layers (fully connected layers 6-8) 309, 310, 311, containing approximately 60 million free parameters. In addition, some important training parameters, such as the number of kernels, stride size, and interval size, are shown in the figure.
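A sketch of such a five-convolution, three-pooling, three-fully-connected network in PyTorch follows. FIG. 3 (not reproduced here) carries the actual kernel counts, strides, and spacings, so the AlexNet-style values below, and the assumed 227×227 single-channel input, are illustrative choices made only so that the free-parameter count lands near the stated 60 million:

```python
from torch import nn

mammonet = nn.Sequential(
    nn.Conv2d(1, 96, kernel_size=11, stride=4), nn.ReLU(),    # convolution layer 1 (301)
    nn.MaxPool2d(3, stride=2),                                # pooling layer 1 (302)
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),  # convolution layer 2 (303)
    nn.MaxPool2d(3, stride=2),                                # pooling layer 2 (304)
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(), # convolution layer 3 (305)
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(), # convolution layer 4 (306)
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(), # convolution layer 5 (307)
    nn.MaxPool2d(3, stride=2),                                # pooling layer 5 (308)
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),                  # fully connected layer 6 (309)
    nn.Linear(4096, 4096), nn.ReLU(),                         # fully connected layer 7 (310)
    nn.Linear(4096, 2),                                       # fully connected layer 8 (311)
)
print(sum(p.numel() for p in mammonet.parameters()))          # roughly 58 million
```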
The workflow of a convolutional layer for generating feature vectors according to an embodiment of the present invention is shown in FIG. 3A (convolution layers 1, 2, and 5 in FIG. 3). The feature vector generated by the pooling layer is passed on to the subsequent fully connected layers.
The process by which the convolutional layer of FIG. 3A uses a filter bank to generate multi-band feature images is shown in FIG. 3B. The filter bank captures signals with different properties; thresholding and activation eliminate noise and useless signals. Features with different attributes extracted from the convolutional layer of FIG. 3A are shown in FIG. 3B1.
The pooling process of the convolutional layer of FIG. 3A is shown in FIG. 3C. Pooling and normalization generate meaningful low-resolution feature maps. After processing by several such convolutional layers, a set of concise and effective features can be extracted; in the subsequent fully connected layers, a better classification effect is achieved by further strengthening discriminative ability.
In the present invention, the kernel elements of all convolution filters are trained in a supervised manner by learning from labeled samples. This is a major advantage over traditional computer-aided detection (CADe) methods, which require manually chosen features and depend on human design experience. MammoNet has a better chance than traditionally trained systems of capturing the "core" data of the images. Moreover, a MammoNet-like system can be trained without manual intervention from a random initial model or pretrained model parameters, and the resulting model can detect a variety of different lesion or cancer types. Such operation lets MammoNet learn features whose spatial positions in the image remain unchanged. After the convolutional layers, these features enter a locally connected layer (similar to a convolutional layer but without shared weight coefficients) and are then classified in the fully connected neural network layers. The deeper the convolutional-layer dimension in MammoNet, the higher-order the image features it can encode. This neural network system learns and processes features by itself and classifies them, finally providing lesion classification and probability estimates for each input image.
Although the above architecture is powerful, it requires data adjustment for geometric transformations such as rotation and scaling. In the context of deep learning, data augmentation techniques are commonly used to generate new samples from existing data, addressing data scarcity and overfitting. For mammography, the main challenges come from image rotation, image scaling, image translation, and the amount of overlapping tissue.
In the present invention, to increase the diversity of the training data and avoid overfitting, it is necessary to introduce multiple observation dimensions for each lesion region of interest. Each lesion ROI can be translated N_t times along a random vector in two-dimensional space. In addition, each lesion ROI can be rotated N_r times about its center by a random angle α = [0, ..., 180]. These translated and rotated lesion ROIs are then computed at N_s different scales. This process generates N = N_s × N_t × N_r random observation dimensions for each lesion ROI. The training and test datasets can thus be expanded by orders of magnitude, which enhances the generality and trainability of the system. According to the MammoNet model, the candidate probability for the N random observation dimensions {P_1(x), ..., P_N(x)} of each lesion ROI can be simply predicted as:

P(x) = (1/N) × Σ_{i=1}^{N} P_i(x)

where P_i(x) is the classification probability value that MammoNet computes for each individual image patch. In theory, more complex calculation methods could be used, for example translating and mirroring the image patches, but in practice a simple average proves effective. This random sampling method simply and effectively increases the amount of training data, and averaging over the random observation dimensions further increases the robustness and stability of the MammoNet system.
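For illustration, the N = N_s × N_t × N_r random observation dimensions and the probability averaging can be sketched as below; `classify_patch`, a hypothetical scorer returning P_i(x) for one patch, and the offset, angle, and scale ranges are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import shift, rotate, zoom

def candidate_probability(patch, classify_patch, Nt=4, Nr=4, Ns=2, seed=0):
    """Average P_i(x) over Nt translations x Nr rotations x Ns scales of a lesion ROI."""
    rng = np.random.default_rng(seed)
    probs = []
    for _ in range(Nt):
        t = shift(patch, rng.uniform(-5, 5, size=2))           # random translation vector
        for _ in range(Nr):
            r = rotate(t, rng.uniform(0, 180), reshape=False)  # random rotation about center
            for s in rng.uniform(0.8, 1.2, size=Ns):           # Ns different scales
                probs.append(classify_patch(zoom(r, s)))       # P_i(x) per observation
    return float(np.mean(probs))                               # P(x) = (1/N) * sum_i P_i(x)
```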
In the present invention, when detecting suspicious lesion candidates 206, the lesion ROIs containing the candidates may vary in shape and size, but the size of the lesion ROI is fixed during CNN training. If the lesion ROI is too small, image analysis lacks sufficient information; if it is too large, the computational cost increases and localization accuracy may drop. Therefore, in deep CNN training, non-uniform sampling performs better than uniform sampling.
Let P_{i,j} be an n×n non-uniformly sampled lesion region of interest near pixel (i, j) in image I, with P_{i,j}(a, b) = I(i + l, j + m), where a and b are integers within the offset range from the center of the lesion ROI, and l and m are the offsets of the corresponding pixels in image I, computed as functions of a and b. Here α is a control quantity expressing how far the lesion ROI extends: α = 0 denotes a uniformly sampled lesion ROI. Moving away from the center of the image patch (as the absolute values of a and b increase), the x-axis and y-axis offsets (l and m) of the pixels to be sampled grow exponentially. This means dense sampling at the center, with sampling density decreasing toward the periphery.
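For illustration, center-dense non-uniform sampling can be sketched as below. The patent's exact offset formulas for l and m appear only as images in the original filing, so the power-law mapping used here, which reduces to uniform sampling at α = 0 and grows faster than linearly away from the center, is an assumption standing in for them:

```python
import numpy as np

def nonuniform_roi(I: np.ndarray, i: int, j: int, n: int = 33, alpha: float = 0.5) -> np.ndarray:
    """Sample an n-by-n lesion ROI around pixel (i, j) of image I, densely at the center."""
    half = n // 2
    ab = np.arange(-half, half + 1)            # integer offsets a, b from the ROI center
    # Assumed mapping: l = sign(a) * |a|**(1 + alpha); alpha = 0 reduces to l = a (uniform).
    lm = np.round(np.sign(ab) * np.abs(ab) ** (1.0 + alpha)).astype(int)
    rows = np.clip(i + lm, 0, I.shape[0] - 1)  # P(a, b) = I(i + l, j + m)
    cols = np.clip(j + lm, 0, I.shape[1] - 1)
    return I[np.ix_(rows, cols)]
```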
Breast model library optimization 208 mainly includes fusion among convolutional neural networks, fusion of deep learning with other detection models, and fusion of the scoring results of the individual detection models.
An embodiment of the present invention illustrates methods of feature-level fusion among different deep learning models (i.e., CNN networks), as shown in FIG. 3D. Suppose there are two CNN networks, and matrices A and B are the feature sets extracted from the last convolutional layer of each CNN network, where M and N are the numbers of feature maps, d is the size of a feature map, and a_i and b_i are the i-th column elements of A and B, each corresponding to one feature map. The output of the fusion C is:

Concatenation (baseline A in FIG. 3D): C = A ∪ B, where ∪ is the union (concatenation) operator.

Combination (baseline B in FIG. 3D): the feature maps are combined under the union operator with per-feature-map weight coefficients α and β and offsets γ and δ.

Multi-dimensional fusion (baseline C in FIG. 3D): the fusion is built from the element-wise product of the weighted feature maps, where γ and δ are offsets and α and β are the per-feature-map weight coefficients, which are learnable parameters.

The above methods can also be extended to fusion of fully connected layers. Unlike convolutional layers, d = 1 for a fully connected layer, so the dimensions of A and B are 1×M and 1×N, respectively.

Here α and β play an important role in each network: they give higher weights to important features and can be used for prediction. K is the only hyper-parameter, and its size represents the capacity of the fusion network. Note that this method supports expanding the number of networks, because the size of the fusion network depends on K rather than on the number of networks.
The fusion layer is trained with standard backpropagation and stochastic gradient descent. The results of the fusion layer can easily be connected to many popular CNN software platforms, such as Caffe.
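For illustration, a fusion layer covering these variants might look like the following PyTorch sketch. Because the original fusion formulas are reproduced only as images, the exact forms of the weighted combination and of the projection to K output maps are assumptions that follow the textual description (per-map weights α and β, offsets γ and δ, capacity hyper-parameter K):

```python
import torch
from torch import nn

class FusionLayer(nn.Module):
    """Fuse feature sets A (M maps) and B (N maps) into K maps; K sets the capacity."""
    def __init__(self, M: int, N: int, K: int):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(M))    # learnable per-map weights for A
        self.beta = nn.Parameter(torch.ones(N))     # learnable per-map weights for B
        self.gamma = nn.Parameter(torch.zeros(M))   # offsets
        self.delta = nn.Parameter(torch.zeros(N))
        self.project = nn.Conv2d(M + N, K, kernel_size=1)  # fused size depends on K only

    def forward(self, A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
        wA = A * self.alpha.view(1, -1, 1, 1) + self.gamma.view(1, -1, 1, 1)
        wB = B * self.beta.view(1, -1, 1, 1) + self.delta.view(1, -1, 1, 1)
        C = torch.cat([wA, wB], dim=1)              # weighted combination, then union
        return self.project(C)                      # trainable with backprop and SGD
```

Because the output size depends on K rather than on how many feature sets are concatenated, adding further networks only widens the input of the 1×1 projection.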
An embodiment of the present invention illustrates feature-level fusion of deep learning with a traditional CAD model; applicable fusion methods include techniques common in pattern recognition/machine learning, such as concatenation, principal component analysis (PCA), and linear discriminant analysis (LDA). FIG. 3E shows features generated by deep learning with the CNN model (CNN features 31) fused with features produced by the traditional breast CAD model under manual guidance (manually selected features 32). Feature fusion 33 can be a simple concatenation, or a concatenation with weight coefficients, followed by PCA and LDA: PCA reduces the dimensionality of the concatenated feature vector, while LDA strengthens feature discriminability and further reduces dimensionality. The fused features are fed into a traditional artificial neural network (ANN 34). The ANN model thus produced benefits from both CNN and human-guided experience, achieving better breast detection results.
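For illustration, this concatenation-PCA-LDA-ANN chain can be sketched with scikit-learn; the component counts and the MLP size are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def fuse_and_classify(cnn_feats, manual_feats, labels):
    """Concatenate CNN features 31 with manually selected features 32, then PCA -> LDA -> ANN."""
    fused = np.hstack([cnn_feats, manual_feats])    # feature fusion 33 (simple concatenation)
    clf = make_pipeline(
        PCA(n_components=64),                       # reduce concatenated dimensionality
        LinearDiscriminantAnalysis(),               # sharpen discrimination, reduce further
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),  # traditional ANN 34
    )
    return clf.fit(fused, labels)
```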
An embodiment of the present invention illustrates score-level fusion among multiple deep learning models, traditional CAD models, and other models such as expert decision systems; see FIG. 3F. Scores derived from different detection algorithms, e.g., from several CNNs (CNN1 score 331, CNN2 score 332, etc.) and several ANNs (ANN1 score 334, ANN2 score 335, etc.), are first transformed into the standard detection space shown in FIG. 3G (target curve 323); then various fusion functions, linear or nonlinear, with or without weights, with or without score compensation, perform score fusion 337 to generate the final detection score, from which classification yields the detection result 338 (lesion or non-lesion tissue).
An embodiment of the present invention implements score normalization for detection score-level fusion; see FIG. 3G. It provides fusion of detection scores obtained from a variety of detection algorithms (whether based on CNNs, ANNs, or other machine learning models). It offers a high-level way to correct the detection results of each detection algorithm: assuming the detection scores from the individual algorithms are complementary, a final optimal detection result can be derived. Since the detection scores from the different algorithms have different meanings, they must first be transformed into a normalized space so they can be compared with one another. Normalization is usually performed with the false positive rate (FAR) curve 321; in the pattern recognition field, the FAR curve 322 in −log10 space is more meaningful than the raw FAR curve.
Suppose the FAR curve is represented in −log10 space by the points (x_i, y_i), i = 1, ..., n, where x_i is the score on the FAR curve 322 in −log10 space, y_i is −log10(FAR) on the FAR curve 322 in −log10 space, and n is the total number of points on the curve. Let y′_i be the first derivative of y_i. The spline coefficients y1_i, y2_i, and y3_i can then be computed from the x_i, y_i, and y′_i. Score normalization based on spline interpolation can be derived using Horner's rule:

mappedScore_i = y_i + dx × (y1_i + dx × (y2_i + dx × y3_i))

where x_i ≤ rawScore_i ≤ x_{i+1} and dx = rawScore_i − x_i, rawScore_i being the initial score.

In FIG. 3F, the diagonal line is the target curve 323 obtained by transforming the detection algorithm through normalization in −log10 space.
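For illustration, the spline-based score normalization can be sketched as follows: each algorithm's raw scores are mapped onto its −log10(FAR) curve with a cubic spline, whose stored piecewise coefficients are exactly what Horner's rule evaluates. The example curve points are illustrative, not data from the patent:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# (x_i, y_i): score on the FAR curve vs. -log10(FAR), i = 1..n, with x_i increasing.
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])                   # raw scores of one algorithm
y = -np.log10(np.array([0.5, 0.2, 0.05, 0.01, 0.001]))    # -log10(FAR) at those scores

spline = CubicSpline(x, y)   # spline coefficients derived from x_i, y_i and derivatives

def normalize_score(raw: float) -> float:
    """mappedScore = y_i + dx*(y1_i + dx*(y2_i + dx*y3_i)) with dx = raw - x_i (Horner)."""
    i = int(np.clip(np.searchsorted(x, raw) - 1, 0, len(x) - 2))  # x_i <= raw <= x_{i+1}
    dx = raw - x[i]
    y3, y2, y1, y0 = spline.c[:, i]      # cubic, quadratic, linear, constant coefficients
    return y0 + dx * (y1 + dx * (y2 + dx * y3))

print(normalize_score(0.6))  # scores from different algorithms become directly comparable
```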
Using score fusion helps build a scalable intelligent diagnosis system. It helps make the most of the current detection-algorithm library and obtain the best detection performance. Furthermore, if new technologies yield better algorithms in the future, they can be seamlessly integrated into this system, helping improve the performance of the breast intelligent diagnosis system. This approach raises the design and extension of the breast intelligent diagnosis system to a higher level: the focus is on building an optimal algorithm library rather than on improving any particular detection algorithm.
The structure of the breast medical image intelligent diagnosis system of an embodiment of the present invention is shown in FIG. 4. The system of the present invention for intelligent diagnosis of breast medical images comprises the following five parts: an image input module 44, an intelligent diagnosis module 40, a breast detection model library 41, a medical record archive and pathology database 38, and an image display module 46. A digital or digitized image 42 is conveyed through the image input module 44 to the intelligent diagnosis module 40, which contains a breast lesion detection processor providing control logic, data processing, and data storage functions; it performs spatial transformation, contrast normalization, and appearance normalization on the breast image, performs feature extraction, selection, and classification by invoking the breast detection models, and outputs the automatic detection results to the image display module 46. The breast detection model library 41 contains deep learning models, traditional CAD models, expert decision systems, and breast detection models built with various other pattern recognition and machine learning techniques; the digital images include images acquired from film scanning and from CR or DR devices. The intelligent diagnosis module 40 includes one or more configuration files that store parameter values for use under different conditions, on which basis it provides further image processing and analysis functions to execute the workflow of the deep-learning-based breast diagnosis system shown in FIG. 2. The medical record archive and pathology database 38 can store and query patients' medical records (such as age, family history, and medical history) and pathological feature information of various lesions, so as to assess and give early warning of the patient's risk and future development. At the console 39, the user can enter commands and configure and adjust parameters through the operation interface of the image display module 46.
An interface of an embodiment of the present invention for entering parameters for human-computer interaction is shown in FIG. 5. Common image processing parameters include adjustments of the image data, definitions of initial data, and generation parameters for feature maps. Tab 30 is used to select a suitable set of parameters for input or display. The typical parameters listed in the example of FIG. 5 include initial values of the high-pass filter, such as adjusting the σ, width, and height of the blur filter; parameters for generating localization regions, including the σ, τ, width, and height of the Gabor filters and the size of the filter bank; and parameters for image smoothing, such as smoothing of the localization regions and smoothing of the feature maps. Besides the control parameters, the user can also use tab 30 to view intermediate results (feature maps) and final results.
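For illustration, generating the Gabor localization filter bank that these interface parameters configure might look like the sketch below with scikit-image; treating the interface's τ as the kernel frequency, and the particular values, are assumptions, since the patent defines the parameters only through the interface labels:

```python
import numpy as np
from skimage.filters import gabor_kernel

def build_gabor_bank(sigma: float = 3.0, tau: float = 0.15, bank_size: int = 8):
    """Build bank_size Gabor kernels over evenly spaced orientations;
    sigma controls kernel width/height, tau is taken as the frequency."""
    bank = []
    for k in range(bank_size):
        theta = k * np.pi / bank_size                    # orientation of this kernel
        kern = gabor_kernel(frequency=tau, theta=theta,
                            sigma_x=sigma, sigma_y=sigma)
        bank.append(np.real(kern))
    return bank
```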
Thus, the above embodiments of the present invention implement breast lesion detection and diagnosis 209 and, by constructing the above system, complete the marking/visualization/diagnosis report 210 of the detection results. The diagnostic results include relative risk indicators showing one or more identified breast lesions.

Claims (10)

  1. A multi-model fusion method for automatically detecting lesions in medical images, characterized by comprising the following steps:
    performing lesion-type recognition on a lesion region of interest, wherein the lesion detection model used for recognition is selected from models based on deep learning techniques or on other computer-aided detection techniques, either one such model or a detection model formed by fusing several such models.
  2. The multi-model fusion method for automatically detecting lesions in medical images according to claim 1, characterized in that region-of-interest recognition, image normalization, and suspicious lesion candidate detection are performed in sequence on the original image data to determine the lesion region of interest.
  3. The multi-model fusion method for automatically detecting lesions in medical images according to claim 2, characterized in that the region-of-interest recognition includes tissue contour segmentation, and the suspicious lesion candidate detection includes applying at least one high-pass filtering operation to the recognized region of interest to obtain a feature image.
  4. The multi-model fusion method for automatically detecting lesions in medical images according to claim 2, characterized in that the image normalization includes transforming the original image data into aligned image data according to a predefined standard space, improving the contrast of the aligned image data in a normalized manner, and then transforming it into a standard image appearance space.
  5. The multi-model fusion method for automatically detecting lesions in medical images according to claim 1, characterized in that the deep-learning-based lesion detection model is an integrated convolutional neural network model generated automatically by machine learning, which automatically extracts and selects features by applying automatically constructed filters and passes them forward through at least one convolutional layer and one fully connected layer to determine normal tissue and various lesion tissues.
  6. The multi-model fusion method for automatically detecting lesions in medical images according to claim 1, characterized in that the detection scores obtained in an independent algorithm detection space are mapped into a standard detection space, score-level fusion is performed on the detection space obtained by fusing multiple algorithm detection spaces, the detection scores of the different detection models are computed, and an optimal algorithm set for lesion-type recognition is selected according to the scores; the optimal algorithm set includes a combined model in which features learned automatically by a CNN model and features obtained by other computer-aided detection models are fused at the feature level before feature screening and lesion recognition; the independent algorithm detection space is selected from at least one of the lesion detection models built on CNN models, optimal feature sets, expert decision systems, and various other pattern recognition and machine learning techniques; fusion at the score level provides plug-and-play capability, so that a newly added detection algorithm can be fused at the score level.
  7. A multi-model fusion system for automatically detecting lesions in medical images, characterized in that the system comprises an intelligent diagnosis module and a detection model library; the intelligent diagnosis module comprises a lesion detection processor and one or more configuration files for setting the parameters of the lesion detection processor; the lesion detection processor invokes the detection model library to perform lesion-type recognition on the region of interest, wherein invoking means selecting a lesion detection model that is either one of the models based on deep learning techniques or on manual guidance, or a lesion detection model formed by fusing several such models.
  8. The multi-model fusion system for automatically detecting lesions in medical images according to claim 7, characterized in that the system further comprises an image input module that acquires digital or digitized medical images and performs region-of-interest recognition on the images.
  9. The multi-model fusion system for automatically detecting lesions in medical images according to claim 7, characterized in that the system further comprises an image display module that includes a human-computer interaction interface for setting parameters and for displaying intermediate and final results of lesion recognition.
  10. The multi-model fusion system for automatically detecting lesions in medical images according to claim 7, characterized in that the system further comprises a medical record archive for storing and querying medical records and a pathology database containing pathological feature information of various lesions.
PCT/CN2017/103529 2016-12-31 2017-09-26 一种多模型融合自动检测医学图像中病变的系统及方法 WO2018120942A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611266397.4A CN106682435B (zh) 2016-12-31 2016-12-31 一种多模型融合自动检测医学图像中病变的系统及方法
CN201611266397.4 2016-12-31

Publications (1)

Publication Number Publication Date
WO2018120942A1 true WO2018120942A1 (zh) 2018-07-05

Family

ID=58850199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/103529 WO2018120942A1 (zh) 2016-12-31 2017-09-26 一种多模型融合自动检测医学图像中病变的系统及方法

Country Status (2)

Country Link
CN (1) CN106682435B (zh)
WO (1) WO2018120942A1 (zh)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109473168A (zh) * 2018-10-09 2019-03-15 五邑大学 一种医学影像机器人及其控制、医学影像识别方法
CN109658377A (zh) * 2018-10-31 2019-04-19 泰格麦迪(北京)医疗科技有限公司 一种基于多维度信息融合的乳腺mri病变区域检测方法
CN109920538A (zh) * 2019-03-07 2019-06-21 中南大学 一种基于数据增强的零样本学习方法
CN110265141A (zh) * 2019-05-13 2019-09-20 上海大学 一种肝脏肿瘤ct影像计算机辅助诊断方法
CN110491511A (zh) * 2019-07-24 2019-11-22 广州知汇云科技有限公司 一种基于围术期危险预警的多模型互补增强机器学习方法
CN110728310A (zh) * 2019-09-27 2020-01-24 聚时科技(上海)有限公司 一种基于超参数优化的目标检测模型融合方法及融合系统
CN110827276A (zh) * 2019-11-25 2020-02-21 河南科技大学 基于深度学习的血液透析器空心纤维通透状态识别方法
CN110889835A (zh) * 2019-11-21 2020-03-17 东华大学 一种基于双视图的钼靶影像语义标签预测方法
CN111105393A (zh) * 2019-11-25 2020-05-05 长安大学 一种基于深度学习的葡萄病虫害识别方法及装置
CN111191735A (zh) * 2020-01-04 2020-05-22 西安电子科技大学 基于数据差异和多尺度特征的卷积神经网络影像分类方法
CN111369532A (zh) * 2020-03-05 2020-07-03 北京深睿博联科技有限责任公司 乳腺x射线影像的处理方法和装置
CN111524579A (zh) * 2020-04-27 2020-08-11 北京百度网讯科技有限公司 肺功能曲线检测方法、装置、设备以及存储介质
CN111755118A (zh) * 2020-03-16 2020-10-09 腾讯科技(深圳)有限公司 医疗信息处理方法、装置、电子设备及存储介质
CN111815609A (zh) * 2020-07-13 2020-10-23 北京小白世纪网络科技有限公司 基于情境感知及多模型融合的病理图像分类方法及系统
CN111855500A (zh) * 2020-07-30 2020-10-30 华北电力大学(保定) 一种基于深度学习的复合绝缘子老化程度智能检测方法
CN111899229A (zh) * 2020-07-14 2020-11-06 武汉楚精灵医疗科技有限公司 一种基于深度学习多模型融合技术的胃早癌辅助诊断方法
CN112071421A (zh) * 2020-09-01 2020-12-11 深圳高性能医疗器械国家研究院有限公司 一种深度学习预估方法及其应用
CN112489788A (zh) * 2020-11-25 2021-03-12 武汉大学中南医院 一种用于癌症诊断的多模态影像分析方法及系统
CN112652032A (zh) * 2021-01-14 2021-04-13 深圳科亚医疗科技有限公司 器官的建模方法、图像分类装置和存储介质
CN112768041A (zh) * 2021-01-07 2021-05-07 湖北公众信息产业有限责任公司 医疗云管平台
WO2021097442A1 (en) * 2019-11-14 2021-05-20 Qualcomm Incorporated Guided training of machine learning models with convolution layer feature data fusion
CN113239972A (zh) * 2021-04-19 2021-08-10 温州医科大学 一种面向医学影像的人工智能辅助诊断模型构建系统
CN113269747A (zh) * 2021-05-24 2021-08-17 浙江大学医学院附属第一医院 一种基于深度学习的病理图片肝癌扩散检测方法及系统
CN113539471A (zh) * 2021-03-26 2021-10-22 内蒙古卫数数据科技有限公司 一种基于常规检验数据的乳腺增生辅助诊断方法及系统
CN114066828A (zh) * 2021-11-03 2022-02-18 深圳市创科自动化控制技术有限公司 一种基于多功能底层算法的图像处理方法及系统
EP3975196A4 (en) * 2019-05-22 2022-10-19 Tencent Technology (Shenzhen) Company Limited MEDICAL IMAGE PROCESSING METHOD AND APPARATUS, MEDICAL ELECTRONIC DEVICE AND STORAGE MEDIA
WO2024123311A1 (en) * 2022-12-06 2024-06-13 Google Llc Mammography device outputs for broad system compatibility
US12026877B2 (en) 2021-01-14 2024-07-02 Shenzhen Keya Medical Technology Corporation Device and method for pneumonia detection based on deep learning

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106682435B (zh) * 2016-12-31 2021-01-29 西安百利信息科技有限公司 一种多模型融合自动检测医学图像中病变的系统及方法
CN107239666B (zh) * 2017-06-09 2020-10-09 孟群 一种对医疗影像数据进行脱敏处理的方法及系统
CN107274406A (zh) * 2017-08-07 2017-10-20 北京深睿博联科技有限责任公司 一种检测敏感区域的方法及装置
EP3451210B1 (en) 2017-08-31 2021-03-03 Siemens Healthcare GmbH Method for comparing reference values in medical imaging processes, system comprising a local medical imaging device, computer program product and computer-readable program
CN107563123A (zh) * 2017-09-27 2018-01-09 百度在线网络技术(北京)有限公司 用于标注医学图像的方法和装置
CN107665491B (zh) * 2017-10-10 2021-04-09 清华大学 病理图像的识别方法及系统
DE102017223283A1 (de) * 2017-12-19 2019-06-19 Robert Bosch Gmbh Verfahren, Vorrichtung und Computerprogramm zum Ansteuern eines Aktors und zum Ermitteln einer Anomalie
CN108364006B (zh) * 2018-01-17 2022-03-08 超凡影像科技股份有限公司 基于多模式深度学习的医学图像分类装置及其构建方法
CN108537773B (zh) * 2018-02-11 2022-06-17 中国科学院苏州生物医学工程技术研究所 针对胰腺癌与胰腺炎性疾病进行智能辅助鉴别的方法
US10878569B2 (en) 2018-03-28 2020-12-29 International Business Machines Corporation Systems and methods for automatic detection of an indication of abnormality in an anatomical image
CN108550150B (zh) * 2018-04-17 2020-11-13 上海联影医疗科技有限公司 乳腺密度的获取方法、设备及可读存储介质
CN108538390A (zh) * 2018-04-28 2018-09-14 中南大学 一种面向医学数据的增量式处理方法
CN108898160B (zh) * 2018-06-01 2022-04-08 中国人民解放军战略支援部队信息工程大学 基于cnn和影像组学特征融合的乳腺癌组织病理学分级方法
WO2019235335A1 (ja) * 2018-06-05 2019-12-12 住友化学株式会社 診断支援システム、診断支援方法及び診断支援プログラム
CN108899087A (zh) * 2018-06-22 2018-11-27 中山仰视科技有限公司 基于深度学习的x光片智能诊断方法
CN109003679B (zh) * 2018-06-28 2021-06-08 众安信息技术服务有限公司 一种脑血管出血与缺血预测方法及装置
CN108985302A (zh) * 2018-07-13 2018-12-11 东软集团股份有限公司 一种皮肤镜图像处理方法、装置及设备
CN108858201A (zh) * 2018-08-15 2018-11-23 深圳市烽焌信息科技有限公司 一种用于看护儿童的机器人及存储介质
CN110008971B (zh) * 2018-08-23 2022-08-09 腾讯科技(深圳)有限公司 图像处理方法、装置、计算机可读存储介质及计算机设备
CN109409413B (zh) * 2018-09-28 2022-09-16 贵州大学 X射线乳腺肿块影像自动分类方法
CN109363697B (zh) * 2018-10-16 2020-10-16 杭州依图医疗技术有限公司 一种乳腺影像病灶识别的方法及装置
CN109447966A (zh) * 2018-10-26 2019-03-08 科大讯飞股份有限公司 医学图像的病灶定位识别方法、装置、设备及存储介质
US11062459B2 (en) * 2019-02-07 2021-07-13 Vysioneer INC. Method and apparatus for automated target and tissue segmentation using multi-modal imaging and ensemble machine learning models
CN110111344B (zh) * 2019-05-13 2021-11-16 广州锟元方青医疗科技有限公司 病理切片图像分级方法、装置、计算机设备和存储介质
CN110276411B (zh) * 2019-06-28 2022-11-18 腾讯科技(深圳)有限公司 图像分类方法、装置、设备、存储介质和医疗电子设备
US11615508B2 (en) * 2020-02-07 2023-03-28 GE Precision Healthcare LLC Systems and methods for consistent presentation of medical images using deep neural networks
CN111785375B (zh) * 2020-06-18 2023-03-24 武汉互创联合科技有限公司 胚胎分裂过程分析及妊娠率智能预测方法及系统
CN111783854B (zh) * 2020-06-18 2022-06-07 武汉互创联合科技有限公司 胚胎妊娠状态智能预测方法及系统
CN112767346B (zh) * 2021-01-18 2021-10-29 北京医准智能科技有限公司 基于多影像的全卷积单阶段乳腺图像病灶检测方法及装置
CN113420655A (zh) * 2021-06-22 2021-09-21 中山仰视科技有限公司 基于能量模型的医学影像阴阳性筛查方法、系统、及设备
CN113421276B (zh) * 2021-07-02 2023-07-21 深圳大学 一种图像处理方法、装置及存储介质
CN116958018A (zh) * 2022-08-31 2023-10-27 腾讯科技(深圳)有限公司 针对病理图像的病变区域确定方法、模型训练方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473569A (zh) * 2013-09-22 2013-12-25 江苏美伦影像系统有限公司 基于svm的医学影像分类方法
CN103488977A (zh) * 2013-09-22 2014-01-01 江苏美伦影像系统有限公司 基于svm的医学影像管理系统
CN105574859A (zh) * 2015-12-14 2016-05-11 中国科学院深圳先进技术研究院 一种基于ct图像的肝脏肿瘤分割方法及装置
CN105701351A (zh) * 2016-01-15 2016-06-22 上海市第十人民医院 基于人工神经网络模型超声造影特征自动识别系统及方法
CN106682435A (zh) * 2016-12-31 2017-05-17 西安百利信息科技有限公司 一种多模型融合自动检测医学图像中病变的系统及方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104834943A (zh) * 2015-05-25 2015-08-12 电子科技大学 一种基于深度学习的脑肿瘤分类方法
CN106203488B (zh) * 2016-07-01 2019-09-13 福州大学 一种基于受限玻尔兹曼机的乳腺图像特征融合方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103473569A (zh) * 2013-09-22 2013-12-25 江苏美伦影像系统有限公司 基于svm的医学影像分类方法
CN103488977A (zh) * 2013-09-22 2014-01-01 江苏美伦影像系统有限公司 基于svm的医学影像管理系统
CN105574859A (zh) * 2015-12-14 2016-05-11 中国科学院深圳先进技术研究院 一种基于ct图像的肝脏肿瘤分割方法及装置
CN105701351A (zh) * 2016-01-15 2016-06-22 上海市第十人民医院 基于人工神经网络模型超声造影特征自动识别系统及方法
CN106682435A (zh) * 2016-12-31 2017-05-17 西安百利信息科技有限公司 一种多模型融合自动检测医学图像中病变的系统及方法

Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109473168A (zh) * 2018-10-09 2019-03-15 五邑大学 一种医学影像机器人及其控制、医学影像识别方法
CN109658377A (zh) * 2018-10-31 2019-04-19 泰格麦迪(北京)医疗科技有限公司 一种基于多维度信息融合的乳腺mri病变区域检测方法
CN109658377B (zh) * 2018-10-31 2023-10-10 泰格麦迪(北京)医疗科技有限公司 一种基于多维度信息融合的乳腺mri病变区域检测方法
CN109920538A (zh) * 2019-03-07 2019-06-21 中南大学 一种基于数据增强的零样本学习方法
CN109920538B (zh) * 2019-03-07 2022-11-25 中南大学 一种基于数据增强的零样本学习方法
CN110265141A (zh) * 2019-05-13 2019-09-20 上海大学 一种肝脏肿瘤ct影像计算机辅助诊断方法
CN110265141B (zh) * 2019-05-13 2023-04-18 上海大学 一种肝脏肿瘤ct影像计算机辅助诊断方法
US11984225B2 (en) 2019-05-22 2024-05-14 Tencent Technology (Shenzhen) Company Limited Medical image processing method and apparatus, electronic medical device, and storage medium
EP3975196A4 (en) * 2019-05-22 2022-10-19 Tencent Technology (Shenzhen) Company Limited MEDICAL IMAGE PROCESSING METHOD AND APPARATUS, MEDICAL ELECTRONIC DEVICE AND STORAGE MEDIA
CN110491511A (zh) * 2019-07-24 2019-11-22 广州知汇云科技有限公司 一种基于围术期危险预警的多模型互补增强机器学习方法
CN110728310A (zh) * 2019-09-27 2020-01-24 聚时科技(上海)有限公司 一种基于超参数优化的目标检测模型融合方法及融合系统
CN110728310B (zh) * 2019-09-27 2023-09-01 聚时科技(上海)有限公司 一种基于超参数优化的目标检测模型融合方法及融合系统
WO2021097442A1 (en) * 2019-11-14 2021-05-20 Qualcomm Incorporated Guided training of machine learning models with convolution layer feature data fusion
CN110889835A (zh) * 2019-11-21 2020-03-17 东华大学 一种基于双视图的钼靶影像语义标签预测方法
CN110889835B (zh) * 2019-11-21 2023-06-23 东华大学 一种基于双视图的钼靶影像语义标签预测方法
CN110827276B (zh) * 2019-11-25 2023-03-24 河南科技大学 基于深度学习的血液透析器空心纤维通透状态识别方法
CN110827276A (zh) * 2019-11-25 2020-02-21 河南科技大学 基于深度学习的血液透析器空心纤维通透状态识别方法
CN111105393A (zh) * 2019-11-25 2020-05-05 长安大学 一种基于深度学习的葡萄病虫害识别方法及装置
CN111105393B (zh) * 2019-11-25 2023-04-18 长安大学 一种基于深度学习的葡萄病虫害识别方法及装置
CN111191735B (zh) * 2020-01-04 2023-03-24 西安电子科技大学 基于数据差异和多尺度特征的卷积神经网络影像分类方法
CN111191735A (zh) * 2020-01-04 2020-05-22 西安电子科技大学 基于数据差异和多尺度特征的卷积神经网络影像分类方法
CN111369532A (zh) * 2020-03-05 2020-07-03 北京深睿博联科技有限责任公司 乳腺x射线影像的处理方法和装置
CN111755118A (zh) * 2020-03-16 2020-10-09 腾讯科技(深圳)有限公司 医疗信息处理方法、装置、电子设备及存储介质
CN111755118B (zh) * 2020-03-16 2024-03-08 腾讯科技(深圳)有限公司 医疗信息处理方法、装置、电子设备及存储介质
CN111524579A (zh) * 2020-04-27 2020-08-11 北京百度网讯科技有限公司 肺功能曲线检测方法、装置、设备以及存储介质
CN111524579B (zh) * 2020-04-27 2023-08-29 北京百度网讯科技有限公司 肺功能曲线检测方法、装置、设备以及存储介质
CN111815609A (zh) * 2020-07-13 2020-10-23 北京小白世纪网络科技有限公司 基于情境感知及多模型融合的病理图像分类方法及系统
CN111815609B (zh) * 2020-07-13 2024-03-01 北京小白世纪网络科技有限公司 基于情境感知及多模型融合的病理图像分类方法及系统
CN111899229A (zh) * 2020-07-14 2020-11-06 武汉楚精灵医疗科技有限公司 一种基于深度学习多模型融合技术的胃早癌辅助诊断方法
CN111855500A (zh) * 2020-07-30 2020-10-30 华北电力大学(保定) 一种基于深度学习的复合绝缘子老化程度智能检测方法
CN112071421A (zh) * 2020-09-01 2020-12-11 深圳高性能医疗器械国家研究院有限公司 一种深度学习预估方法及其应用
CN112489788B (zh) * 2020-11-25 2024-06-07 武汉大学中南医院 一种用于癌症诊断的多模态影像分析方法及系统
CN112489788A (zh) * 2020-11-25 2021-03-12 武汉大学中南医院 一种用于癌症诊断的多模态影像分析方法及系统
CN112768041A (zh) * 2021-01-07 2021-05-07 湖北公众信息产业有限责任公司 医疗云管平台
CN112652032A (zh) * 2021-01-14 2021-04-13 深圳科亚医疗科技有限公司 器官的建模方法、图像分类装置和存储介质
US12026877B2 (en) 2021-01-14 2024-07-02 Shenzhen Keya Medical Technology Corporation Device and method for pneumonia detection based on deep learning
CN113539471A (zh) * 2021-03-26 2021-10-22 内蒙古卫数数据科技有限公司 一种基于常规检验数据的乳腺增生辅助诊断方法及系统
CN113239972A (zh) * 2021-04-19 2021-08-10 温州医科大学 一种面向医学影像的人工智能辅助诊断模型构建系统
CN113269747B (zh) * 2021-05-24 2023-06-13 浙江大学医学院附属第一医院 一种基于深度学习的病理图片肝癌扩散检测方法及系统
CN113269747A (zh) * 2021-05-24 2021-08-17 浙江大学医学院附属第一医院 一种基于深度学习的病理图片肝癌扩散检测方法及系统
CN114066828A (zh) * 2021-11-03 2022-02-18 深圳市创科自动化控制技术有限公司 一种基于多功能底层算法的图像处理方法及系统
WO2024123311A1 (en) * 2022-12-06 2024-06-13 Google Llc Mammography device outputs for broad system compatibility

Also Published As

Publication number Publication date
CN106682435B (zh) 2021-01-29
CN106682435A (zh) 2017-05-17

Similar Documents

Publication Publication Date Title
WO2018120942A1 (zh) 一种多模型融合自动检测医学图像中病变的系统及方法
CN110060774B (zh) 一种基于生成式对抗网络的甲状腺结节识别方法
Kim et al. Automatic detection and segmentation of lumbar vertebrae from X-ray images for compression fracture evaluation
US20200160997A1 (en) Method for detection and diagnosis of lung and pancreatic cancers from imaging scans
US20220230302A1 (en) Three-dimensional automatic location system for epileptogenic focus based on deep learning
Sridar et al. Decision fusion-based fetal ultrasound image plane classification using convolutional neural networks
Murakami et al. Automatic identification of bone erosions in rheumatoid arthritis from hand radiographs based on deep convolutional neural network
CN111028206A (zh) 一种基于深度学习前列腺癌自动检测和分类系统
Nurmaini et al. Accurate detection of septal defects with fetal ultrasonography images using deep learning-based multiclass instance segmentation
Hussein et al. Fully automatic segmentation of gynaecological abnormality using a new viola–jones model
US20120099771A1 (en) Computer aided detection of architectural distortion in mammography
KR20230059799A (ko) 병변 검출을 위해 공동 훈련을 이용하는 연결형 머신 러닝 모델
US20220012875A1 (en) Systems and Methods for Medical Image Diagnosis Using Machine Learning
Lan et al. Run: Residual u-net for computer-aided detection of pulmonary nodules without candidate selection
WO2022110525A1 (zh) 一种癌变区域综合检测装置及方法
JP2023517058A (ja) 画像処理に基づく腫瘍の自動検出
Yue et al. Automatic acetowhite lesion segmentation via specular reflection removal and deep attention network
CN112767374A (zh) 基于mri的阿尔茨海默症病灶区域语义分割算法
CN116664911A (zh) 一种基于可解释深度学习的乳腺肿瘤图像分类方法
Hu et al. A multi-instance networks with multiple views for classification of mammograms
Nurmaini et al. An improved semantic segmentation with region proposal network for cardiac defect interpretation
Iqbal et al. AMIAC: adaptive medical image analyzes and classification, a robust self-learning framework
Kumar et al. A Novel Approach for Breast Cancer Detection by Mammograms
CN114332910A (zh) 一种面向远红外图像的相似特征计算的人体部位分割方法
Arnold et al. Indistinct frame detection in colonoscopy videos

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17888663

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13/11/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17888663

Country of ref document: EP

Kind code of ref document: A1