WO2020019740A1 - Left ventricular myocardial segmentation method, device and computer-readable storage medium - Google Patents

Left ventricular myocardial segmentation method, device and computer-readable storage medium

Info

Publication number
WO2020019740A1
WO2020019740A1 (PCT/CN2019/078892; CN2019078892W)
Authority
WO
WIPO (PCT)
Prior art keywords
feature information
extracting
feature
convolution
convolution processing
Prior art date
Application number
PCT/CN2019/078892
Other languages
English (en)
French (fr)
Inventor
郑海荣
刘新
胡战利
吴垠
梁栋
杨永峰
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2020019740A1 publication Critical patent/WO2020019740A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10132 Ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30048 Heart; Cardiac

Definitions

  • the present application relates to the field of biomedicine, and in particular, to a left ventricular myocardial segmentation method, device, and computer-readable storage medium.
  • Left ventricular myocardial segmentation is the main step of cardiac function analysis, and cardiac function analysis parameters such as left ventricular myocardial wall thickness and ejection volume are based on accurate segmentation of left ventricular myocardium.
  • the gray level of the left ventricular myocardium is similar to that of the background, the papillary muscles and trabeculae inside the left ventricle are composed of the same tissue as the myocardium, and artifacts are also present; left ventricular myocardial segmentation is therefore a difficult and important task.
  • the method of left ventricular myocardial segmentation mainly uses magnetic resonance imaging (MRI) technology to obtain cardiac images, and specially trained medical staff or medical experts perform manual left ventricular myocardial segmentation on the cardiac images.
  • Manual segmentation requires expert knowledge and experience and is time consuming.
  • the present application provides a left ventricular myocardial segmentation method, device, and computer-readable storage medium, which can be used to improve the efficiency of left ventricular myocardial segmentation.
  • a first aspect of the present application provides a left ventricular myocardial segmentation method, including:
  • if the iterative process of extracting the first feature information has not been completed N times, the output object of the most recent down-sampling convolution processing is subjected to down-sampling convolution processing based on the currently extracted first feature information, and then the step of extracting the first feature information is performed iteratively;
  • if the iterative process of extracting the first feature information has been completed N times, the output object of the Nth down-sampling convolution processing is subjected to up-sampling convolution processing based on the first feature information extracted the Nth time;
  • left ventricular myocardial segmentation is performed on the output object of the most recent up-sampling convolution processing based on a pre-trained classifier and the second feature information extracted the Nth time;
  • where N is not less than 2.
  • a second aspect of the present application provides a left ventricular myocardial segmentation device, including:
  • an acquisition unit, a first feature extraction unit, a down-sampling convolution processing unit, a second feature extraction unit, an up-sampling convolution processing unit, and a segmentation unit;
  • the acquisition unit is configured to: acquire a heart image;
  • the down-sampling convolution processing unit is configured to: trigger the first feature extraction unit after performing down-sampling convolution processing on the heart image; and, when the iterative process of extracting the first feature information has not been completed N times, perform down-sampling convolution processing on the most recent output object of the down-sampling convolution processing unit based on the first feature information currently extracted by the first feature extraction unit, and then trigger the first feature extraction unit;
  • the first feature extraction unit is configured to: extract first feature information, where the first feature information is feature information of the output object of the most recent down-sampling convolution processing;
  • the up-sampling convolution processing unit is configured to: when the iterative process of extracting the first feature information has been completed N times, perform up-sampling convolution processing on the object output by the down-sampling convolution processing unit the Nth time, based on the first feature information extracted by the first feature extraction unit the Nth time, and then trigger the second feature extraction unit; and, when the iterative process of extracting the second feature information has not been completed N times, perform up-sampling convolution processing on the object most recently output by the up-sampling convolution processing unit, based on the second feature information currently extracted by the second feature extraction unit, and then trigger the second feature extraction unit;
  • the second feature extraction unit is configured to: extract second feature information, where the second feature information is feature information of the most recent output object of the up-sampling convolution processing unit;
  • the segmentation unit is configured to: when the iterative process of extracting the second feature information has been completed N times, perform left ventricular myocardial segmentation on the output object of the most recent up-sampling convolution processing based on a pre-trained classifier and the second feature information extracted the Nth time;
  • where N is not less than 2.
  • a third aspect of the present application provides a left ventricular myocardial segmentation device including a memory, a processor, and a computer program stored on the memory and executable on the processor.
  • when the processor executes the computer program, the left ventricular myocardial segmentation method provided by the first aspect of the present application is implemented.
  • a fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the left ventricular myocardial segmentation method provided by the first aspect of the present application is implemented.
  • the scheme of the present application realizes automatic segmentation of the left ventricular myocardium in the heart image by extracting feature information (such as second feature information) of the heart image and inputting a pre-trained classifier for recognition.
  • the left ventricular myocardium is automatically segmented from the heart image by a machine. Therefore, compared with the traditional manual segmentation method by medical staff or medical experts, the proposed solution can effectively improve the efficiency of left ventricular myocardial segmentation.
  • the second feature information input into the classifier in the solution of the present application is obtained through multiple rounds of feature extraction, down-sampling convolution processing, and up-sampling convolution processing; therefore, the second feature information can better characterize deeper features in the heart image, which makes the result of left ventricular myocardial segmentation more accurate.
  • 1-a is a schematic flowchart of an embodiment of a left ventricular myocardial segmentation method provided by the present application
  • FIG. 1-b is a schematic diagram of a Dense network structure provided by this application.
  • FIG. 2 is a schematic diagram of a network structure for implementing a left ventricular myocardial segmentation method in an application scenario provided by this application;
  • FIG. 3 is a schematic diagram of segmentation results obtained by segmenting one of the heart images of a test patient in the application scenario shown in FIG. 2;
  • FIG. 4 is a schematic diagram of a comparison between the segmentation result of FIG. 3 and the heart labeling data
  • FIG. 5 is a schematic diagram of a linear analysis of the segmentation result and the heart annotation data of FIG. 3;
  • FIG. 6 is a schematic structural diagram of an embodiment of a left ventricular myocardial segmentation device provided by the present application.
  • FIG. 7 is a schematic structural diagram of another embodiment of a left ventricular myocardial segmentation device provided by the present application.
  • a left ventricular myocardial segmentation method in the embodiment of the present application includes:
  • Step 101 Obtain a heart image
  • step 101 may be represented as: acquiring a heart image by using Magnetic Resonance Imaging (MRI) technology, and the acquired heart image at this time is an MRI image.
  • MRI technology is a technology that obtains an image of the internal structure of the human body through a magnetic field. It has the advantage of non-trauma, so the patient can be well protected during the examination.
  • a heart image of a human body can be acquired by MRI technology.
  • a cardiac image may also be obtained by an ultrasound diagnostic method (for example, B-ultrasound).
  • a heart image to be segmented may also be obtained (for example, imported) from an existing heart image database, which is not limited herein.
  • Step 102 Perform down-sampling convolution processing on the heart image
  • step 102 the above-mentioned heart image (the original heart image or the normalized heart image) is subjected to down-sampling convolution processing.
  • step 102 includes: extracting feature information in the heart image, and performing down-sampling convolution processing on the heart image based on the extracted feature information.
  • optionally, the feature information in the above-mentioned heart image is extracted through one convolution operation, and the formula applied in the convolution can be expressed as $S(i, j) = (I * K)(i, j) = \sum_{m}\sum_{n} I(i + m, j + n)\, K(m, n)$, where i, j are the pixel positions of the image, I and K respectively denote the image and the convolution kernel, and m, n are the width and height of the convolution kernel, respectively.
  • the feature information in the above-mentioned heart image may also be extracted based on a Dense network or other image feature extraction algorithms, which is not limited herein.
  • the method may further include: normalizing the image size of the acquired heart image to obtain a heart image of a preset size.
  • step 102 may be specifically embodied as: performing down-sampling convolution processing on the heart image of the preset size.
  • the preset size can be set to 128 * 128, for example.
  • the size can also be preset to other sizes, which is not limited here.
  • Step 103 Extract first feature information
  • the first feature information is the feature information of the output object of the latest down-sampling convolution process.
  • the feature information (that is, the first feature information) in the output object of the latest down-sampling convolution processing may be extracted based on the image feature extraction technology.
  • the first feature information is extracted based on the Dense network.
  • the structure diagram of the Dense network can be shown in Figure 1-b.
  • here $x_l = H_l([x_0, x_1, \ldots, x_{l-1}])$, where $[x_0, x_1, \ldots, x_{l-1}]$ represents the concatenation of the outputs of layers 0 to l-1, and $H_l$ represents a non-linear transformation.
  • the first feature information may also be extracted based on other neural networks or image feature extraction algorithms, which is not limited here.
  • Step 104 If the iterative process of extracting the first feature information is not completed N times, perform down-sampling convolution processing on the output object of the latest down-sampling convolution processing based on the currently extracted first feature information;
  • in step 104, down-sampling convolution processing is performed on the output object of the most recent down-sampling convolution processing based on the first feature information extracted in step 103; the resolution of the image is reduced so that deeper feature information in the image can be extracted in subsequent steps.
  • the output object of the most recent down-sampling convolution processing and the currently extracted first feature information may be input into a down-sampling layer (which can be understood as a pooling layer) for down-sampling convolution processing; the output of the down-sampling layer is the output object of this down-sampling convolution processing.
  • after performing down-sampling convolution processing on the output object of the most recent down-sampling convolution processing based on the currently extracted first feature information, the method returns to step 103 so that step 103 is performed iteratively; through this iterative process, deep feature information in the heart image can be extracted step by step.
  • the N is a preset value not less than 2.
  • N is 4.
  • Step 105 If the iterative process of extracting the first feature information is completed N times, based on the first feature information extracted the Nth time, perform an upsampling convolution process on the output object of the latest downsampling convolution process;
  • since the image is compressed by the down-sampling convolution processing during the iterative process of extracting the first feature information, in the embodiment of the present application, restoration of the compressed image begins after the iterative process of extracting the first feature information has been completed N times.
  • the process of this restoration can be understood as the reverse operation of the aforementioned compression process.
  • specifically, the output object of the most recent (i.e., the Nth) down-sampling convolution processing is subjected to up-sampling convolution processing based on the first feature information extracted the Nth time, so as to gradually restore the resolution of the image.
  • in step 105, the output object of the most recent down-sampling convolution processing and the first feature information extracted the Nth time may be input into an up-sampling layer for up-sampling convolution processing; the output of the up-sampling layer is the output object of this up-sampling convolution processing.
  • Step 106 Extract second feature information
  • the second feature information is the feature information of the output object of the latest up-sampling convolution process.
  • the second feature information is extracted based on the Dense network.
  • specifically, for a description of the Dense network, reference may be made to the description in step 103, and details are not repeated here.
  • the second feature information may also be extracted based on other neural networks or image feature extraction algorithms, which is not limited here.
  • Step 107 If the iterative process of extracting the second feature information is not completed N times, perform upsampling convolution processing on the output object of the latest upsampling convolution processing based on the currently extracted second feature information, and then return to step 106;
  • when the iterative process of extracting the second feature information has not been completed N times (abbreviated as the incomplete iterative process in FIG. 1-a), it indicates that the currently compressed heart image still needs to be restored, and step 107 is performed at this time; through this iterative process, the heart image can be restored step by step.
  • in step 107, the currently extracted second feature information and the output object of the most recent up-sampling convolution processing may be input into the up-sampling layer for up-sampling convolution processing; the output of the up-sampling layer is the output object of this up-sampling convolution processing.
  • Step 108: If the iterative process of extracting the second feature information has been completed N times, perform left ventricular myocardial segmentation on the output object of the most recent up-sampling convolution processing based on the pre-trained classifier and the second feature information extracted the Nth time;
  • when the iterative process of extracting the second feature information has been completed N times, it indicates that the feature extraction process for the above-mentioned heart image has been completed.
  • at this time, the second feature information extracted the Nth time and the output object of the most recent up-sampling convolution processing (that is, the restored heart image) are input into a pre-trained classifier (such as a softmax classifier) to segment the left ventricular myocardium, that is, the left ventricular myocardium and the background information in the output object are separated.
  • specifically, based on the second feature information extracted the Nth time, the output object of the most recent up-sampling convolution processing, and the above classifier, each pixel in the output object may be classified as foreground information (such as left ventricular myocardium) or background information, thereby separating the left ventricular myocardium from the background in the heart image.
  • it should be noted that the network used in the embodiment of the present application to automatically segment the left ventricular myocardium (hereinafter collectively referred to as the segmentation network; for example, the segmentation network may be composed of the aforementioned Dense networks, down-sampling layers, up-sampling layers, classifier, and so on) can be obtained by pre-training.
  • in practical applications, the segmentation network can be trained by acquiring multiple heart images used to train the segmentation network, and the segmentation network can be optimized based on the Adam optimization algorithm.
  • the process of optimizing the segmented network based on the Adam optimization algorithm can be implemented by referring to the existing technology, and is not repeated here.
  • further, the accuracy of the above segmentation network can also be evaluated using the Dice coefficient, whose formula can be expressed as $D = \frac{2\sum_{i} p_i g_i}{\sum_{i} p_i + \sum_{i} g_i}$.
  • D represents the Dice coefficient, which measures the segmentation result by comparing the similarity of the segmented areas, $p_i$ represents the segmentation result, and $g_i$ represents the annotated segmentation result.
  • the Dice coefficient is in the range of 0 to 1. A larger value indicates a higher accuracy of the segmentation result.
  • the first feature information and the second feature information may be extracted based on the Dense network.
  • in this case, the following constraints can be set: in the above iterative process of extracting the first feature information, the number of convolution kernels of the Dense network used for the (n+1)th extraction of the first feature information is twice the number of convolution kernels of the Dense network used for the nth extraction of the first feature information; in the above iterative process of extracting the second feature information, the number of convolution kernels of the Dense network used for the (n+1)th extraction of the second feature information is one half of the number of convolution kernels of the Dense network used for the nth extraction of the second feature information; and the number of convolution kernels of the Dense network used for the nth extraction of the first feature information is equal to the number of convolution kernels of the Dense network used for the first extraction of the second feature information, where n ∈ [1, N).
  • feature information (such as second feature information) of a heart image is extracted and a pre-trained classifier is input for recognition, so as to realize automatic segmentation of the left ventricular myocardium in the heart image.
  • the solution of the present application can effectively improve the efficiency of left ventricular myocardial segmentation compared with the traditional method of manual segmentation by medical staff or medical experts; on the other hand, since the second feature information input into the classifier in the solution of this application is obtained through multiple rounds of feature extraction, down-sampling convolution processing and up-sampling convolution processing, the second feature information can better characterize deeper features in the heart image, so that the result of left ventricular myocardial segmentation is more accurate.
  • the above left ventricular myocardial segmentation method is described in a specific application scenario below.
  • the schematic diagram of the network structure in this application scenario can be shown in Figure 2.
  • the segmentation network in this application scenario includes two parts: a compression/feature-extraction part and a decompression/image-restoration part; the two parts are completely symmetrical, so as to ensure that the segmented image has the same size as the original image.
  • after being processed by a Dense network (refer to the structure and related description of the Dense network in FIG. 1-b), the heart image is input into the compression/feature-extraction part for processing.
  • the compression/feature-extraction part and the decompression/image-restoration part each include four segments of processing.
  • for the compression/feature-extraction part, each segment is composed of a down-sampling layer and a Dense network (refer to the structure and related description of the Dense network in FIG. 1-b), so as to gradually extract deeper feature information of the image.
  • the recovery part of the decompressed image also includes four segments of processing (that is, the aforementioned N is taken as 4), and each segment is composed of an upsampling layer and a Dense network to gradually restore the image.
  • in the compression/feature-extraction part, the sizes of the convolution kernels of the Dense networks used in the four segments of processing are 5*5, 5*5, 5*5 and 5*5, respectively, and the numbers of convolution kernels are 32, 64, 128 and 256, respectively; the size of the image first input into a Dense network in the compression/feature-extraction part is 128*128, and the size of the image output by the compression/feature-extraction part is 8*8.
  • correspondingly, in the decompression/image-restoration part, the sizes of the convolution kernels of the Dense networks used in the four segments of processing are 5*5, 5*5, 5*5 and 5*5, respectively, and the numbers of convolution kernels are 256, 128, 64 and 32, respectively; the size of the image first input into a Dense network in the decompression/image-restoration part is 8*8, and the size of the image output by the decompression/image-restoration part is 128*128.
  • after the decompression/image-restoration part finishes processing, its output is input into the softmax classifier, and the softmax classifier performs left ventricular myocardial segmentation on the heart image, that is, the left ventricular myocardium in the image is separated from the background (that is, the segmentation result is output).
  • FIG. 3 is a schematic diagram of the segmentation result obtained by segmenting one of the heart images of a test patient in the application scenario shown in FIG. 2; as can be seen from FIG. 3, the left ventricular myocardium can be automatically segmented by the solution of the present application.
  • FIG. 4 is a schematic diagram of the comparison between the segmentation result of FIG. 3 and the heart annotation data.
  • FIG. 5 is a schematic diagram of the linear analysis of the segmentation result of FIG. 3 and the heart annotation data; combining FIG. 4 and FIG. 5, it can be seen that the left ventricular myocardium segmented by the solution of the present application is correlated with the heart annotation data.
  • FIG. 6 provides a left ventricular myocardial segmentation device according to an embodiment of the present application.
  • the left ventricular myocardial segmentation device mainly includes an acquisition unit 301, a first feature extraction unit 302, a down-sampling convolution processing unit 303, a second feature extraction unit 304, an up-sampling convolution processing unit 305, and segmentation Unit 306.
  • the obtaining unit 301 is configured to: obtain a heart image
  • the down-sampling convolution processing unit 303 is configured to trigger the first feature extraction unit 302 after performing down-sampling convolution processing on the heart image acquired by the obtaining unit 301; when the iterative process of extracting the first feature information is not completed N times, based on The first feature information extracted by the current first feature extraction unit 302 performs down-sampling convolution processing on the latest output object of the down-sampling convolution processing unit 303, and then triggers the first feature extraction unit 302;
  • the first feature extraction unit 302 is configured to extract first feature information, where the first feature information is feature information of an output object of a latest down-sampling convolution process;
  • the up-sampling convolution processing unit 305 is configured to: when the iterative process of extracting the first feature information has been completed N times, perform up-sampling convolution processing on the object output by the down-sampling convolution processing unit 303 the Nth time, based on the first feature information extracted by the first feature extraction unit 302 the Nth time, and then trigger the second feature extraction unit 304; and, when the iterative process of extracting the second feature information has not been completed N times, perform up-sampling convolution processing on the object most recently output by the up-sampling convolution processing unit 305, based on the second feature information currently extracted by the second feature extraction unit 304, and then trigger the second feature extraction unit 304;
  • the second feature extraction unit 304 is configured to extract second feature information, where the second feature information is feature information of the latest output object of the upsampling convolution processing unit 305;
  • the segmentation unit 306 is configured to: when completing the iterative process of extracting the second feature information N times, based on the pre-trained classifier and the second feature information extracted the Nth time, perform an output object on the latest upsampling convolution processing Left ventricular myocardial segmentation;
  • N is not less than 2, preferably, N is taken as 4.
  • the first feature extraction unit 302 is specifically configured to: extract the first feature information based on the Dense network; the second feature extraction unit 304 is specifically configured to: extract the second feature information based on the Dense network.
  • optionally, for the above heart image, the number of convolution kernels of the Dense network used by the first feature extraction unit 302 for the (n+1)th extraction of the first feature information is twice the number of convolution kernels of the Dense network used for the nth extraction of the first feature information; the number of convolution kernels of the Dense network used by the second feature extraction unit 304 for the (n+1)th extraction of the second feature information is one half of the number of convolution kernels of the Dense network used for the nth extraction of the second feature information; and the number of convolution kernels of the Dense network used by the first feature extraction unit 302 for the nth extraction of the first feature information is equal to the number of convolution kernels of the Dense network used by the second feature extraction unit 304 for the first extraction of the second feature information, where n ∈ [1, N).
  • the left ventricular myocardial segmentation device further includes a normalization unit for performing normalization processing on the image size of the heart image acquired by the acquisition unit 301 to obtain a heart image of a preset size.
  • the down-sampling convolution processing unit 303 is specifically configured to perform down-sampling convolution processing on the heart image obtained by the normalization unit.
  • the left ventricular myocardial segmentation device may be used to implement the left ventricular myocardial segmentation method provided by the foregoing method embodiment.
  • the division of the functional modules in FIG. 6 is merely an example.
  • in practical applications, the above-mentioned functions may be allocated to different functional modules as needed, for example according to the configuration requirements of the corresponding hardware or for the convenience of software implementation, that is, the internal structure of the left ventricular myocardial segmentation device is divided into different functional modules to complete all or part of the functions described above.
  • the corresponding functional modules in this embodiment may be implemented by corresponding hardware, or may be completed by corresponding hardware executing corresponding software.
  • the embodiments described in this specification can apply the above-mentioned description principles, which will not be described in detail below.
  • the embodiment of the present application realizes automatic segmentation of the left ventricular myocardium in the heart image by extracting feature information (such as the second feature information) of the heart image and inputting a pre-trained classifier for recognition.
  • the machine automatically segments the left ventricular myocardium from the heart image. Therefore, compared with the traditional manual segmentation method by medical staff or medical experts, the proposed solution can effectively improve the efficiency of left ventricular myocardial segmentation.
  • the second feature information input into the classifier is obtained through multiple rounds of feature extraction, down-sampling convolution processing, and up-sampling convolution processing; therefore, the second feature information can better characterize deeper features in the heart image, thereby making the result of left ventricular myocardial segmentation more accurate.
  • referring to FIG. 7, the left ventricular myocardial segmentation device includes: a memory 41, a processor 42, and a computer program stored on the memory 41 and executable on the processor 42;
  • when the processor 42 executes the computer program, the left ventricular myocardial segmentation method described in the foregoing method embodiment is implemented.
  • the left ventricular myocardial segmentation device further includes:
  • At least one input device 43 and at least one output device 44 are provided.
  • the memory 41, the processor 42, the input device 43, and the output device 44 are connected via a bus 45.
  • the input device 43 and the output device 44 may be antennas.
  • the memory 41 may be a high-speed random access memory (RAM, Random Access Memory) memory, or may be a non-volatile memory (non-volatile memory), such as a magnetic disk memory.
  • the memory 41 is configured to store a set of executable program code, and the processor 42 is coupled to the memory 41.
  • an embodiment of the present application further provides a computer-readable storage medium.
  • the computer-readable storage medium may be disposed in the left ventricular myocardial segmentation device provided in any of the foregoing embodiments, and may be the memory in the embodiment shown in FIG. 7 above.
  • a computer program is stored on the computer-readable storage medium, and when the program is executed by a processor, the left ventricular myocardial segmentation method described in the foregoing method embodiment is implemented.
  • further, the computer-readable storage medium may also be various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk, or an optical disc.
  • the disclosed apparatus and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the modules is only a logical function division; in actual implementation there may be other division manners, for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or modules, which may be electrical, mechanical or other forms.
  • the modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.
  • each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist separately physically, or two or more modules may be integrated into one module.
  • the above integrated modules may be implemented in the form of hardware or software functional modules.
  • when the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the foregoing readable storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

A left ventricular myocardial segmentation method and device, and a computer-readable storage medium. The left ventricular myocardial segmentation method comprises: acquiring a heart image; for the heart image, iteratively performing down-sampling convolution processing and extraction of first feature information N times; if the iterative process of extracting the first feature information has been completed N times, iteratively performing, based on the first feature information extracted the Nth time, up-sampling convolution processing and extraction of second feature information N times on the output object of the Nth down-sampling convolution processing; and if the iterative process of extracting the second feature information has been completed N times, automatically segmenting the left ventricular myocardium in the heart image based on a pre-trained classifier. The above method effectively improves the efficiency of left ventricular myocardial segmentation.

Description

Left ventricular myocardial segmentation method, device and computer-readable storage medium
Technical Field
The present application relates to the field of biomedicine, and in particular to a left ventricular myocardial segmentation method and device, and a computer-readable storage medium.
Background
Left ventricular myocardial segmentation is a main step of cardiac function analysis; cardiac function analysis parameters such as left ventricular myocardial wall thickness and stroke volume are all based on accurate segmentation of the left ventricular myocardium. The gray level of the left ventricular myocardium is similar to that of the background, the papillary muscles and trabeculae inside the left ventricle are composed of the same tissue as the myocardium, and artifacts are also present. Left ventricular myocardial segmentation is therefore a difficult and important task.
At present, left ventricular myocardial segmentation mainly uses magnetic resonance imaging (MRI) technology to obtain heart images, and specially trained medical staff or medical experts manually segment the left ventricular myocardium in the heart images. Manual segmentation places high demands on expert knowledge and experience and is time-consuming.
Summary
The present application provides a left ventricular myocardial segmentation method and device, and a computer-readable storage medium, which can be used to improve the efficiency of left ventricular myocardial segmentation.
A first aspect of the present application provides a left ventricular myocardial segmentation method, comprising:
acquiring a heart image;
performing down-sampling convolution processing on the heart image;
extracting first feature information, wherein the first feature information is feature information of the output object of the most recent down-sampling convolution processing;
if the iterative process of extracting the first feature information has not been completed N times, performing down-sampling convolution processing on the output object of the most recent down-sampling convolution processing based on the currently extracted first feature information, and then iteratively performing the step of extracting the first feature information;
if the iterative process of extracting the first feature information has been completed N times, performing up-sampling convolution processing on the output object of the Nth down-sampling convolution processing based on the first feature information extracted the Nth time;
extracting second feature information, wherein the second feature information is feature information of the output object of the most recent up-sampling convolution processing;
if the iterative process of extracting the second feature information has not been completed N times, performing up-sampling convolution processing on the output object of the most recent up-sampling convolution processing based on the currently extracted second feature information, and then iteratively performing the step of extracting the second feature information;
if the iterative process of extracting the second feature information has been completed N times, performing left ventricular myocardial segmentation on the output object of the most recent up-sampling convolution processing based on a pre-trained classifier and the second feature information extracted the Nth time;
wherein N is not less than 2.
A second aspect of the present application provides a left ventricular myocardial segmentation device, comprising:
an acquisition unit, a first feature extraction unit, a down-sampling convolution processing unit, a second feature extraction unit, an up-sampling convolution processing unit, and a segmentation unit;
the acquisition unit is configured to: acquire a heart image;
the down-sampling convolution processing unit is configured to: trigger the first feature extraction unit after performing down-sampling convolution processing on the heart image; and, when the iterative process of extracting the first feature information has not been completed N times, perform down-sampling convolution processing on the most recent output object of the down-sampling convolution processing unit based on the first feature information currently extracted by the first feature extraction unit, and then trigger the first feature extraction unit;
the first feature extraction unit is configured to: extract first feature information, wherein the first feature information is feature information of the output object of the most recent down-sampling convolution processing;
the up-sampling convolution processing unit is configured to: when the iterative process of extracting the first feature information has been completed N times, perform up-sampling convolution processing on the object output by the down-sampling convolution processing unit the Nth time, based on the first feature information extracted by the first feature extraction unit the Nth time, and then trigger the second feature extraction unit; and, when the iterative process of extracting the second feature information has not been completed N times, perform up-sampling convolution processing on the object most recently output by the up-sampling convolution processing unit, based on the second feature information currently extracted by the second feature extraction unit, and then trigger the second feature extraction unit;
the second feature extraction unit is configured to: extract second feature information, wherein the second feature information is feature information of the most recent output object of the up-sampling convolution processing unit;
the segmentation unit is configured to: when the iterative process of extracting the second feature information has been completed N times, perform left ventricular myocardial segmentation on the output object of the most recent up-sampling convolution processing based on a pre-trained classifier and the second feature information extracted the Nth time;
wherein N is not less than 2.
A third aspect of the present application provides a left ventricular myocardial segmentation device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the left ventricular myocardial segmentation method provided by the first aspect of the present application.
A fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the left ventricular myocardial segmentation method provided by the first aspect of the present application.
As can be seen from the above, on the one hand, the solution of the present application extracts feature information of the heart image (such as the second feature information) and inputs it into a pre-trained classifier for recognition, thereby realizing automatic segmentation of the left ventricular myocardium in the heart image; since the left ventricular myocardium is segmented from the heart image automatically by a machine, the solution of the present application can effectively improve the efficiency of left ventricular myocardial segmentation compared with the traditional method of manual segmentation by medical staff or medical experts. On the other hand, since the second feature information input into the classifier in the solution of the present application is obtained through multiple rounds of feature extraction, down-sampling convolution processing and up-sampling convolution processing, the second feature information can better characterize deeper features in the heart image, making the result of left ventricular myocardial segmentation more accurate.
Brief Description of the Drawings
FIG. 1-a is a schematic flowchart of an embodiment of the left ventricular myocardial segmentation method provided by the present application;
FIG. 1-b is a schematic diagram of a Dense network structure provided by the present application;
FIG. 2 is a schematic diagram of a network structure used to implement the left ventricular myocardial segmentation method in an application scenario provided by the present application;
FIG. 3 is a schematic diagram of the segmentation result obtained by segmenting one of the heart images of a test patient in the application scenario shown in FIG. 2;
FIG. 4 is a schematic diagram of the comparison between the segmentation result of FIG. 3 and the heart annotation data;
FIG. 5 is a schematic diagram of the linear analysis of the segmentation result of FIG. 3 and the heart annotation data;
FIG. 6 is a schematic structural diagram of an embodiment of the left ventricular myocardial segmentation device provided by the present application;
FIG. 7 is a schematic structural diagram of another embodiment of the left ventricular myocardial segmentation device provided by the present application.
Detailed Description
To make the objectives, features and advantages of the present application more obvious and comprehensible, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
As shown in FIG. 1-a, a left ventricular myocardial segmentation method in an embodiment of the present application includes:
Step 101: Acquire a heart image.
In one application scenario, step 101 may be embodied as: acquiring a heart image by magnetic resonance imaging (MRI) technology; the heart image acquired in this case is an MRI image. MRI is explained as follows: MRI is a technology that obtains images of the internal structure of the human body through a magnetic field, and it has the advantage of being non-invasive, so the patient is well protected during the examination. In this embodiment, a heart image of a human body can be acquired by MRI technology.
In another application scenario, the heart image may also be obtained by ultrasound diagnosis (for example, B-mode ultrasound). Alternatively, the heart image to be segmented may also be obtained (for example, imported) from an existing heart image database, which is not limited here.
Step 102: Perform down-sampling convolution processing on the heart image.
In step 102, down-sampling convolution processing is performed on the above heart image (the original heart image or the heart image after normalization).
Specifically, step 102 includes: extracting feature information in the above heart image, and performing down-sampling convolution processing on the heart image based on the extracted feature information.
Optionally, the feature information in the above heart image is extracted through one convolution operation, and the formula applied in the convolution can be expressed as:
$$S(i, j) = (I * K)(i, j) = \sum_{m}\sum_{n} I(i + m, j + n)\, K(m, n)$$
where i, j are the pixel positions of the image, I and K respectively denote the image and the convolution kernel, and m, n are the width and height of the convolution kernel, respectively.
Alternatively, the feature information in the above heart image may also be extracted based on a Dense network or other image feature extraction algorithms, which is not limited here.
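For illustration, the following is a minimal NumPy sketch of the convolution operation expressed by the formula above; the image and kernel values are hypothetical and are not taken from the present application.

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide K over I and sum element-wise products:
    S(i, j) = sum_m sum_n I(i + m, j + n) * K(m, n)  (valid region only)."""
    m, n = kernel.shape                     # kernel width and height
    rows, cols = image.shape
    out = np.zeros((rows - m + 1, cols - n + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + m, j:j + n] * kernel)
    return out

# Hypothetical usage: a 128*128 cardiac image slice and a 5*5 kernel.
image = np.random.rand(128, 128)
kernel = np.random.rand(5, 5)
features = conv2d(image, kernel)            # shape (124, 124)
```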
Since the formats of the acquired heart images may not all be the same (for example, heart images acquired by MRI technology come in two different formats, 176*132 and 132*176), in order to facilitate training of the network and reduce redundant parameters, size normalization may be performed on the acquired heart image after it is obtained, so that the normalized heart images have a uniform size. Optionally, after step 101 and before step 102, the method may further include: performing image size normalization on the acquired heart image to obtain a heart image of a preset size. Step 102 may then be embodied as: performing down-sampling convolution processing on the heart image of the preset size.
Specifically, the preset size may be set to 128*128, for example. Of course, the size may also be preset to other values, which is not limited here.
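As a minimal sketch of this size-normalization step, the snippet below rescales heart images of either acquisition format (e.g. 176*132 or 132*176) to the preset 128*128 size; the use of bilinear interpolation is an assumption, since the application does not specify the resampling method.

```python
import torch
import torch.nn.functional as F

def normalize_size(image: torch.Tensor, size: int = 128) -> torch.Tensor:
    """Resample a single-channel heart image of shape (H, W) to (size, size)."""
    x = image[None, None, ...]              # add batch and channel dimensions
    x = F.interpolate(x, size=(size, size), mode="bilinear", align_corners=False)
    return x[0, 0]

# Hypothetical usage with the two MRI formats mentioned above.
img_a, img_b = torch.rand(176, 132), torch.rand(132, 176)
print(normalize_size(img_a).shape, normalize_size(img_b).shape)  # both (128, 128)
```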
Step 103: Extract first feature information.
The first feature information is feature information of the output object of the most recent down-sampling convolution processing.
In this embodiment of the present application, the feature information in the output object of the most recent down-sampling convolution processing (i.e., the first feature information) may be extracted based on image feature extraction technology.
Optionally, in this embodiment of the present application, the first feature information is extracted based on a Dense network. Specifically, a schematic structural diagram of the Dense network may be as shown in FIG. 1-b. The Dense network contains convolution layers and the ELU non-linear activation function. After an input object is processed by a convolution layer and the ELU non-linear function (shown as "convolution + ELU" in FIG. 1-b), the resulting output is concatenated with the input, that is, the input of each layer comes from the outputs of all preceding layers. Expressed as a function, this process can be written as $x_l = H_l([x_0, x_1, \ldots, x_{l-1}])$, where $[x_0, x_1, \ldots, x_{l-1}]$ denotes the concatenation of the outputs of layers 0 to l-1 and $H_l$ denotes a non-linear transformation. Using the Dense network strengthens feature propagation, makes more effective use of feature information, alleviates gradient vanishing, and reduces the number of parameters to a certain extent. The input and output images of the Dense network have the same size. This Dense network uses the ELU non-linear activation function: its positive-value characteristic alleviates the gradient vanishing problem, and, compared with the traditional ReLU activation function, its negative values reduce computational complexity, satisfy the zero-mean requirement, and reduce computational bias.
Of course, in step 103, the first feature information may also be extracted based on other neural networks or image feature extraction algorithms, which is not limited here.
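The following PyTorch sketch illustrates one reading of the Dense network described in step 103: each layer applies convolution plus ELU, and its output is concatenated with all earlier outputs, so that $x_l = H_l([x_0, x_1, \ldots, x_{l-1}])$ and the spatial size of the input is preserved. The number of layers and the growth rate are hypothetical choices rather than values given by the application.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block: every layer sees the concatenation of all previous outputs."""

    def __init__(self, in_channels: int, growth: int = 32,
                 num_layers: int = 4, kernel_size: int = 5):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            # H_l: convolution + ELU; padding keeps the spatial size unchanged
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size, padding=kernel_size // 2),
                nn.ELU(),
            ))
            channels += growth              # inputs accumulate layer by layer

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]                      # [x_0]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))   # H_l([x_0, ..., x_{l-1}])
            features.append(out)
        return torch.cat(features, dim=1)

# Hypothetical usage on a normalized 128*128 heart image.
block = DenseBlock(in_channels=1)
y = block(torch.rand(1, 1, 128, 128))       # spatial size preserved: 128*128
```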
Step 104: If the iterative process of extracting the first feature information has not been completed N times, perform down-sampling convolution processing on the output object of the most recent down-sampling convolution processing based on the currently extracted first feature information.
In step 104, down-sampling convolution processing is performed on the output object of the most recent down-sampling convolution processing based on the first feature information extracted in step 103; by reducing the resolution of the image, deeper feature information in the image can be extracted in the subsequent steps.
In step 104, the output object of the most recent down-sampling convolution processing and the currently extracted first feature information may be input into a down-sampling layer (which can be understood as a pooling layer) for down-sampling convolution processing; the output of the down-sampling layer is the output object of this down-sampling convolution processing.
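As a minimal sketch of one possible reading of this step, the output object and the currently extracted first feature information are concatenated along the channel dimension and passed through a pooling layer that halves the resolution; the concatenation is an assumption, since the application does not state exactly how the two inputs are combined in the down-sampling layer.

```python
import torch
import torch.nn.functional as F

def downsample_step(output_obj: torch.Tensor, feature_info: torch.Tensor) -> torch.Tensor:
    """Down-sampling convolution processing: combine both inputs, then halve the resolution."""
    combined = torch.cat([output_obj, feature_info], dim=1)   # (B, C1 + C2, H, W)
    return F.max_pool2d(combined, kernel_size=2)              # (B, C1 + C2, H/2, W/2)

# Hypothetical shapes for the first iteration on a 128*128 image.
out = downsample_step(torch.rand(1, 32, 128, 128), torch.rand(1, 32, 128, 128))
print(out.shape)                             # torch.Size([1, 64, 64, 64])
```

The up-sampling layer of steps 105 and 107 can be read analogously, with an interpolation or transposed convolution in place of the pooling operation.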
After the down-sampling convolution processing is performed on the output object of the most recent down-sampling convolution processing based on the currently extracted first feature information, the method returns to step 103, so that step 103 is performed iteratively. Through this iterative process, deep feature information in the heart image can be extracted step by step.
The above N is a preset value not less than 2. Optionally, N is taken as 4.
Step 105: If the iterative process of extracting the first feature information has been completed N times, perform up-sampling convolution processing on the output object of the most recent down-sampling convolution processing based on the first feature information extracted the Nth time.
Since the image is compressed by the down-sampling convolution processing during the iterative process of extracting the first feature information, in this embodiment of the present application, restoration of the compressed image begins after the iterative process of extracting the first feature information has been completed N times; this restoration process can be understood as the reverse operation of the aforementioned compression process.
Specifically, when the iterative process of extracting the first feature information has been completed N times, up-sampling convolution processing is performed on the output object of the most recent (i.e., the Nth) down-sampling convolution processing based on the first feature information extracted the Nth time, so as to gradually restore the resolution of the image.
In step 105, the output object of the most recent down-sampling convolution processing and the first feature information extracted the Nth time may be input into an up-sampling layer for up-sampling convolution processing; the output of the up-sampling layer is the output object of this up-sampling convolution processing.
Step 106: Extract second feature information.
The second feature information is feature information of the output object of the most recent up-sampling convolution processing.
Optionally, in this embodiment of the present application, the second feature information is extracted based on a Dense network. Specifically, for a description of the Dense network, reference may be made to the description in step 103, and details are not repeated here.
Of course, in step 106, the second feature information may also be extracted based on other neural networks or image feature extraction algorithms, which is not limited here.
Step 107: If the iterative process of extracting the second feature information has not been completed N times, perform up-sampling convolution processing on the output object of the most recent up-sampling convolution processing based on the currently extracted second feature information, and then return to step 106.
In this embodiment of the present application, when the iterative process of extracting the second feature information has not been completed N times (abbreviated as "iterative process not completed" in FIG. 1-a), it indicates that the currently compressed heart image still needs to be restored, and step 107 is performed at this time. Through this iterative process, the heart image can be restored step by step.
In step 107, the currently extracted second feature information and the output object of the most recent up-sampling convolution processing may be input into the up-sampling layer for up-sampling convolution processing; the output of the up-sampling layer is the output object of this up-sampling convolution processing.
Step 108: If the iterative process of extracting the second feature information has been completed N times, perform left ventricular myocardial segmentation on the output object of the most recent up-sampling convolution processing based on the pre-trained classifier and the second feature information extracted the Nth time.
When the iterative process of extracting the second feature information has been completed N times, it indicates that the feature extraction process for the above heart image has been completed. At this time, the second feature information extracted the Nth time and the output object of the most recent up-sampling convolution processing (i.e., the restored heart image) are input into the pre-trained classifier (for example, a softmax classifier) for left ventricular myocardial segmentation, that is, the left ventricular myocardium and the background information in the output object are separated. Specifically, based on the second feature information extracted the Nth time, the output object of the most recent up-sampling convolution processing and the above classifier, each pixel in the output object can be classified as foreground information (for example, left ventricular myocardium) or background information, thereby separating the left ventricular myocardium from the background in the heart image.
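A minimal sketch of this per-pixel classification, assuming the classifier is implemented as a 1*1 convolution followed by softmax over two classes (background and left ventricular myocardium); the channel count of the restored feature map is a hypothetical value.

```python
import torch
import torch.nn as nn

num_classes = 2                              # background vs. left ventricular myocardium
restored = torch.rand(1, 32, 128, 128)       # hypothetical restored output object

classifier = nn.Conv2d(32, num_classes, kernel_size=1)   # per-pixel class scores
probs = torch.softmax(classifier(restored), dim=1)       # softmax over the class channel
mask = probs.argmax(dim=1)                   # (1, 128, 128): 0 = background, 1 = myocardium
```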
It should be noted that the network used in this embodiment of the present application to automatically segment the left ventricular myocardium (hereinafter collectively referred to as the segmentation network; for example, the segmentation network may be composed of the aforementioned Dense networks, down-sampling layers, up-sampling layers, classifier, and so on) can be obtained by pre-training. In practical applications, the segmentation network can be trained by acquiring multiple heart images used for training the segmentation network, and the segmentation network can be optimized based on the Adam optimization algorithm. Specifically, the process of optimizing the segmentation network based on the Adam optimization algorithm can be implemented with reference to the prior art and is not repeated here. Further, the Dice coefficient can also be used to evaluate the accuracy of the above segmentation network, and its formula can be expressed as follows:
$$D = \frac{2\sum_{i} p_i g_i}{\sum_{i} p_i + \sum_{i} g_i}$$
where D denotes the Dice coefficient, which measures the segmentation result by comparing the similarity of the segmented areas, $p_i$ denotes the segmentation result, and $g_i$ denotes the annotated segmentation result. The Dice coefficient lies in the interval 0 to 1, and a larger value indicates higher accuracy of the segmentation result.
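A minimal sketch of the Dice coefficient defined above, together with one Adam optimization step; using 1 minus the (soft) Dice coefficient as the training criterion is an assumption, since the application only specifies Dice as an evaluation measure and Adam as the optimizer, and `segmentation_net` is a hypothetical stand-in for the trained network.

```python
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """D = 2 * sum(p_i * g_i) / (sum(p_i) + sum(g_i)); D lies in [0, 1], larger is better."""
    p, g = pred.reshape(-1), target.reshape(-1)
    return (2 * (p * g).sum() + eps) / (p.sum() + g.sum() + eps)

# Hypothetical training step for a network `segmentation_net` built from the
# Dense networks, down-sampling layers, up-sampling layers and classifier above:
# optimizer = torch.optim.Adam(segmentation_net.parameters(), lr=1e-4)
# probs = segmentation_net(batch_images)            # foreground probabilities
# loss = 1 - dice_coefficient(probs, batch_labels)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```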
As mentioned above, in step 103 and step 106, the first feature information and the second feature information may be extracted based on Dense networks. In this application scenario, the following constraints can be set: in the above iterative process of extracting the first feature information, the number of convolution kernels of the Dense network used for the (n+1)th extraction of the first feature information is twice the number of convolution kernels of the Dense network used for the nth extraction of the first feature information; in the above iterative process of extracting the second feature information, the number of convolution kernels of the Dense network used for the (n+1)th extraction of the second feature information is one half of the number of convolution kernels of the Dense network used for the nth extraction of the second feature information; and the number of convolution kernels of the Dense network used for the nth extraction of the first feature information is equal to the number of convolution kernels of the Dense network used for the first extraction of the second feature information, where n ∈ [1, N).
As can be seen from the above, on the one hand, this embodiment of the present application extracts feature information of the heart image (such as the second feature information) and inputs it into a pre-trained classifier for recognition, thereby realizing automatic segmentation of the left ventricular myocardium in the heart image; since the left ventricular myocardium is segmented from the heart image automatically by a machine, the solution of the present application can effectively improve the efficiency of left ventricular myocardial segmentation compared with the traditional method of manual segmentation by medical staff or medical experts. On the other hand, since the second feature information input into the classifier in the solution of the present application is obtained through multiple rounds of feature extraction, down-sampling convolution processing and up-sampling convolution processing, the second feature information can better characterize deeper features in the heart image, making the result of left ventricular myocardial segmentation more accurate.
To facilitate a better understanding of the left ventricular myocardial segmentation method in the embodiment shown in FIG. 1-a, the above left ventricular myocardial segmentation method is described below in a specific application scenario. A schematic diagram of the network structure in this application scenario may be as shown in FIG. 2. As can be seen from FIG. 2, the segmentation network in this application scenario includes two parts, compression/feature extraction and decompression/image restoration; the two parts are completely symmetrical, so as to ensure that the segmented image has the same size as the original image. After being processed by a Dense network (refer to the structure and related description of the Dense network in FIG. 1-b), the heart image is input into the compression/feature-extraction part for processing.
As can be seen from FIG. 2, the compression/feature-extraction part and the decompression/image-restoration part each include four segments of processing. For the compression/feature-extraction part, each segment is composed of a down-sampling layer and a Dense network (refer to the structure and related description of the Dense network in FIG. 1-b), so as to gradually extract deeper feature information of the image. Likewise, the decompression/image-restoration part also includes four segments of processing (that is, the aforementioned N is taken as 4), and each segment is composed of an up-sampling layer and a Dense network, so as to gradually restore the image.
Optionally, in the compression/feature-extraction part, the sizes of the convolution kernels of the Dense networks used in the four segments of processing are 5*5, 5*5, 5*5 and 5*5, respectively, and the numbers of convolution kernels are 32, 64, 128 and 256, respectively; the size of the image first input into a Dense network in the compression/feature-extraction part is 128*128, and the size of the image output by the compression/feature-extraction part is 8*8.
Correspondingly, in the decompression/image-restoration part, the sizes of the convolution kernels of the Dense networks used in the four segments of processing are 5*5, 5*5, 5*5 and 5*5, respectively, and the numbers of convolution kernels are 256, 128, 64 and 32, respectively; the size of the image first input into a Dense network in the decompression/image-restoration part is 8*8, and the size of the image output by the decompression/image-restoration part is 128*128.
After the decompression/image-restoration part finishes processing, its output is input into the softmax classifier, and the softmax classifier performs left ventricular myocardial segmentation on the heart image, that is, the left ventricular myocardium in the image is separated from the background (that is, the segmentation result is output).
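To make the application scenario above concrete, the following is a compact PyTorch sketch of a segmentation network with the stated layout: an initial Dense segment, four down-sampling segments with 5*5 kernels and kernel counts 32, 64, 128 and 256 (compressing a 128*128 input to 8*8), four symmetric up-sampling segments with kernel counts 256, 128, 64 and 32, and a softmax classifier. It is a simplified reading of FIG. 2 under stated assumptions: each Dense segment is reduced to a single convolution + ELU layer, the class and helper names are invented for this sketch, and plain sequential processing is used because the application does not specify how the extracted feature information is merged at each stage.

```python
import torch
import torch.nn as nn

def dense_stage(in_ch: int, out_ch: int, k: int = 5) -> nn.Sequential:
    """Stand-in for one Dense network segment: 5*5 convolution + ELU, size-preserving."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, k, padding=k // 2), nn.ELU())

class LVSegNet(nn.Module):
    def __init__(self, in_ch: int = 1, num_classes: int = 2):
        super().__init__()
        enc_ch = [32, 64, 128, 256]          # kernel counts, compression part
        dec_ch = [256, 128, 64, 32]          # kernel counts, restoration part
        self.stem = dense_stage(in_ch, enc_ch[0])   # Dense network before compression
        # Compression / feature extraction: 4 x (down-sampling layer + Dense segment)
        self.down = nn.ModuleList()
        ch = enc_ch[0]
        for c in enc_ch:
            self.down.append(nn.Sequential(nn.MaxPool2d(2), dense_stage(ch, c)))
            ch = c
        # Decompression / image restoration: 4 x (up-sampling layer + Dense segment)
        self.up = nn.ModuleList()
        for c in dec_ch:
            self.up.append(nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                                         dense_stage(ch, c)))
            ch = c
        self.classifier = nn.Conv2d(ch, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stem(x)                     # 128*128
        for stage in self.down:
            x = stage(x)                     # 64, 32, 16, then 8
        for stage in self.up:
            x = stage(x)                     # 16, 32, 64, then 128
        return torch.softmax(self.classifier(x), dim=1)

net = LVSegNet()
out = net(torch.rand(1, 1, 128, 128))        # (1, 2, 128, 128) per-pixel class probabilities
```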
FIG. 3 is a schematic diagram of the segmentation result obtained by segmenting one of the heart images of a test patient in the application scenario shown in FIG. 2; as can be seen from FIG. 3, the left ventricular myocardium can be automatically segmented by the solution of the present application.
FIG. 4 is a schematic diagram of the comparison between the segmentation result of FIG. 3 and the heart annotation data, and FIG. 5 is a schematic diagram of the linear analysis of the segmentation result of FIG. 3 and the heart annotation data; combining FIG. 4 and FIG. 5, it can be seen that the left ventricular myocardium segmented by the solution of the present application is correlated with the heart annotation data.
FIG. 6 shows a left ventricular myocardial segmentation device provided by an embodiment of the present application. As shown in FIG. 6, the left ventricular myocardial segmentation device mainly includes: an acquisition unit 301, a first feature extraction unit 302, a down-sampling convolution processing unit 303, a second feature extraction unit 304, an up-sampling convolution processing unit 305, and a segmentation unit 306.
The acquisition unit 301 is configured to: acquire a heart image.
The down-sampling convolution processing unit 303 is configured to: trigger the first feature extraction unit 302 after performing down-sampling convolution processing on the heart image acquired by the acquisition unit 301; and, when the iterative process of extracting the first feature information has not been completed N times, perform down-sampling convolution processing on the most recent output object of the down-sampling convolution processing unit 303 based on the first feature information currently extracted by the first feature extraction unit 302, and then trigger the first feature extraction unit 302.
The first feature extraction unit 302 is configured to: extract first feature information, where the first feature information is feature information of the output object of the most recent down-sampling convolution processing.
The up-sampling convolution processing unit 305 is configured to: when the iterative process of extracting the first feature information has been completed N times, perform up-sampling convolution processing on the object output by the down-sampling convolution processing unit 303 the Nth time, based on the first feature information extracted by the first feature extraction unit 302 the Nth time, and then trigger the second feature extraction unit 304; and, when the iterative process of extracting the second feature information has not been completed N times, perform up-sampling convolution processing on the object most recently output by the up-sampling convolution processing unit 305, based on the second feature information currently extracted by the second feature extraction unit 304, and then trigger the second feature extraction unit 304.
The second feature extraction unit 304 is configured to: extract second feature information, where the second feature information is feature information of the most recent output object of the up-sampling convolution processing unit 305.
The segmentation unit 306 is configured to: when the iterative process of extracting the second feature information has been completed N times, perform left ventricular myocardial segmentation on the output object of the most recent up-sampling convolution processing based on a pre-trained classifier and the second feature information extracted the Nth time.
The above N is not less than 2; preferably, N is taken as 4.
Optionally, the first feature extraction unit 302 is specifically configured to: extract the first feature information based on a Dense network; and the second feature extraction unit 304 is specifically configured to: extract the second feature information based on a Dense network.
Optionally, for the above heart image, the number of convolution kernels of the Dense network used by the first feature extraction unit 302 for the (n+1)th extraction of the first feature information is twice the number of convolution kernels of the Dense network used for the nth extraction of the first feature information; the number of convolution kernels of the Dense network used by the second feature extraction unit 304 for the (n+1)th extraction of the second feature information is one half of the number of convolution kernels of the Dense network used for the nth extraction of the second feature information; and the number of convolution kernels of the Dense network used by the first feature extraction unit 302 for the nth extraction of the first feature information is equal to the number of convolution kernels of the Dense network used by the second feature extraction unit 304 for the first extraction of the second feature information, where n ∈ [1, N).
Optionally, the left ventricular myocardial segmentation device further includes: a normalization unit, configured to perform image size normalization on the heart image acquired by the acquisition unit 301 to obtain a heart image of a preset size; the down-sampling convolution processing unit 303 is then specifically configured to: perform down-sampling convolution processing on the heart image obtained by the above normalization unit.
It should be noted that the left ventricular myocardial segmentation device can be used to implement the left ventricular myocardial segmentation method provided by the foregoing method embodiment. In the left ventricular myocardial segmentation device illustrated in FIG. 6, the division of the functional modules is merely an example; in practical applications, the above functions may be allocated to different functional modules as needed, for example according to the configuration requirements of the corresponding hardware or for the convenience of software implementation, that is, the internal structure of the left ventricular myocardial segmentation device is divided into different functional modules to complete all or part of the functions described above. Moreover, in practical applications, the corresponding functional modules in this embodiment may be implemented by corresponding hardware, or may be completed by corresponding hardware executing corresponding software. The above description principles apply to each of the embodiments provided in this specification and are not repeated below.
As can be seen from the above, this embodiment of the present application extracts feature information of the heart image (such as the second feature information) and inputs it into a pre-trained classifier for recognition, thereby realizing automatic segmentation of the left ventricular myocardium in the heart image; since the left ventricular myocardium is segmented from the heart image automatically by a machine, the solution of the present application can effectively improve the efficiency of left ventricular myocardial segmentation compared with the traditional method of manual segmentation by medical staff or medical experts. On the other hand, since the second feature information input into the classifier in the solution of the present application is obtained through multiple rounds of feature extraction, down-sampling convolution processing and up-sampling convolution processing, the second feature information can better characterize deeper features in the heart image, making the result of left ventricular myocardial segmentation more accurate.
An embodiment of the present application provides a left ventricular myocardial segmentation device. Referring to FIG. 7, the left ventricular myocardial segmentation device includes:
a memory 41, a processor 42, and a computer program stored on the memory 41 and executable on the processor 42; when the processor 42 executes the computer program, the left ventricular myocardial segmentation method described in the foregoing method embodiment is implemented.
Further, the left ventricular myocardial segmentation device also includes:
at least one input device 43 and at least one output device 44.
The memory 41, the processor 42, the input device 43 and the output device 44 are connected via a bus 45.
The input device 43 and the output device 44 may specifically be antennas.
The memory 41 may be a high-speed random access memory (RAM), or may be a non-volatile memory, such as a disk memory. The memory 41 is configured to store a set of executable program code, and the processor 42 is coupled to the memory 41.
Further, an embodiment of the present application also provides a computer-readable storage medium. The computer-readable storage medium may be disposed in the left ventricular myocardial segmentation device of any of the foregoing embodiments, and may be the memory in the embodiment shown in FIG. 7 above. A computer program is stored on the computer-readable storage medium, and when the program is executed by a processor, the left ventricular myocardial segmentation method described in the foregoing method embodiment is implemented. Further, the computer-readable storage medium may also be various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a RAM, a magnetic disk or an optical disc.
In the several embodiments provided in the present application, it should be understood that the disclosed device and method may be implemented in other ways. For example, the device embodiments described above are only illustrative; for example, the division of the modules is only a logical function division, and there may be other division manners in actual implementation, for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, devices or modules, and may be electrical, mechanical or in other forms.
The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules, that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of this embodiment.
In addition, the functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist physically on its own, or two or more modules may be integrated into one module. The above integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a readable storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned readable storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk or an optical disc.
It should be noted that, for brevity of description, the foregoing method embodiments are all expressed as a series of action combinations; however, those skilled in the art should know that the present application is not limited by the described order of actions, because, according to the present application, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for a part not described in detail in one embodiment, reference may be made to the relevant description of other embodiments.
The above is a description of the left ventricular myocardial segmentation method and device and the computer-readable storage medium provided by the present application. For those skilled in the art, there will be changes in the specific implementation and application scope based on the ideas of the embodiments of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

  1. A left ventricular myocardial segmentation method, characterized by comprising:
    acquiring a heart image;
    performing down-sampling convolution processing on the heart image;
    extracting first feature information, wherein the first feature information is feature information of the output object of the most recent down-sampling convolution processing;
    if the iterative process of extracting the first feature information has not been completed N times, performing down-sampling convolution processing on the output object of the most recent down-sampling convolution processing based on the currently extracted first feature information, and then iteratively performing the step of extracting the first feature information;
    if the iterative process of extracting the first feature information has been completed N times, performing up-sampling convolution processing on the output object of the Nth down-sampling convolution processing based on the first feature information extracted the Nth time;
    extracting second feature information, wherein the second feature information is feature information of the output object of the most recent up-sampling convolution processing;
    if the iterative process of extracting the second feature information has not been completed N times, performing up-sampling convolution processing on the output object of the most recent up-sampling convolution processing based on the currently extracted second feature information, and then iteratively performing the step of extracting the second feature information;
    if the iterative process of extracting the second feature information has been completed N times, performing left ventricular myocardial segmentation on the output object of the most recent up-sampling convolution processing based on a pre-trained classifier and the second feature information extracted the Nth time;
    wherein N is not less than 2.
  2. The left ventricular myocardial segmentation method according to claim 1, characterized in that extracting the first feature information is: extracting the first feature information based on a Dense network;
    and extracting the second feature information is: extracting the second feature information based on a Dense network.
  3. The left ventricular myocardial segmentation method according to claim 2, characterized in that:
    in the iterative process of extracting the first feature information, the number of convolution kernels of the Dense network used for the (n+1)th extraction of the first feature information is twice the number of convolution kernels of the Dense network used for the nth extraction of the first feature information;
    and, in the iterative process of extracting the second feature information, the number of convolution kernels of the Dense network used for the (n+1)th extraction of the second feature information is one half of the number of convolution kernels of the Dense network used for the nth extraction of the second feature information;
    and the number of convolution kernels of the Dense network used for the nth extraction of the first feature information is equal to the number of convolution kernels of the Dense network used for the first extraction of the second feature information;
    wherein n ∈ [1, N).
  4. The left ventricular myocardial segmentation method according to any one of claims 1 to 3, characterized in that, after acquiring the heart image, the method further comprises:
    performing image size normalization on the acquired heart image to obtain a heart image of a preset size;
    and performing down-sampling convolution processing on the heart image is:
    performing down-sampling convolution processing on the heart image of the preset size.
  5. The left ventricular myocardial segmentation method according to any one of claims 1 to 3, characterized in that N is taken as 4.
  6. A left ventricular myocardial segmentation device, characterized by comprising: an acquisition unit, a first feature extraction unit, a down-sampling convolution processing unit, a second feature extraction unit, an up-sampling convolution processing unit, and a segmentation unit;
    the acquisition unit is configured to: acquire a heart image;
    the down-sampling convolution processing unit is configured to: trigger the first feature extraction unit after performing down-sampling convolution processing on the heart image; and, when the iterative process of extracting the first feature information has not been completed N times, perform down-sampling convolution processing on the most recent output object of the down-sampling convolution processing unit based on the first feature information currently extracted by the first feature extraction unit, and then trigger the first feature extraction unit;
    the first feature extraction unit is configured to: extract first feature information, wherein the first feature information is feature information of the output object of the most recent down-sampling convolution processing;
    the up-sampling convolution processing unit is configured to: when the iterative process of extracting the first feature information has been completed N times, perform up-sampling convolution processing on the object output by the down-sampling convolution processing unit the Nth time, based on the first feature information extracted by the first feature extraction unit the Nth time, and then trigger the second feature extraction unit; and, when the iterative process of extracting the second feature information has not been completed N times, perform up-sampling convolution processing on the object most recently output by the up-sampling convolution processing unit, based on the second feature information currently extracted by the second feature extraction unit, and then trigger the second feature extraction unit;
    the second feature extraction unit is configured to: extract second feature information, wherein the second feature information is feature information of the most recent output object of the up-sampling convolution processing unit;
    the segmentation unit is configured to: when the iterative process of extracting the second feature information has been completed N times, perform left ventricular myocardial segmentation on the output object of the most recent up-sampling convolution processing based on a pre-trained classifier and the second feature information extracted the Nth time;
    wherein N is not less than 2.
  7. The left ventricular myocardial segmentation device according to claim 6, characterized in that the first feature extraction unit is specifically configured to: extract the first feature information based on a Dense network;
    and the second feature extraction unit is specifically configured to: extract the second feature information based on a Dense network.
  8. The left ventricular myocardial segmentation device according to claim 7, characterized in that, for the heart image, the number of convolution kernels of the Dense network used by the first feature extraction unit for the (n+1)th extraction of the first feature information is twice the number of convolution kernels of the Dense network used for the nth extraction of the first feature information;
    and the number of convolution kernels of the Dense network used by the second feature extraction unit for the (n+1)th extraction of the second feature information is one half of the number of convolution kernels of the Dense network used for the nth extraction of the second feature information;
    and the number of convolution kernels of the Dense network used by the first feature extraction unit for the nth extraction of the first feature information is equal to the number of convolution kernels of the Dense network used by the second feature extraction unit for the first extraction of the second feature information;
    wherein n ∈ [1, N).
  9. A left ventricular myocardial segmentation device, characterized by comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the method according to any one of claims 1 to 5.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 5.
PCT/CN2019/078892 2018-07-24 2019-03-20 Left ventricular myocardial segmentation method, device and computer-readable storage medium WO2020019740A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810816875.7 2018-07-24
CN201810816875.7A CN109285157A (zh) 2018-07-24 Left ventricular myocardial segmentation method, device and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020019740A1 true WO2020019740A1 (zh) 2020-01-30

Family

ID=65183108

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/078892 WO2020019740A1 (zh) 2018-07-24 2019-03-20 左心室心肌分割方法、装置及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN109285157A (zh)
WO (1) WO2020019740A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798385A (zh) * 2020-06-10 2020-10-20 Oppo广东移动通信有限公司 图像处理方法及装置、计算机可读介质和电子设备
WO2022161192A1 (zh) * 2021-02-01 2022-08-04 之江实验室 一种spect三维重建图像左心室自动分割的方法

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109285157A (zh) * 2018-07-24 2019-01-29 深圳先进技术研究院 左心室心肌分割方法、装置及计算机可读存储介质
CN112418240A (zh) * 2019-08-21 2021-02-26 上海商汤临港智能科技有限公司 图像处理方法、装置、设备和存储介质
CN110731777B (zh) * 2019-09-16 2023-07-25 平安科技(深圳)有限公司 基于图像识别的左心室测量方法、装置以及计算机设备
CN111311609B (zh) * 2020-02-14 2021-07-02 推想医疗科技股份有限公司 一种图像分割方法、装置、电子设备及存储介质
CN111402274B (zh) * 2020-04-14 2023-05-26 上海交通大学医学院附属上海儿童医学中心 一种磁共振左心室图像分割的处理方法、模型及训练方法
CN111814833B (zh) * 2020-06-11 2024-06-07 浙江大华技术股份有限公司 票据处理模型的训练方法及图像处理方法、图像处理设备
CN112932535B (zh) * 2021-02-01 2022-10-18 杜国庆 一种医学图像分割及检测方法

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203327A (zh) * 2016-07-08 2016-12-07 清华大学 基于卷积神经网络的肺部肿瘤识别系统及方法
CN106920227A (zh) * 2016-12-27 2017-07-04 北京工业大学 基于深度学习与传统方法相结合的视网膜血管分割方法
US20180150684A1 (en) * 2016-11-30 2018-05-31 Shenzhen AltumView Technology Co., Ltd. Age and gender estimation using small-scale convolutional neural network (cnn) modules for embedded systems
CN109285158A (zh) * 2018-07-24 2019-01-29 深圳先进技术研究院 血管壁斑块分割方法、装置及计算机可读存储介质
CN109285157A (zh) * 2018-07-24 2019-01-29 深圳先进技术研究院 左心室心肌分割方法、装置及计算机可读存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017091833A1 (en) * 2015-11-29 2017-06-01 Arterys Inc. Automated cardiac volume segmentation
US10453200B2 (en) * 2016-11-02 2019-10-22 General Electric Company Automated segmentation using deep learned priors
CN107909026B (zh) * 2016-11-30 2021-08-13 深圳奥瞳科技有限责任公司 基于小规模卷积神经网络年龄和/或性别评估方法及系统
CN107240102A (zh) * 2017-04-20 2017-10-10 合肥工业大学 基于深度学习算法的恶性肿瘤计算机辅助早期诊断方法
CN108154468B (zh) * 2018-01-12 2022-03-01 平安科技(深圳)有限公司 肺结节探测方法、应用服务器及计算机可读存储介质

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111798385A (zh) * 2020-06-10 2020-10-20 Oppo广东移动通信有限公司 图像处理方法及装置、计算机可读介质和电子设备
CN111798385B (zh) * 2020-06-10 2023-09-15 Oppo广东移动通信有限公司 图像处理方法及装置、计算机可读介质和电子设备
WO2022161192A1 (zh) * 2021-02-01 2022-08-04 之江实验室 一种spect三维重建图像左心室自动分割的方法

Also Published As

Publication number Publication date
CN109285157A (zh) 2019-01-29

Similar Documents

Publication Publication Date Title
WO2020019740A1 (zh) 左心室心肌分割方法、装置及计算机可读存储介质
Özyurt et al. A novel liver image classification method using perceptual hash-based convolutional neural network
CN109166130B (zh) 一种图像处理方法及图像处理装置
CN106529447B (zh) 一种小样本人脸识别方法
WO2020019739A1 (zh) 血管壁斑块分割方法、装置及计算机可读存储介质
WO2017215284A1 (zh) 基于卷积神经网络的胃肠道肿瘤显微高光谱图像处理方法
WO2020215676A1 (zh) 基于残差网络的图像识别方法、装置、设备及存储介质
CN109977955B (zh) 一种基于深度学习的宫颈癌前病变识别的方法
Mahapatra et al. Progressive generative adversarial networks for medical image super resolution
WO2022000183A1 (zh) 一种ct图像降噪系统及方法
WO2021017006A1 (zh) 图像处理方法及装置、神经网络及训练方法、存储介质
CN107437252B (zh) 用于黄斑病变区域分割的分类模型构建方法和设备
Msonda et al. Spatial pyramid pooling in deep convolutional networks for automatic tuberculosis diagnosis
CN113012173A (zh) 基于心脏mri的心脏分割模型和病理分类模型训练、心脏分割、病理分类方法及装置
CN110211205B (zh) 图像处理方法、装置、设备和存储介质
CN110570394B (zh) 医学图像分割方法、装置、设备及存储介质
WO2021159811A1 (zh) 青光眼辅助诊断装置、方法及存储介质
US20230394670A1 (en) Anatomically-informed deep learning on contrast-enhanced cardiac mri for scar segmentation and clinical feature extraction
Ribeiro et al. Exploring deep learning image super-resolution for iris recognition
CN111951281A (zh) 图像分割方法、装置、设备及存储介质
Ciurte et al. A semi-supervised patch-based approach for segmentation of fetal ultrasound imaging
CN111369564B (zh) 一种图像处理的方法、模型训练的方法及装置
CN116188435B (zh) 一种基于模糊逻辑的医学图像深度分割方法
Wu et al. COVID-19 diagnosis utilizing wavelet-based contrastive learning with chest CT images
WO2020118826A1 (zh) 一种左心室图像分割方法、装置、设备及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19840169

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12.05.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19840169

Country of ref document: EP

Kind code of ref document: A1