WO2019223121A1 - Lesion site identification method and device, computer device, and readable storage medium - Google Patents

Lesion site identification method and device, computer device, and readable storage medium

Info

Publication number
WO2019223121A1
WO2019223121A1 · PCT/CN2018/099614 · CN2018099614W
Authority
WO
WIPO (PCT)
Prior art keywords
magnetic resonance
resonance image
image
preset
probability
Prior art date
Application number
PCT/CN2018/099614
Other languages
English (en)
French (fr)
Inventor
王健宗
吴天博
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019223121A1 publication Critical patent/WO2019223121A1/zh


Classifications

    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis > G06T 7/0002 Inspection of images, e.g. flaw detection > G06T 7/0012 Biomedical image inspection
    • G06T 7/00 Image analysis > G06T 7/10 Segmentation; Edge detection > G06T 7/11 Region-based segmentation
    • G06T 7/00 Image analysis > G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration > G06T 7/38 Registration of image sequences
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement > G06T 2207/10 Image acquisition modality > G06T 2207/10024 Color image
    • G06T 2207/10 Image acquisition modality > G06T 2207/10072 Tomographic images > G06T 2207/10088 Magnetic resonance imaging [MRI]
    • G06T 2207/20 Special algorithmic details > G06T 2207/20021 Dividing image into blocks, subimages or windows
    • G06T 2207/20 Special algorithmic details > G06T 2207/20081 Training; Learning
    • G06T 2207/20 Special algorithmic details > G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing > G06T 2207/30004 Biomedical image processing > G06T 2207/30096 Tumor; Lesion

Definitions

  • the present application relates to the field of image processing technology, and in particular, to a method and device for identifying a lesion part in a magnetic resonance image, a computer device, and a readable storage medium.
  • Rectal cancer refers to cancer occurring between the dentate line and the rectosigmoid junction. It is one of the most common malignant tumors of the digestive tract, and its incidence is gradually increasing among younger patients.
  • At present, the main diagnostic method for rectal cancer is for doctors to diagnose by analyzing magnetic resonance imaging (MRI) images.
  • Diagnosis by doctors consumes considerable manpower and material resources, and the diagnosis results depend largely on the doctor's professional level.
  • In recent years, deep learning has made rapid progress in various fields. How to use deep learning to achieve high-accuracy lesion recognition has become an urgent problem.
  • a first aspect of the present application provides a method for identifying a lesion, the method including:
  • taking the pre-processed first magnetic resonance image as the first component, the pre-processed second magnetic resonance image as the second component, and the pre-processed third magnetic resonance image as the third component, fusing the pre-processed first magnetic resonance image, second magnetic resonance image, and third magnetic resonance image into a color image;
  • predicting each block of the color image by using a trained convolutional neural network model to obtain the lesion probability of the center point of each block, wherein the convolutional neural network model is trained using images labeled with lesion areas;
  • a second aspect of the present application provides a device for identifying a lesion, the device including:
  • An acquiring unit configured to acquire a first magnetic resonance image, a second magnetic resonance image, and a third magnetic resonance image obtained by performing magnetic resonance scanning on a preset part of a human body by applying different magnetic resonance scanning sequences;
  • a preprocessing unit configured to preprocess the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image
  • a fusion unit configured to use a preprocessed first magnetic resonance image as a first component, a preprocessed second magnetic resonance image as a second component, and a preprocessed third magnetic resonance image as a third component, Fusing the pre-processed first magnetic resonance image, the second magnetic resonance image and the third magnetic resonance image into a color image;
  • a division unit configured to divide the color image into a plurality of blocks of a preset size
  • a prediction unit configured to predict each block of the color image by using a trained convolutional neural network model to obtain the lesion probability at the center point of each block, wherein the convolutional neural network model is trained using images labeled with lesion areas;
  • a judging unit configured to judge whether the preset part is a diseased part and determine the lesion position according to the lesion probability of the center point of each block in the color image.
  • a third aspect of the present application provides a computer device, which includes a memory and a processor; the memory stores at least one computer-readable instruction, and the processor implements the foregoing method for identifying a lesion site when executing the at least one computer-readable instruction.
  • a fourth aspect of the present application provides a non-volatile readable storage medium on which at least one computer-readable instruction is stored; the at least one computer-readable instruction, when executed by a processor, implements the foregoing method for identifying a lesion site.
  • This application acquires a first magnetic resonance image, a second magnetic resonance image, and a third magnetic resonance image obtained by performing magnetic resonance scanning on a preset part of a human body with different magnetic resonance scanning sequences; pre-processes the first, second, and third magnetic resonance images; takes the pre-processed first magnetic resonance image as the first component, the pre-processed second magnetic resonance image as the second component, and the pre-processed third magnetic resonance image as the third component, and fuses them into a color image; divides the color image into a plurality of blocks of a preset size; predicts each block of the color image with a trained convolutional neural network model to obtain the lesion probability of the center point of each block, wherein the convolutional neural network model is trained using images labeled with lesion areas; and judges whether the preset part is a diseased part and determines the lesion position according to the lesion probability of the center point of each block.
  • This application identifies the lesion site using images of different sequences (that is, the first, second, and third magnetic resonance images obtained with different magnetic resonance scanning sequences). Compared with identifying a lesion site from a single-sequence image (that is, a magnetic resonance image obtained with a single scanning sequence), this improves recognition accuracy. In addition, the convolutional neural network model of the present application predicts the lesion probability of each block's center point from the blocks of the fused color image; compared with predicting a lesion probability for every single pixel in the image, this improves detection efficiency. The present application therefore realizes fast and accurate identification of a lesion site.
  • FIG. 1 is a flowchart of a method for identifying a lesion site according to a first embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a convolutional neural network model used in the present application.
  • FIG. 3 is a structural diagram of a lesion site identification device provided in Embodiment 2 of the present application.
  • FIG. 4 is a schematic diagram of a computer device according to a third embodiment of the present application.
  • the method for identifying a lesion site of the present application is applied in one or more computer devices.
  • the computer device is a device capable of automatically performing numerical calculation and/or information processing in accordance with instructions set or stored in advance. Its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), embedded equipment, and the like.
  • FIG. 1 is a flowchart of a method for identifying a lesion site according to a first embodiment of the present application.
  • the method for identifying a lesion site is applied to a computer device.
  • the method for identifying a diseased part recognizes the diseased part from magnetic resonance images of different sequences, judging whether the preset part is a diseased part and determining the position of the lesion.
  • the method for identifying a lesion includes the following steps:
  • Step 101 Obtain a first magnetic resonance image, a second magnetic resonance image, and a third magnetic resonance image obtained by performing magnetic resonance scanning on a preset part of a human body by applying different magnetic resonance scanning sequences.
  • Magnetic Resonance Imaging (MRI) is a type of tomography: it uses the magnetic resonance phenomenon to obtain electromagnetic signals from the human body and reconstructs human body information to produce MRI images.
  • the method of identifying a lesion site may be used to detect colorectal cancer (which may be rectal cancer or colon cancer) and locate the cancerous site of the large intestine.
  • the preset site is the large intestine.
  • the preset part may be other parts or organs of the human body, and the lesion part identification may be applied to detect the lesions of other parts or organs of the human body.
  • MRI is a multi-parameter imaging technique.
  • the contrast of the image is related to the number of hydrogen protons contained in the tissue (i.e., human tissue), the T1 and T2 times of the tissue, and the fluid flow speed; applying different magnetic resonance scan sequences yields images that emphasize different ones of these factors. Different images obtained from the same anatomical location (i.e., the same slice) using different magnetic resonance scan sequences provide different parameter information about the tissue and can be used to identify the lesion site.
  • the first magnetic resonance image may be a T2w (T2weighted) image
  • the second magnetic resonance image may be a diffusion-weighted imaging (DWI) image with a first diffusion sensitivity coefficient.
  • the third magnetic resonance image may be a DWI image with a second diffusion sensitivity coefficient. It should be noted that the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image are obtained by scanning the same anatomical position (that is, the same slice) of the preset part.
  • T2w imaging reflects the difference in T2 relaxation (transverse relaxation) between tissues: the longer a tissue's T2, the slower the recovery and the stronger the signal (the image appears whiter); the shorter a tissue's T2, the faster the recovery and the weaker the signal (the image appears blacker). From the T2w image it can be judged what substance lies at different positions in the image.
  • DWI is an imaging method based on the flow-void effect, one of the MR imaging elements; it reflects the microscopic motion of water molecules in living tissue in a macroscopic image. Diffusion-weighted imaging observes the microscopic diffusion of water molecules.
  • the diffusion sensitivity coefficient, also called the b value, characterizes the timing, amplitude, and shape of the gradient magnetic field applied during the magnetic resonance scan. Magnetic resonance scanning equipment can simultaneously obtain multiple DWI images with different b values at one slice.
  • the first diffusion sensitivity coefficient may be a high diffusion sensitivity coefficient, and the second diffusion sensitivity coefficient a low diffusion sensitivity coefficient.
  • for example, the first diffusion sensitivity coefficient is 1000 and the second diffusion sensitivity coefficient is 0, the unit of the diffusion sensitivity coefficient being s/mm².
  • the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image may also be images obtained by scanning the preset part with other magnetic resonance scanning sequences.
  • the computer device to which the method for identifying a lesion is applied may receive the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image from another computing device (for example, a server that stores the three images in advance).
  • the computer device applying the method for identifying a diseased part may control a magnetic resonance device to scan a predetermined part of a human body to obtain a first magnetic resonance image, a second magnetic resonance image, and a third magnetic resonance image.
  • alternatively, the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image may be stored in a memory of the computer device to which the method for identifying a lesion is applied, and the computer device reads the three images from the memory.
  • Step 102 Preprocess the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image.
  • the pre-processing of the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image may include standardizing the three images and performing image registration on them.
  • the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image may each be standardized based on the mean and standard deviation of their pixel values.
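  • As a concrete reading of this step, the standardization can be sketched as a per-image z-score; the exact formula is not spelled out in the text, so this form is an assumption:

```python
import numpy as np

def standardize(image):
    """Z-score standardization: subtract the mean of the image's pixel
    values and divide by their standard deviation (one common reading
    of a mean/standard-deviation-based standardization)."""
    mean = image.mean()
    std = image.std()
    return (image - mean) / std if std > 0 else image - mean

# Each of the three sequence images would be standardized independently.
img = np.array([[0.0, 2.0], [4.0, 6.0]])
out = standardize(img)
# After standardization the image has zero mean and unit variance.
```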
  • for image registration, the mutual information of two images may be calculated, and the transform that maximizes the mutual information of the two images is sought, thereby registering the two images.
  • the mutual information may be computed as I(A, B) = Σ_{a,b} p(a, b) log( p(a, b) / (p(a) p(b)) ), where a and b index the ranges of pixel values (usually gray values) in image A and image B; #a is the number of pixels in image A whose values fall in range a, and #b the number of pixels in image B whose values fall in range b; #A and #B are the total numbers of pixels of image A and image B; p(a) = #a / #A is the probability that a pixel value in image A falls in range a, p(b) = #b / #B is the probability that a pixel value in image B falls in range b, and p(a, b) is the joint probability estimated from the joint histogram of the two images.
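  • A minimal sketch of this mutual-information measure computed from the joint histogram of two images; the bin count and the toy images are illustrative assumptions:

```python
import numpy as np

def mutual_information(A, B, bins=8):
    """Mutual information of two equally sized images from their joint
    histogram: I(A,B) = sum_{a,b} p(a,b) * log(p(a,b) / (p(a) p(b))).
    Registration searches for the transform that maximizes this value."""
    joint, _, _ = np.histogram2d(A.ravel(), B.ravel(), bins=bins)
    p_ab = joint / joint.sum()          # joint probability p(a, b)
    p_a = p_ab.sum(axis=1)              # marginal p(a) = #a / #A
    p_b = p_ab.sum(axis=0)              # marginal p(b) = #b / #B
    nz = p_ab > 0                       # skip empty bins to avoid log(0)
    outer = p_a[:, None] * p_b[None, :]
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / outer[nz])))

rng = np.random.default_rng(0)
A = rng.random((32, 32))
mi_self = mutual_information(A, A)                     # identical images: high MI
mi_rand = mutual_information(A, rng.random((32, 32)))  # unrelated images: low MI
```

In registration, one image is repeatedly deformed and the mutual information against the reference recomputed; the deformation that maximizes it is kept.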
  • Registration can include:
  • a first reference point is selected on the first magnetic resonance image
  • a second reference point is selected on the second magnetic resonance image
  • a third reference point is selected on the third magnetic resonance image.
  • the first reference point, the second reference point, and the third reference point are points at the same position of the preset part;
  • one of the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image may be selected as a reference, and the images not selected as the reference are aligned with the image selected as the reference.
  • the first magnetic resonance image is a T2w image
  • the second magnetic resonance image and the third magnetic resonance image are DWI images with different diffusion sensitivity coefficients.
  • the T2w image can be selected as the reference, and the two DWI images aligned to the T2w image.
  • DWI images with different diffusion sensitivity coefficients can be scanned at the same time. Therefore, when aligning two DWI images with different diffusion sensitivity coefficients to the T2w image, it is only necessary to align one DWI image to the T2w image and apply the same alignment to the other DWI image.
  • the image not selected as the reference may be gradually deformed, and the image not selected as the reference may be gradually aligned with the image selected as the reference.
  • Deforming the image not selected as the reference may include enlarging or reducing the image not selected as the reference, stretching the image not selected as the reference in a preset direction, and rotating the image not selected as the reference by a preset angle.
  • to accelerate registration, the resolutions of the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image can be reduced.
  • for example, the resolution of each image may be reduced by one, two, and four times, producing three progressively lower resolutions, and image registration performed on the first, second, and third magnetic resonance images at each reduced resolution.
  • the final registration result is then obtained from the registration results at the three low resolutions (for example, by taking an average).
  • Step 103 With the pre-processed first magnetic resonance image as the first component, the pre-processed second magnetic resonance image as the second component, and the pre-processed third magnetic resonance image as the third component, fuse the pre-processed first magnetic resonance image, second magnetic resonance image, and third magnetic resonance image into a color image.
  • for example, the pre-processed first magnetic resonance image is taken as the R component (the red component), the pre-processed second magnetic resonance image as the G component (the green component), and the pre-processed third magnetic resonance image as the B component (the blue component), and the three pre-processed images are fused into an RGB color image.
  • alternatively, the pre-processed first magnetic resonance image is used as the Y component (luminance), the pre-processed second magnetic resonance image as the U component (the first chrominance), and the pre-processed third magnetic resonance image as the V component (the second chrominance), and the three pre-processed images are fused into a YUV color image.
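  • The channel fusion described above can be sketched as follows; the array shapes and the helper name `fuse_to_rgb` are illustrative:

```python
import numpy as np

def fuse_to_rgb(first, second, third):
    """Stack the three pre-processed, co-registered sequence images as
    the R, G and B components of one color image (shape H x W x 3),
    following the embodiment: first image -> R, second -> G, third -> B."""
    assert first.shape == second.shape == third.shape
    return np.stack([first, second, third], axis=-1)

# e.g. T2w as R, high-b DWI as G, b=0 DWI as B
t2w = np.zeros((168, 168))
dwi_b1000 = np.ones((168, 168))
dwi_b0 = np.full((168, 168), 0.5)
color = fuse_to_rgb(t2w, dwi_b1000, dwi_b0)
```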
  • Step 104 Divide the color image into a plurality of blocks of a preset size.
  • the color image can be divided in a preset order, for example, from top to bottom and left to right.
  • each block obtained by the division has the preset size (that is, the size of the input image received by the convolutional neural network model in step 105), for example, 21 * 21.
  • the divided blocks do not overlap each other.
  • for example, if the color image size is 168 * 168, the color image is divided into 64 non-overlapping blocks, each of size 21 * 21.
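  • The block division can be sketched as follows; the helper name `divide_into_blocks` is illustrative:

```python
import numpy as np

def divide_into_blocks(color_image, block=21):
    """Divide the color image into non-overlapping blocks of a preset
    size, scanning top-to-bottom and left-to-right. Also return the
    center coordinate of each block, since the model predicts the
    lesion probability of each block's center point."""
    h, w = color_image.shape[:2]
    blocks, centers = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blocks.append(color_image[y:y + block, x:x + block])
            centers.append((y + block // 2, x + block // 2))
    return blocks, centers

image = np.zeros((168, 168, 3))
blocks, centers = divide_into_blocks(image)   # 168 / 21 = 8 blocks per side
```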
  • Step 105 Use the trained convolutional neural network model to predict each block of the color image, and obtain the lesion probability of the center point of each block, wherein the convolutional neural network model uses a labeled lesion area Image for training.
  • the convolutional neural network model may include a convolutional layer, a maximum pooling layer, and an output layer.
  • for example, the layers of the convolutional neural network model from front to back are: a convolutional layer, a convolutional layer, a maximum pooling layer, a convolutional layer, a maximum pooling layer, and a convolutional layer.
  • the output of the convolutional neural network model (that is, the output of the output layer) is the probability that the center point of the input image is the lesion area.
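  • This layer sequence is consistent with a per-center-point output if, for example, the convolutions are unpadded 3 * 3 and the poolings 2 * 2; these kernel sizes are assumptions, since the text names only the layer order. Under them, a 21 * 21 block collapses to a single spatial position:

```python
def trace_spatial_size(size, layers):
    """Trace the spatial size of the feature map through the layer
    sequence. Assumes 3x3 unpadded convolutions and 2x2 max pooling
    with stride 2; the patent specifies only the layer order."""
    for kind in layers:
        if kind == "conv3":    # 3x3 convolution, no padding: size - 2
            size = size - 2
        elif kind == "pool2":  # 2x2 max pooling, stride 2: size // 2
            size = size // 2
    return size

layers = ["conv3", "conv3", "pool2", "conv3", "pool2", "conv3"]
final = trace_spatial_size(21, layers)
# 21 -> 19 -> 17 -> 8 -> 6 -> 3 -> 1: the 21x21 block reduces to one
# spatial position, matching a single probability for the center point.
```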
  • the loss function used in training the convolutional neural network model can be defined as the cross-entropy loss = -[y log y' + (1 - y) log(1 - y')], where y' is the lesion probability of the center point of the training sample predicted by the convolutional neural network model (that is, the predicted probability that the center point of the training sample belongs to the lesion area), and y is the label, with value 0 or 1: the label is 1 if the center point of the sample lies in a lesion and 0 if it does not.
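  • A loss of this kind (a 0/1 label scored against a predicted probability) matches the standard binary cross-entropy; a sketch, assuming that form:

```python
import math

def bce_loss(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy between the 0/1 label y and the predicted
    lesion probability y': -[y log y' + (1 - y) log(1 - y')]."""
    y_pred = min(max(y_pred, eps), 1 - eps)  # clamp to avoid log(0)
    return -(y_true * math.log(y_pred) + (1 - y_true) * math.log(1 - y_pred))

loss_good = bce_loss(1, 0.9)   # confident correct prediction: small loss
loss_bad = bce_loss(1, 0.1)    # confident wrong prediction: large loss
```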
  • Convolutional neural network models can be trained using neural network training algorithms, such as back-propagation algorithms.
  • the convolutional neural network model can be trained using an adadelta algorithm.
  • the neural network training algorithm is a well-known technology and will not be repeated here.
  • the convolutional neural network model is trained using images labeled with diseased areas.
  • the image labeled with a lesion area may be a color image obtained through steps 101-103: a pre-processed first magnetic resonance training image is used as the first component, a pre-processed second magnetic resonance training image as the second component, and a pre-processed third magnetic resonance training image as the third component; the pre-processed first, second, and third magnetic resonance training images are fused into a color training image, and the color training image is labeled with a lesion area to obtain the image labeled with the lesion area.
  • Convolutional neural network models can be trained using multiple images labeled with lesion areas. For each such image, square areas of a preset size (for example, 21 * 21, the same size as the blocks in step 104) are extracted from the image, and the extracted square areas are used as training samples for the convolutional neural network model.
  • the training samples may include positive training samples and negative training samples.
  • for each image marked with a lesion area, a number of points (for example, 5000 in total) are selected from the non-lesion area and the lesion area of the image, and the square area centered on each selected point is extracted from the image.
  • if a selected point lies in the lesion area, the corresponding square area is a positive training sample of the convolutional neural network model; if it lies in the non-lesion area, the corresponding square area is a negative training sample.
  • for example, for each image labeled with a lesion area, N (for example, 2500) points are selected from each of the non-lesion area and the lesion area, 2N points in total. Thus, each image labeled with a lesion area yields N positive and N negative training samples.
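  • A sketch of this patch sampling; the point-selection rule is simplified to uniform sampling inside and outside a toy lesion mask, and the helper names are illustrative:

```python
import numpy as np

def sample_training_patches(color_image, lesion_mask, n_points, patch=21, seed=0):
    """Pick n_points centers inside the lesion area and n_points outside
    it, and cut a patch x patch square around each. Squares centered in
    the lesion area become positive samples (label 1), the rest negative
    (label 0)."""
    rng = np.random.default_rng(seed)
    half = patch // 2
    h, w = lesion_mask.shape
    # only consider centers far enough from the border to fit a full square
    ys, xs = np.mgrid[half:h - half, half:w - half]
    inside = lesion_mask[ys, xs].astype(bool)
    samples = []
    for want_lesion, label in ((True, 1), (False, 0)):
        cand = np.argwhere(inside == want_lesion)
        pick = cand[rng.choice(len(cand), size=n_points, replace=False)]
        for cy, cx in pick:
            y, x = cy + half, cx + half  # back to image coordinates
            square = color_image[y - half:y + half + 1, x - half:x + half + 1]
            samples.append((square, label))
    return samples

img = np.zeros((100, 100, 3))
mask = np.zeros((100, 100), dtype=np.uint8)
mask[40:60, 40:60] = 1                      # toy lesion area
samples = sample_training_patches(img, mask, n_points=5)
```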
  • the points in the non-lesion area and the lesion area of an image labeled with a lesion area may be selected according to a predetermined rule. For example, a neighboring area of the lesion area of the image may be determined and a first number (for example, N/4) of points selected in it; a similar area of the lesion area may be determined and a second number (for example, N/2) of points selected in it; and a non-relevant area of the lesion area may be determined and a first number (for example, N/4) of points selected in it.
  • the neighboring area, the similar area, and the non-relevant area together constitute the entire non-lesion area of the image.
  • the adjacent area may be an area within a preset range (for example, within 1 cm) outside the diseased area.
  • the similar area may be an area whose pixel values meet a preset condition (for example, an area where the G component exceeds 2).
  • for example, the lesion area may be morphologically dilated by the preset range to obtain the neighboring area.
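  • The dilation-based construction of the neighboring area can be sketched as follows; a pixel radius stands in for the physical range (for example, 1 cm after accounting for pixel spacing), and the disk-shaped structuring element is an assumption:

```python
import numpy as np

def neighboring_area(lesion_mask, radius):
    """Morphologically dilate the lesion area by `radius` pixels and
    subtract the lesion itself, leaving the ring of tissue within the
    preset range just outside the lesion."""
    h, w = lesion_mask.shape
    dilated = np.zeros_like(lesion_mask)
    ys, xs = np.nonzero(lesion_mask)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy * dy + dx * dx <= radius * radius:  # disk structuring element
                ny = np.clip(ys + dy, 0, h - 1)
                nx = np.clip(xs + dx, 0, w - 1)
                dilated[ny, nx] = 1
    return dilated & ~lesion_mask.astype(bool)

mask = np.zeros((20, 20), dtype=np.uint8)
mask[8:12, 8:12] = 1
ring = neighboring_area(mask, radius=2)   # 2-pixel ring around the lesion
```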
  • Step 106 Determine whether the preset part is a diseased part and determine a diseased position according to a diseased probability of a center point of each block in the color image.
  • in one embodiment, if the lesion probability of the center point of any block in the color image is greater than or equal to a preset threshold (for example, 0.5), it is determined that the preset part is a lesion site, and the positions of the center points whose lesion probability is greater than or equal to the preset threshold are the lesion positions of the preset part. Otherwise, if the lesion probability of the center point of every block in the color image is less than the preset threshold, it is determined that the preset part is not a lesion site.
  • in another embodiment, if the number of block center points whose lesion probability is greater than or equal to the preset threshold (for example, 0.5) is greater than a first preset number (for example, 5), it is determined that the preset part is a lesion site, and the positions of the center points whose lesion probability is greater than or equal to the preset threshold are the lesion positions of the preset part; otherwise, the preset part is determined to be a non-lesion part.
  • in yet another embodiment, if the number of adjacent block center points whose lesion probability is greater than or equal to the preset threshold (for example, 0.5) is greater than a second preset number (for example, 3), it is determined that the preset part is a lesion site, and the positions of those center points are the lesion positions of the preset part; otherwise, the preset part is not a lesion part.
  • the first preset number and the second preset number may be the same or different.
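  • The count-based judgment criterion can be sketched as follows; the flat probability list and the helper name are illustrative:

```python
def judge_lesion(probs, threshold=0.5, first_preset_number=5):
    """Decide whether the preset part is a lesion site from the per-block
    center-point probabilities, using the count-based criterion: the part
    is a lesion site if more than `first_preset_number` block centers have
    probability >= threshold; those centers give the lesion positions."""
    hits = [(i, p) for i, p in enumerate(probs) if p >= threshold]
    is_lesion = len(hits) > first_preset_number
    positions = [i for i, _ in hits] if is_lesion else []
    return is_lesion, positions

# 64 block-center probabilities for a 168x168 image divided into 21x21 blocks
probs = [0.1] * 58 + [0.9] * 6
is_lesion, positions = judge_lesion(probs)   # 6 hits > 5 -> lesion site
```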
  • Embodiment 1 obtains a first magnetic resonance image, a second magnetic resonance image, and a third magnetic resonance image by performing magnetic resonance scanning on a preset part of a human body with different magnetic resonance scanning sequences; pre-processes the first, second, and third magnetic resonance images; takes the pre-processed first magnetic resonance image as the first component, the pre-processed second magnetic resonance image as the second component, and the pre-processed third magnetic resonance image as the third component, and fuses them into a color image; divides the color image into a plurality of blocks of a preset size; predicts each block of the color image with the trained convolutional neural network model to obtain the lesion probability of the center point of each block, wherein the convolutional neural network model is trained using images labeled with lesion areas; and judges whether the preset part is a diseased part and determines the lesion position according to the lesion probability of the center point of each block in the color image.
  • the method for identifying a lesion site in Embodiment 1 identifies the lesion site using images of different sequences (that is, the first, second, and third magnetic resonance images obtained with different magnetic resonance scanning sequences); compared with identifying a lesion site from a single-sequence image (that is, a magnetic resonance image obtained with a single scanning sequence), the method improves the accuracy of identifying the lesion site.
  • the convolutional neural network model of the method in Embodiment 1 predicts the lesion probability of each block's center point from the blocks of the fused color image; compared with predicting a lesion probability for every single pixel in the image, the method improves detection efficiency. The method therefore realizes fast and accurate identification of lesion sites.
  • FIG. 3 is a structural diagram of a lesion site identification device provided in Embodiment 2 of the present application.
  • the lesion site identification device 10 may include: an obtaining unit 301, a preprocessing unit 302, a fusion unit 303, a segmentation unit 304, a prediction unit 305, and a determination unit 306.
  • the obtaining unit 301 is configured to obtain a first magnetic resonance image, a second magnetic resonance image, and a third magnetic resonance image obtained by performing magnetic resonance scanning on a preset part of a human body by using different magnetic resonance scanning sequences.
  • MRI (Magnetic Resonance Imaging) is one of the most commonly used medical imaging modalities. It is a type of tomography: magnetic resonance phenomena are used to obtain electromagnetic signals from the human body, from which anatomical information is reconstructed to produce the MRI image.
  • the lesion site identification device may be used to detect colorectal cancer (which may be rectal cancer or colon cancer) and locate the cancerous site of the large intestine.
  • the preset site is the large intestine.
  • the preset part may be other parts or organs of the human body, and the diseased part recognition device may be applied to detect lesions in other parts or organs of the human body.
  • MRI is a multi-parameter imaging modality.
  • The contrast of an image is related to the number of hydrogen protons in the tissue (i.e., the human tissue being imaged), the tissue's T1 and T2 times, and fluid flow speed; applying different MRI scanning sequences yields images that emphasize these factors differently. Images of the same anatomical location (i.e., the same slice) acquired with different magnetic resonance scanning sequences therefore provide different parameter information about the tissue and can be used to identify the lesion site.
  • The first magnetic resonance image may be a T2w (T2-weighted) image;
  • the second magnetic resonance image may be a diffusion-weighted imaging (DWI) image acquired at a first diffusion sensitivity coefficient;
  • the third magnetic resonance image may be a DWI image acquired at a second diffusion sensitivity coefficient. It should be noted that the first, second, and third magnetic resonance images are all obtained by scanning the same anatomical position (i.e., the same slice) of the preset part.
  • T2w imaging reflects differences in T2 relaxation (transverse relaxation) between tissues. The longer a tissue's T2, the slower the signal recovery and the stronger the signal (the image appears white); the shorter the T2, the faster the recovery and the weaker the signal (the image appears black). A T2w image therefore indicates what kind of substance is present at each position in the image.
  • DWI is an imaging method built on the flow-void effect, one of the elements of MR imaging; it reflects the microscopic motion of water molecules in living tissue in a macroscopic image. Diffusion-weighted imaging observes the microscopic flow and diffusion of water molecules.
  • The diffusion sensitivity coefficient, also called the b-value, characterizes the timing, amplitude, and shape of the gradient magnetic field applied during the magnetic resonance scan. A magnetic resonance scanner can obtain several DWI images with different b-values of one slice at the same time.
  • The first diffusion sensitivity coefficient may be a high b-value and the second diffusion sensitivity coefficient a low b-value; for example, the first diffusion sensitivity coefficient is 1000 and the second is 0, the unit of the coefficient being mm²/s.
  • Alternatively, the first, second, and third magnetic resonance images may be images obtained by scanning the preset part with other magnetic resonance scanning sequences.
  • The lesion recognition device 10 may be included in a computer device, and the computer device may receive the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image from another computing device (such as a server on which the three images are stored in advance).
  • the computer device may control the magnetic resonance equipment to scan a preset part of the human body to obtain a first magnetic resonance image, a second magnetic resonance image, and a third magnetic resonance image.
  • Alternatively, the first, second, and third magnetic resonance images may be stored in advance in the memory of the computer device, and the computer device reads the three images from the memory.
  • the preprocessing unit 302 is configured to preprocess the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image.
  • Pre-processing the first, second, and third magnetic resonance images may include standardizing the three images and performing image registration on them.
  • The three images may be standardized based on the mean and standard deviation of their pixel values. For any two of the three images, the mutual information of the two images may be calculated and maximized, thereby registering the two images.
  • Here a and b denote ranges of pixel values (usually gray values) in image A and image B respectively; #a is the number of pixels of image A whose value falls in range a, and #b the number of pixels of image B whose value falls in range b; #A and #B are the total pixel counts of images A and B; p(a) = #a / #A is the probability that a pixel of image A falls in range a, and p(b) = #b / #B the probability that a pixel of image B falls in range b.
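The histogram-based mutual information these definitions describe can be sketched in a few lines of numpy (a minimal illustration; the bin count and the log base are free choices not fixed by the text):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram estimate of I(A;B) = H(A) + H(B) - H(A,B) in bits.

    p(a) = #a / #A and p(b) = #b / #B are the marginal probabilities of a
    pixel value falling in bin a (resp. b), as defined in the text above.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()   # joint distribution p(a, b)
    p_a = p_ab.sum(axis=1)       # marginal p(a)
    p_b = p_ab.sum(axis=0)       # marginal p(b)

    def entropy(p):
        p = p[p > 0]             # ignore empty bins: 0 * log 0 = 0
        return -(p * np.log2(p)).sum()

    return entropy(p_a) + entropy(p_b) - entropy(p_ab.ravel())
```

Registration then searches over candidate transforms of one image and keeps the transform that maximizes this quantity against the reference image.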
  • Registration can also include:
  • selecting a first reference point on the first magnetic resonance image, a second reference point on the second magnetic resonance image, and a third reference point on the third magnetic resonance image, where the first, second, and third reference points are points at the same position of the preset part;
  • One of the first, second, and third magnetic resonance images may be selected as the reference, and the images not selected as the reference are aligned to the image selected as the reference.
  • For example, where the first magnetic resonance image is a T2w image and the second and third magnetic resonance images are DWI images with different diffusion sensitivity coefficients, the T2w image can be selected as the reference and the two DWI images aligned to it.
  • DWI images with different diffusion sensitivity coefficients can be acquired in the same scan; therefore, when aligning the two DWI images to the T2w image, it suffices to align one DWI image to the T2w image and apply the same alignment to the other.
  • During registration, the image not selected as the reference may be gradually deformed so that it progressively aligns with the image selected as the reference.
  • Deforming the image not selected as the reference may include enlarging or reducing it, stretching it in a preset direction, or rotating it by a preset angle.
  • To speed up registration, the resolutions of the first, second, and third magnetic resonance images can be reduced.
  • For example, registration may be performed at three reduced scales: the three images with resolution reduced by one time are registered, the three images with resolution reduced by two times are registered, and the three images with resolution reduced by four times are registered.
  • The final registration result is then obtained by combining (for example, averaging) the registration results at the three reduced resolutions.
  • The fusion unit 303 is configured to fuse the pre-processed first, second, and third magnetic resonance images into a color image, with the pre-processed first magnetic resonance image as the first component, the pre-processed second magnetic resonance image as the second component, and the pre-processed third magnetic resonance image as the third component.
  • For example, the pre-processed first magnetic resonance image is taken as the R component (red), the pre-processed second magnetic resonance image as the G component (green), and the pre-processed third magnetic resonance image as the B component (blue), and the three pre-processed images are fused into an RGB color image.
  • Alternatively, the pre-processed first magnetic resonance image is used as the Y component (luminance), the pre-processed second magnetic resonance image as the U component (first chrominance), and the pre-processed third magnetic resonance image as the V component (second chrominance), and the three pre-processed images are fused into a YUV color image.
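A minimal numpy sketch of the RGB fusion described above (the per-channel rescaling to 0–255 is an assumption made here for illustration; the text only specifies which image becomes which component):

```python
import numpy as np

def fuse_to_rgb(t2w, dwi_high_b, dwi_low_b):
    """Fuse three registered same-slice MR images into one RGB image.

    R <- T2w, G <- DWI (high b-value), B <- DWI (low b-value); each channel
    is rescaled to 0..255 independently so that no sequence dominates.
    """
    def to_uint8(img):
        img = img.astype(float)
        span = img.max() - img.min()
        if span == 0:  # constant image -> all-zero channel
            return np.zeros_like(img, dtype=np.uint8)
        return ((img - img.min()) / span * 255).astype(np.uint8)

    return np.stack(
        [to_uint8(t2w), to_uint8(dwi_high_b), to_uint8(dwi_low_b)], axis=-1)
```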
  • a dividing unit 304 is configured to divide the color image into a plurality of blocks of a preset size.
  • Color images can be divided in a preset direction. For example, color images are segmented in order from top to bottom and left to right.
  • Each block obtained by the segmentation has a preset size (that is, the size of the image received by the convolutional neural network model in step 105), for example, 21 * 21.
  • the divided blocks do not overlap each other.
  • For example, if the color image is 168×168, it is divided into 64 non-overlapping blocks of 21×21 each.
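The top-to-bottom, left-to-right division into non-overlapping blocks can be sketched as follows (the function name and the returned center coordinates are illustrative, not from the text):

```python
import numpy as np

def split_into_blocks(image, size=21):
    """Split an H x W x C image into non-overlapping size x size blocks,
    scanned top-to-bottom, left-to-right; H and W must be multiples of size."""
    h, w = image.shape[:2]
    assert h % size == 0 and w % size == 0
    blocks, centers = [], []
    for i in range(0, h, size):
        for j in range(0, w, size):
            blocks.append(image[i:i + size, j:j + size])
            centers.append((i + size // 2, j + size // 2))  # block centre pixel
    return blocks, centers
```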
  • The prediction unit 305 is configured to use the trained convolutional neural network model to predict each block of the color image, obtaining the lesion probability of the center point of each block, wherein the convolutional neural network model is trained using images annotated with lesion areas.
  • the convolutional neural network model may include a convolutional layer, a maximum pooling layer, and an output layer.
  • From front to back, the convolutional neural network model consists of: a convolutional layer, a convolutional layer, a max pooling layer, a convolutional layer, a max pooling layer, and a convolutional layer, followed by the output layer.
  • the output of the convolutional neural network model (that is, the output of the output layer) is the probability that the center point of the input image is the lesion area.
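FIG. 2 is not reproduced here, so the exact layer widths are unknown; the following numpy sketch only illustrates the stated layer ordering (conv–conv–maxpool–conv–maxpool–conv, then a sigmoid output neuron) with arbitrary, untrained weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv(x, w, b):
    """Valid 3x3 convolution with ReLU; x: (H, W, Cin), w: (3, 3, Cin, Cout)."""
    k = w.shape[0]
    h, wd = x.shape[0] - k + 1, x.shape[1] - k + 1
    out = np.zeros((h, wd, w.shape[3]))
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.tensordot(x[i:i + k, j:j + k], w, axes=3) + b
    return np.maximum(out, 0.0)

def maxpool(x, s=2):
    """2x2 max pooling, stride 2 (any trailing row/column is dropped)."""
    h, wd = x.shape[0] // s, x.shape[1] // s
    out = np.zeros((h, wd, x.shape[2]))
    for i in range(h):
        for j in range(wd):
            out[i, j] = x[i * s:(i + 1) * s, j * s:(j + 1) * s].max(axis=(0, 1))
    return out

# Randomly initialised (untrained) weights for the layer chain.
w1, b1 = rng.normal(0, 0.1, (3, 3, 3, 8)), np.zeros(8)
w2, b2 = rng.normal(0, 0.1, (3, 3, 8, 8)), np.zeros(8)
w3, b3 = rng.normal(0, 0.1, (3, 3, 8, 16)), np.zeros(16)
w4, b4 = rng.normal(0, 0.1, (3, 3, 16, 16)), np.zeros(16)
w_out, b_out = rng.normal(0, 0.1, 16), 0.0

def predict_block(block):
    """block: (21, 21, 3) fused colour patch -> lesion probability of its centre."""
    x = conv(block, w1, b1)          # 21 -> 19
    x = conv(x, w2, b2)              # 19 -> 17
    x = maxpool(x)                   # 17 -> 8
    x = conv(x, w3, b3)              # 8  -> 6
    x = maxpool(x)                   # 6  -> 3
    x = conv(x, w4, b4)              # 3  -> 1
    z = x.ravel() @ w_out + b_out    # fully connected output neuron
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid -> probability in (0, 1)
```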
  • The loss function used to train the convolutional neural network model can be defined as follows:
  • y′ is the lesion probability of the training sample's center point (that is, the predicted probability that the center point belongs to a lesion area) output by the convolutional neural network model for the training sample, and y is the label, taking the value 0 or 1: 1 if the sample's center point lies in a lesion area, 0 otherwise.
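The formula itself did not survive extraction; given that y′ is a predicted probability and y ∈ {0, 1}, the loss is almost certainly the standard binary cross-entropy, which would read:

```latex
L(y, y') = -\bigl[\, y \log y' + (1 - y) \log (1 - y') \,\bigr]
```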
  • Convolutional neural network models can be trained using neural network training algorithms, such as back-propagation algorithms.
  • For example, the convolutional neural network model can be trained using the Adadelta algorithm.
  • the neural network training algorithm is a well-known technology and will not be repeated here.
  • the convolutional neural network model is trained using images labeled with diseased areas.
  • the image labeled with the lesion area may be a color image obtained by the above-mentioned units 301-303.
  • Specifically, the obtaining unit 301 obtains a first magnetic resonance training image, a second magnetic resonance training image, and a third magnetic resonance training image, acquired by performing magnetic resonance scans on a preset part of the human body using different magnetic resonance scanning sequences;
  • the preprocessing unit 302 pre-processes the first, second, and third magnetic resonance training images;
  • the fusion unit 303 takes the pre-processed first magnetic resonance training image as the first component, the pre-processed second magnetic resonance training image as the second component, and the pre-processed third magnetic resonance training image as the third component, and fuses the three into a color training image, on which the lesion area is then marked.
  • The convolutional neural network model can be trained using multiple images annotated with lesion areas. For each such image, square regions of a preset size (for example 21×21, the same size as the blocks produced by the dividing unit 304) are extracted from the image, and the extracted square regions serve as training samples for the convolutional neural network model.
  • the training samples may include positive training samples and negative training samples.
  • For each image annotated with a lesion area, several points (for example, 5000 in total) are selected from the lesion area and the non-lesion area of the image, and a square region of the preset size is obtained on the image around each selected point.
  • If the selected point lies in the lesion area, the corresponding square region is a positive training sample of the convolutional neural network model; if it lies in the non-lesion area, the corresponding square region is a negative training sample.
  • For example, for each image annotated with a lesion area, N (for example, 2500) points are selected from the lesion area and N from the non-lesion area, 2N points in total; each annotated image therefore yields N positive and N negative training samples.
  • The points may be selected from the non-lesion area and the lesion area of an annotated image according to predetermined rules.
  • For example, an adjacent area of the image's lesion area may be determined and a first number (for example, N/4) of points selected in it; a similar area of the lesion area may be determined and a second number (for example, N/2) of points selected in it; and a non-relevant area of the lesion area may be determined and a third number (for example, N/4) of points selected in it.
  • The adjacent area, the similar area, and the non-relevant area together constitute the entire non-lesion area of the image.
  • the adjacent area may be an area within a preset range (for example, within 1 cm) outside the diseased area.
  • The similar area may be an area whose pixel values meet a preset condition (for example, an area where the G component exceeds 2).
  • For example, the lesion area may be morphologically dilated by the preset range to obtain the adjacent area.
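A simplified sketch of the positive/negative patch extraction (uniform random centers only; the adjacent/similar/non-relevant weighting described above is omitted here, and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_patches(image, lesion_mask, n_points=50, size=21):
    """Draw size x size training patches around randomly selected centre
    points; a patch is a positive sample when its centre lies inside the
    lesion mask, and a negative sample otherwise."""
    half = size // 2
    h, w = lesion_mask.shape
    positives, negatives = [], []
    while len(positives) + len(negatives) < n_points:
        y = rng.integers(half, h - half)   # keep the patch inside the image
        x = rng.integers(half, w - half)
        patch = image[y - half:y + half + 1, x - half:x + half + 1]
        (positives if lesion_mask[y, x] else negatives).append(patch)
    return positives, negatives
```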
  • the judging unit 306 is configured to judge whether the preset part is a diseased part and determine a diseased position according to a diseased probability of a center point of each block in the color image.
  • In a first implementation, if the lesion probability of the center point of any block in the color image is greater than or equal to a preset threshold (for example, 0.5), the preset part is determined to be a lesion part, and the positions of the center points whose lesion probability reaches the threshold are the lesion positions of the preset part. Otherwise, if the lesion probabilities of the center points of all blocks are below the threshold, the preset part is determined not to be a lesion part.
  • In a second implementation, if the number of block center points whose lesion probability is greater than or equal to the preset threshold (for example, 0.5) exceeds a first preset number (for example, 5), the preset part is determined to be a lesion part, and the positions of those center points are the lesion positions of the preset part; otherwise, the preset part is determined to be a non-lesion part.
  • In a third implementation, if the number of adjacent blocks whose center points have a lesion probability greater than or equal to the preset threshold (for example, 0.5) exceeds a second preset number (for example, 3), the preset part is determined to be a lesion part, and the positions of those center points are the lesion positions of the preset part; otherwise, the preset part is not a lesion part.
  • the first preset number and the second preset number may be the same or different.
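The count-based judging scheme (confident center points versus a preset number) can be sketched as follows (function and parameter names are illustrative):

```python
def judge_lesion(center_probs, threshold=0.5, min_count=5):
    """center_probs: list of ((row, col), probability) pairs, one per block
    centre. Declares the preset part a lesion part when more than
    `min_count` centres reach `threshold`; those centres are the lesion
    positions, otherwise no lesion positions are reported."""
    hits = [pos for pos, p in center_probs if p >= threshold]
    if len(hits) > min_count:
        return True, hits
    return False, []
```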
  • In summary, in Embodiment 2 the first, second, and third magnetic resonance images, obtained by scanning a preset part of the human body with different magnetic resonance scanning sequences, are acquired and pre-processed; with the pre-processed first, second, and third magnetic resonance images as the first, second, and third components, they are fused into a color image; the color image is divided into multiple blocks of a preset size; the trained convolutional neural network model, trained on images annotated with lesion areas, predicts each block to obtain the lesion probability of each block's center point; and whether the preset part is a lesion part, together with the lesion position, is determined from the lesion probabilities of the block center points.
  • Compared with a device that identifies the lesion part from a single-sequence image (i.e., a magnetic resonance image obtained from a single scanning sequence), the lesion site identification device of Embodiment 2 uses images of different sequences (i.e., the first, second, and third magnetic resonance images obtained with different magnetic resonance scanning sequences), which improves the accuracy of lesion identification.
  • The convolutional neural network model of the device of Embodiment 2 predicts the lesion probability of the center point of each block of the fused color image; compared with predicting a lesion probability for every single pixel, this improves detection efficiency. The device therefore achieves fast and accurate lesion identification.
  • FIG. 4 is a schematic diagram of a computer device according to a third embodiment of the present application.
  • the computer device 1 includes a memory 20, a processor 30, and computer-readable instructions 40 stored in the memory 20 and executable on the processor 30, such as a lesion recognition program.
  • When the processor 30 executes the computer-readable instructions 40, the steps of the method embodiment for identifying a lesion site are implemented, for example steps 101-106 shown in FIG. 1.
  • Alternatively, when the processor 30 executes the computer-readable instructions 40, the functions of each module/unit of the foregoing device embodiment are implemented, for example units 301-306 in FIG. 3.
  • the computer-readable instructions 40 may be divided into one or more modules / units, the one or more modules / units are stored in the memory 20 and executed by the processor 30, To complete this application.
  • the one or more modules / units may be a series of computer-readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 40 in the computer device 1.
  • For example, the computer-readable instructions 40 can be divided into the obtaining unit 301, preprocessing unit 302, fusion unit 303, dividing unit 304, prediction unit 305, and judging unit 306 of FIG. 3; for the specific functions of each unit, refer to Embodiment 2.
  • The computer device 1 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server.
  • A person skilled in the art will understand that FIG. 4 is only an example of the computer device 1 and does not constitute a limitation on it; the device may include more or fewer components than shown, combine certain components, or use different components. For example, the computer device 1 may further include input/output devices, network access devices, a bus, and the like.
  • The processor 30 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor. The processor 30 is the control center of the computer device 1, connecting the various parts of the whole computer device 1 through various interfaces and lines.
  • The memory 20 may be configured to store the computer-readable instructions 40 and/or modules/units; the processor 30 implements the various functions of the computer device 1 by running or executing the computer-readable instructions and/or modules/units stored in the memory 20 and by calling the data stored in the memory 20.
  • The memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the computer device 1 (such as audio data or a phone book).
  • In addition, the memory 20 may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash memory card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
  • When the modules/units integrated in the computer device 1 are implemented in the form of software functional units and sold or used as independent products, they can be stored in a non-volatile readable storage medium. Based on this understanding, all or part of the processes of the above method embodiments of this application may also be completed by computer-readable instructions instructing the related hardware.
  • The computer-readable instructions may be stored in a non-volatile readable storage medium, and when executed by a processor, implement the steps of the foregoing method embodiments.
  • The computer-readable instructions comprise computer-readable instruction code, which may be in source code form, object code form, an executable file, some intermediate form, and so on.
  • The non-volatile readable medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, or a software distribution medium.
  • It should be noted that the content contained in the non-volatile readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the non-volatile readable medium does not include electric carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A lesion site identification method, comprising: obtaining a first magnetic resonance image, a second magnetic resonance image, and a third magnetic resonance image acquired by performing magnetic resonance scans of a preset part using different magnetic resonance scanning sequences; pre-processing the first, second, and third magnetic resonance images; fusing the pre-processed first, second, and third magnetic resonance images into a color image; dividing the color image into multiple blocks of a preset size; predicting, with a trained convolutional neural network model, the lesion probability of the center point of each block; and judging, from the lesion probabilities of the block center points, whether the preset part is a lesion part and determining the lesion position. This application also provides a lesion site identification device, a computer device, and a readable storage medium, and enables fast and accurate lesion site identification.

Description

Lesion site identification method and device, computer device, and readable storage medium

This application claims priority to the Chinese patent application filed with the China Patent Office on May 23, 2018, with application number 201810503241.6 and the title "Lesion site identification method and device, computer device and readable storage medium", the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the technical field of image processing, and in particular to a method and device for identifying a lesion site in magnetic resonance images, a computer device, and a readable storage medium.

Background

Rectal cancer refers to cancer between the dentate line and the rectosigmoid junction; it is one of the most common malignant tumors of the digestive tract, and its incidence is gradually rising among young people. At present, rectal cancer is mainly diagnosed by doctors analyzing MRI (Magnetic Resonance Imaging) images. However, such diagnosis consumes considerable manpower and resources, and the result depends largely on the doctor's level of expertise. In recent years, deep learning has developed rapidly in many fields; how to use deep learning to achieve highly accurate lesion site identification has become an urgent problem.
Summary

In view of the above, it is necessary to provide a lesion site identification method and device, a computer device, and a readable storage medium that enable fast and accurate lesion site identification.

A first aspect of this application provides a lesion site identification method, the method comprising:

obtaining a first magnetic resonance image, a second magnetic resonance image, and a third magnetic resonance image acquired by performing magnetic resonance scans of a preset part of a human body using different magnetic resonance scanning sequences;

pre-processing the first magnetic resonance image, the second magnetic resonance image, and the third magnetic resonance image;

fusing the pre-processed first, second, and third magnetic resonance images into a color image, with the pre-processed first magnetic resonance image as the first component, the pre-processed second magnetic resonance image as the second component, and the pre-processed third magnetic resonance image as the third component;

dividing the color image into multiple blocks of a preset size;

predicting each block of the color image with a trained convolutional neural network model to obtain the lesion probability of the center point of each block, the convolutional neural network model being trained on images annotated with lesion areas;

judging, according to the lesion probability of the center point of each block in the color image, whether the preset part is a lesion part and determining the lesion position.
A second aspect of this application provides a lesion site identification device, the device comprising:

an obtaining unit configured to obtain a first magnetic resonance image, a second magnetic resonance image, and a third magnetic resonance image acquired by performing magnetic resonance scans of a preset part of a human body using different magnetic resonance scanning sequences;

a preprocessing unit configured to pre-process the first, second, and third magnetic resonance images;

a fusion unit configured to fuse the pre-processed first, second, and third magnetic resonance images into a color image, with the pre-processed first magnetic resonance image as the first component, the pre-processed second magnetic resonance image as the second component, and the pre-processed third magnetic resonance image as the third component;

a dividing unit configured to divide the color image into multiple blocks of a preset size;

a prediction unit configured to predict each block of the color image with a trained convolutional neural network model to obtain the lesion probability of the center point of each block, the convolutional neural network model being trained on images annotated with lesion areas;

a judging unit configured to judge, according to the lesion probability of the center point of each block in the color image, whether the preset part is a lesion part and to determine the lesion position.

A third aspect of this application provides a computer device comprising a memory and a processor, the memory storing at least one computer-readable instruction which, when executed by the processor, implements the lesion site identification method.

A fourth aspect of this application provides a non-volatile readable storage medium on which at least one computer-readable instruction is stored which, when executed by a processor, implements the lesion site identification method.

This application obtains a first, a second, and a third magnetic resonance image acquired by performing magnetic resonance scans of a preset part of a human body using different magnetic resonance scanning sequences; pre-processes the three images; fuses the pre-processed first, second, and third magnetic resonance images into a color image, with them as the first, second, and third components respectively; divides the color image into multiple blocks of a preset size; predicts each block with a trained convolutional neural network model to obtain the lesion probability of each block's center point, the model being trained on images annotated with lesion areas; and judges from the lesion probabilities of the block center points whether the preset part is a lesion part while determining the lesion position.

This application identifies the lesion site using images of different sequences (i.e., the first, second, and third magnetic resonance images obtained with different magnetic resonance scanning sequences); compared with identification from a single-sequence image (i.e., a magnetic resonance image from a single scanning sequence), this improves the accuracy of lesion site identification. Moreover, the convolutional neural network model of this application predicts the lesion probability of a block's center point from each block of the fused color image; compared with predicting a probability for each single pixel, this improves detection efficiency. This application therefore achieves fast and accurate lesion site identification.
Brief Description of the Drawings

FIG. 1 is a flowchart of the lesion site identification method provided in Embodiment 1 of this application.

FIG. 2 is a schematic structural diagram of the convolutional neural network model used in this application.

FIG. 3 is a structural diagram of the lesion site identification device provided in Embodiment 2 of this application.

FIG. 4 is a schematic diagram of the computer device provided in Embodiment 3 of this application.

Detailed Description

Many specific details are set forth in the following description to facilitate a full understanding of this application. The described embodiments are only some, not all, of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative work fall within the protection scope of this application.

Preferably, the lesion site identification method of this application is applied in one or more computer devices. A computer device is a device capable of automatically performing numerical calculation and/or information processing according to pre-set or stored instructions; its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), embedded devices, and the like.
实施例一
图1是本申请实施例一提供的病变部位识别方法的流程图。所述病变部位识别方法应用于计算机装置。所述病变部位识别方法根据不同序列磁共振图像进行病变部位识别,确定预设部位是否为病变部位并确定病变位置。
如图1所示,所述病变部位识别方法具体包括以下步骤:
步骤101,获取应用不同的磁共振扫描序列对人体的预设部位进行磁共振扫描得到的第一磁共振图像、第二磁共振图像与第三磁共振图像。
MRI(Magnetic Resonance Imaging,磁共振成像)图像是常用的医学图像之一,MRI成像是断层成像的一种,它利用磁共振现象从人体中获得电磁信号,并重建出人体信息,从而得到MRI图像。
在一具体实施例中,可以应用所述病变部位识别方法检测大肠癌(可以是直肠癌或结肠癌),定位大肠的癌变部位。在此应用场景中,所述预设部位是大肠。可以理解,在其他的场景中,所述预设部位可以是人体的其他部位或器官,可以应用所述病变部位识别对人体的其他部位或器官的病变进行检测。
MRI是一种多参数成像,图像的对比度与组织(即人体组织)所含的氢质子数、组织的T1和T2时间、液体流动速度有关,应用不同的磁共振扫描序列可以得到反映这些因素不同侧重点的图像。应用不同的磁共振扫描序列在同一解剖位置(即同一层面)上得到的不同图像可以提供组织的不同参数信息,可用来进行病变部位识别。在一较佳实施例中,第一磁共振图像可以是T2w(T2weighted,T2加权)图像,第二磁共振图像可以是第一弥散敏感系数下的DWI(diffusion-weighted imaging,弥散加权成像)图像,第三磁共振图像可以是第二弥散敏感系数下的DWI图像。需要说明的是,第一磁共振图像、第二磁共振图像与第三磁共振图像是对预设部位的同一解剖位置(即同一层面)扫描得到的图像。
T2w成像反映的是组织间T2弛豫(横向弛豫)的差别。组织的T2越长,恢复越慢,信号就越强(图像发白),组织的T2越短,恢复越快,信号就越弱(图像发黑)。根据T2w图像可以判断图像中的不同位置是什么物质。
DWI是建立在MR成像要素之一——流空效应上的一种成像方法,在宏观图像中反映活体组织水分子的微观运动。弥散加权成像观察的是微观的水分子流动扩散现象。弥散敏感系数也叫b值,表示磁共振扫描应用的梯度磁场的时间、幅度、形状。磁共振扫描设备可以在一个层面上同时得到多个不同b值的DWI图像。
在一具体实施例中，第一弥散敏感系数可以为高弥散敏感系数，第二弥散敏感系数可以为低弥散敏感系数。例如，第一弥散敏感系数为1000，第二弥散敏感系数为0，弥散敏感系数的单位为mm²/s。
可以理解,第一磁共振图像、第二磁共振图像与第三磁共振图像可以是 应用其他的磁共振扫描序列对预设部位进行磁共振扫描得到的图像。
获取第一磁共振图像、第二磁共振图像与第三磁共振图像的方式可以有多种。例如，应用所述病变部位识别方法的计算机装置可以从其他的计算设备（例如预先存储第一磁共振图像、第二磁共振图像与第三磁共振图像的服务器）接收第一磁共振图像、第二磁共振图像与第三磁共振图像。
或者,应用所述病变部位识别方法的计算机装置可以控制磁共振设备对人体预设部位进行扫描,得到第一磁共振图像、第二磁共振图像与第三磁共振图像。
或者,应用所述病变部位识别方法的计算机装置的存储器中可以预先存储第一磁共振图像、第二磁共振图像与第三磁共振图像,所述计算机装置从所述存储器中读取第一磁共振图像、第二磁共振图像与第三磁共振图像。
步骤102,对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理。
对第一磁共振图像、第二磁共振图像与第三磁共振图像的预处理可以包括对第一磁共振图像、第二磁共振图像与第三磁共振图像进行标准化,以及对第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准。
在一实施例中,可以基于第一磁共振图像、第二磁共振图像与第三磁共振图像的像素值的均值和标准差对第一磁共振图像、第二磁共振图像与第三磁共振图像进行标准化。具体地,对于第一磁共振图像、第二磁共振图像或第三磁共振图像,计算该图像的像素值的均值u和标准差e,对该图像的每个像素值进行如下转换:x′=(x-u)/e,其中x是原来的像素值,x′为标准化后的像素值。
可以理解,可以采用其他的标准化方法对第一磁共振图像、第二磁共振图像与第三磁共振图像进行标准化。图像标准化方法为公知技术,此处不再赘述。
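上述基于均值与标准差的标准化（即x′=(x-u)/e）可以用如下Python代码示意。这只是一个基于numpy的示例性草图，函数名standardize与示例数据均为本文假设，并非本申请的限定实现：

```python
import numpy as np

def standardize(image):
    """按 x' = (x - u) / e 对单幅磁共振图像做标准化。

    u 为该图像像素值的均值, e 为标准差。
    """
    u = image.mean()   # 均值 u
    e = image.std()    # 标准差 e
    return (image - u) / e

# 用法示例：对随机生成的"图像"做标准化，结果均值约为0、标准差约为1
img = np.random.rand(16, 16) * 255.0
out = standardize(img)
```

标准化后三幅不同序列的图像处于同一数值尺度，便于后续作为同一彩色图像的三个分量进行融合。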
应用不同的磁共振扫描序列对预设部位进行磁共振扫描是有时间间隔的，病人体位可能会发生位移。因此，需要对第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准，把三个图像的内容对应起来，也就是使第一磁共振图像、第二磁共振图像与第三磁共振图像的各个部分相对应。
在一实施例中,对于第一磁共振图像、第二磁共振图像与第三磁共振图像中的任意两个图像(例如第一磁共振图像与第二磁共振图像),可以计算两个图像的互信息,使两个图像的互信息最大,从而实现两个图像的图像配准。
图像A与图像B的互信息可以表示为：
MI(A,B)=∑_a∑_b p(a,b)·log[p(a,b)/(p(a)·p(b))]
p(a)=#a/#A
p(b)=#b/#B
p(a,b)=#(a,b)/#A
其中，a、b分别表示图像A、图像B中像素值（通常为灰度值）的范围，#a表示图像A中像素值属于范围a内的像素的个数，#b表示图像B中像素值属于范围b内的像素的个数，#(a,b)表示图像A与图像B中对应位置的像素值分别属于范围a与范围b的像素对的个数，#A、#B分别表示图像A、图像B的像素数，p(a)表示图像A中像素值属于范围a内的像素出现的概率，p(b)表示图像B中像素值属于范围b内的像素出现的概率。
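基于像素值直方图估计互信息的过程可以示意如下。这是一个示例性草图：假设两图像尺寸相同，函数名mutual_information为本文自拟；实际配准时可在不同形变参数下搜索使该互信息最大的参数：

```python
import numpy as np

def mutual_information(A, B, bins=32):
    """按像素值直方图估计图像A与图像B的互信息MI(A,B)。"""
    # 联合直方图，归一化后即联合概率 p(a,b)
    joint, _, _ = np.histogram2d(A.ravel(), B.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1)   # 边缘分布 p(a)，对应 #a/#A
    p_b = p_ab.sum(axis=0)   # 边缘分布 p(b)，对应 #b/#B
    mask = p_ab > 0          # 只对非零项求和，避免 log(0)
    outer = p_a[:, None] * p_b[None, :]
    return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / outer[mask])))

A = np.random.rand(64, 64)
mi_self = mutual_information(A, A)                       # 图像与自身完全对准时互信息最大
mi_rand = mutual_information(A, np.random.rand(64, 64))  # 与无关图像的互信息很小
```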
可以采用其他的图像配准方法对第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准。例如,可以在第一磁共振图像、第二磁共振图像与第三磁共振图像上各选取一个参照点,依据该参照点将第一磁共振图像、第二磁共振图像与第三磁共振图像配准,具体的,可以包括:
在第一磁共振图像上选取第一参照点,在第二磁共振图像上选取第二参照点,在第三磁共振图像上选取第三参照点,所述第一参照点、第二参照点与所述第三参照点是所述预设部位的相同位置上的点;
计算第一磁共振图像中各个像素点与所述第一参照点的相对坐标,计算第二磁共振图像中各个像素点与所述第二参照点的相对坐标,计算第三磁共振图像中各个像素点与所述第三参照点的相对坐标;
根据第一磁共振图像中各个像素点与所述第一参照点的相对坐标,计算第一磁共振图像的中心点,根据第二磁共振图像中各个像素点与所述第二参照点的相对坐标,计算第二磁共振图像的中心点,以及根据第三磁共振图像中各个像素点与所述第三参照点的相对坐标,计算第三磁共振图像的中心点;
将第一磁共振图像的中心点、第二磁共振图像的中心点和第三磁共振图像的中心点对齐。
在对第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准时,可以选择第一磁共振图像、第二磁共振图像与第三磁共振图像中的一个图像作为基准,使未选为基准的图像向选为基准的图像对准。
在一具体实施例中,第一磁共振图像是T2w图像,第二磁共振图像与第三磁共振图像是不同弥散敏感系数下的DWI图像,可以选择T2w图像作为基准,使两个DWI图像向T2w图像对准。不同弥散敏感系数下的DWI图像可以同时扫描得到,因此,在将两个不同弥散敏感系数下的DWI图像向T2w图像对准时,只需将一个DWI图像向T2w图像对准,另一个DWI图像进行同样的对准即可。
在使未选为基准的图像向选为基准的图像对准的过程中,可以对未选为基准的图像逐渐进行形变,使未选为基准的图像逐渐对准选为基准的图像。对未选为基准的图像进行形变可以包括将未选为基准的图像放大或缩小、将未选为基准的图像按照预设方向拉伸、将未选为基准的图像旋转预设角度。
在对第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准的过程中，可以减小第一磁共振图像、第二磁共振图像与第三磁共振图像的分辨率，使用多个较低分辨率的图像来进行图像配准，以增加配准的鲁棒性。例如，可以将第一磁共振图像、第二磁共振图像与第三磁共振图像的分辨率分别都减小一倍、二倍和四倍，将分辨率分别都减小一倍、二倍和四倍后的第一磁共振图像、第二磁共振图像与第三磁共振图像分别进行图像配准。也就是说，将分辨率减小一倍后的第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准，将分辨率减小二倍后的第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准，将分辨率减小四倍后的第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准。根据三次低分辨率图像的配准结果得到最终的配准结果（例如取平均值）。
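"降低分辨率、分别配准、对配准结果取平均"的多分辨率思路可以示意如下。这是示例性草图：downsample用平均池化降低分辨率；各分辨率下的配准输出此处用占位的平移量数据代替，均为本文假设：

```python
import numpy as np

def downsample(img, factor):
    """按 factor 做平均池化，得到较低分辨率的图像(假设边长可被 factor 整除)。"""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

img = np.arange(64, dtype=float).reshape(8, 8)
pyramid = {f: downsample(img, f) for f in (2, 4, 8)}   # 三个较低分辨率的图像

# 假设在每个低分辨率下配准得到一个平移量(此处为占位数据)，
# 最终配准结果取三次低分辨率配准结果的平均值
offsets = np.array([[2.0, 1.0], [2.2, 0.8], [1.8, 1.2]])
final_offset = offsets.mean(axis=0)
```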
步骤103,以预处理后的第一磁共振图像为第一分量,以预处理后的第二磁共振图像为第二分量,以预处理后的第三磁共振图像为第三分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像。
在一实施例中,将预处理后的第一磁共振图像作为R分量(即红色分量),将预处理后的第二磁共振图像作为G分量(即绿色分量),将预处理后的第三磁共振图像作为B分量(即蓝色分量),将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为RGB彩色图像。
在另一实施例中,将预处理后的第一磁共振图像作为Y分量(即亮度),将预处理后的第二磁共振图像作为U分量(即第一色度),将预处理后的第三磁共振图像作为V分量(即第二色度),将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为YUV彩色图像。
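将三幅单通道图像分别作为R、G、B分量融合为一幅彩色图像，可以用numpy的stack操作示意。以下为示例性草图，三幅"图像"为随机占位数据：

```python
import numpy as np

# 三幅已预处理(标准化、配准)的同尺寸单通道磁共振图像，此处用随机数据示意
t2w    = np.random.rand(168, 168)   # 第一分量(R)
dwi_hi = np.random.rand(168, 168)   # 第二分量(G)
dwi_lo = np.random.rand(168, 168)   # 第三分量(B)

# 沿最后一维堆叠，得到 H×W×3 的RGB彩色图像
color = np.stack([t2w, dwi_hi, dwi_lo], axis=-1)
```

融合为YUV彩色图像时做法相同，只是三个通道的语义换成Y、U、V。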
步骤104,将所述彩色图像分割为预设大小的多个区块。
可以按照预设方向对彩色图像进行分割。例如,按照从上到下,从左到右的顺序对彩色图像进行分割。
分割得到的每个区块为预设大小(即步骤105中卷积神经网络模型接收的图像的大小),例如21*21。
在一实施例中,分割得到的各个区块互不重叠。例如,彩色图像大小为168*168,将该彩色图像分割为64个互不重叠的区块,每个区块为21*21。
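按从上到下、从左到右的顺序把彩色图像分割为互不重叠的预设大小区块，可以示意如下（示例性草图，函数名split_blocks为本文自拟）：

```python
import numpy as np

def split_blocks(color, block=21):
    """把 H×W×3 的彩色图像按从上到下、从左到右的顺序分割为互不重叠的 block×block 区块。"""
    h, w, _ = color.shape
    blocks = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            blocks.append(color[i:i + block, j:j + block, :])
    return blocks

color = np.random.rand(168, 168, 3)
blocks = split_blocks(color)   # 168/21=8，共 8×8=64 个区块
```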
步骤105,利用训练好的卷积神经网络模型对所述彩色图像的每个区块进行预测,得到每个区块的中心点的病变概率,其中所述卷积神经网络模型使用标注有病变区域的图像进行训练。
所述卷积神经网络模型可以包括卷积层、最大池化层和输出层。在一具体实施例中,参阅图2所示,所述卷积神经网络模型从前向后依次为:卷积层、卷积层、最大池化层、卷积层、最大池化层、卷积层、最大池化层、全连接层、全连接层、输出层。卷积神经网络模型的输出(即输出层的输出)为输入图像的中心点为病变区域的概率。
在一实施例中,卷积神经网络模型训练时所用的损失函数可以定义为:
L(y′,y)=-[ylog(y′)+(1-y)log(1-y′)]。
其中,y′是卷积神经网络模型对训练样本预测得到的训练样本的中心点的病变概率(即训练样本的中心点属于病变区域的概率),y是标签,数值是0或者1,若训练样本的中心点有病变则为1,没有病变则为0。
可以使用神经网络训练算法,例如反向传播算法对卷积神经网络模型进行训练。在一实施例中,可以使用adadelta算法对卷积神经网络模型进行训练。神经网络训练算法为公知技术,此处不再赘述。
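上文的损失函数L(y′,y)=-[ylog(y′)+(1-y)log(1-y′)]即二元交叉熵，可以用纯Python示意（示例性草图，函数名bce_loss为本文自拟）：

```python
import math

def bce_loss(y_pred, y_true):
    """二元交叉熵: L(y', y) = -[y*log(y') + (1-y)*log(1-y')]。

    y_pred 是模型预测的中心点病变概率 y'，y_true 是标签 y(0或1)。
    """
    return -(y_true * math.log(y_pred) + (1 - y_true) * math.log(1 - y_pred))

# 预测越接近标签，损失越小
loss_good = bce_loss(0.9, 1)   # 中心点有病变(y=1)且预测概率高
loss_bad  = bce_loss(0.1, 1)   # 中心点有病变但预测概率低
```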
卷积神经网络模型使用标注有病变区域的图像进行训练。该标注有病变区域的图像可以是通过步骤101-103得到的彩色图像。例如，在对卷积神经网络模型进行训练前，获取应用不同的磁共振扫描序列对人体的预设部位进行磁共振扫描得到的第一磁共振训练图像、第二磁共振训练图像与第三磁共振训练图像；对所述第一磁共振训练图像、第二磁共振训练图像与第三磁共振训练图像进行预处理；以预处理后的第一磁共振训练图像为第一分量，以预处理后的第二磁共振训练图像为第二分量，以预处理后的第三磁共振训练图像为第三分量，将预处理后的第一磁共振训练图像、第二磁共振训练图像与第三磁共振训练图像融合为彩色训练图像；对所述彩色训练图像标注病变区域，得到所述标注有病变区域的图像。
可以利用多个标注有病变区域的图像对卷积神经网络模型进行训练。对于每个标注有病变区域的图像,从该图像提取预设大小(例如21*21,与步骤104中区块的大小相同)的方块区域,以提取的方块区域作为卷积神经网络模型的训练样本。所述训练样本可以包括正训练样本和负训练样本。
具体地,对于每个标注有病变区域的图像,对该图像中的非病变区域和病变区域选取若干个(例如共5000个)点,以选取的每个点为中心,在该图像上获取每个点对应的方块区域。若选取的点在病变区域内,则对应的方块区域为卷积神经网络模型的正训练样本;若选取的点在非病变区域内,则对应的方块区域为卷积神经网络模型的负训练样本。
在一具体实施例中,对每个标注有病变区域的图像,从该图像的非病变区域和病变区域各选取N(例如2500)个点,共计2N个点。因此,对于每个标注有病变区域的图像,可以得到正训练样本和负训练样本各N个。
可以在标注有病变区域的图像中的非病变区域和病变区域随机选取点。或者,可以按照预定规则在标注有病变区域的图像中的非病变区域和病变区域选取点。
在一实施例中，对于标注有病变区域的图像，可以确定该图像的病变区域的邻近区域，在该邻近区域中选取第一数量（例如0.4N）个点，确定该图像的病变区域的相似区域，在该相似区域选取第二数量（例如0.2N）个点，确定该图像的病变区域的非相关区域，在该非相关区域中选取第三数量（例如0.4N）个点，所述邻近区域、相似区域、非相关区域组成该图像的整个非病变区域。所述邻近区域可以是病变区域外预设范围内（例如1cm内）的区域。所述相似区域可以是像素值为预设值的区域（例如G分量超过2的区域）。当邻近区域是病变区域外预设范围内的区域时，可以对该预设范围进行形态学膨胀，得到所述邻近区域。
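从病变区域和非病变区域各选取N个点、再以每个点为中心截取方块区域作为训练样本的过程可以示意如下。这是示例性草图：sample_centers、patch_bounds等函数名与占位点集均为本文假设：

```python
import random

def sample_centers(lesion_points, normal_points, n=2500):
    """从病变区域与非病变区域的点集中各随机选取n个点，作为正/负训练样本的中心点。"""
    pos = random.sample(lesion_points, n)   # 病变区域内的点 -> 正训练样本中心
    neg = random.sample(normal_points, n)   # 非病变区域内的点 -> 负训练样本中心
    return pos, neg

def patch_bounds(center, size=21):
    """给出以center为中心的 size×size 方块区域的左上角坐标(用于从图像上截取样本)。"""
    r, c = center
    half = size // 2
    return r - half, c - half

lesion_points = [(i, i) for i in range(3000)]       # 病变区域内的点(占位数据)
normal_points = [(i, i + 1) for i in range(3000)]   # 非病变区域内的点(占位数据)
pos, neg = sample_centers(lesion_points, normal_points)
```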
步骤106,根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置。
可以判断所述彩色图像中任意区块的中心点的病变概率是否大于或等于预设阈值（例如0.5），若所述彩色图像中任意区块的中心点的病变概率大于或等于预设阈值，则判断所述预设部位为病变部位。病变概率大于或等于预设阈值的中心点的位置就是预设部位的病变位置。否则，若所述彩色图像中所有区块的中心点的病变概率均小于预设阈值，则判断所述预设部位非病变部位。
或者，可以判断所述彩色图像中区块的中心点的病变概率大于或等于预设阈值（例如0.5）的区块数量是否大于第一预设数量（例如5），若所述彩色图像中区块的中心点的病变概率大于或等于预设阈值（例如0.5）的区块数量大于第一预设数量，则判断所述预设部位为病变部位。病变概率大于或等于预设阈值的中心点的位置就是预设部位的病变位置。否则，若所述彩色图像中区块的中心点的病变概率大于或等于预设阈值（例如0.5）的区块数量小于第一预设数量，则判断所述预设部位非病变部位。
或者，可以判断所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值（例如0.5）的区块数量是否大于第二预设数量（例如3），若所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值（例如0.5）的区块数量大于第二预设数量，则判断所述预设部位为病变部位。病变概率大于或等于预设阈值的中心点的位置就是预设部位的病变位置。否则，若所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值的区块数量小于第二预设数量，则判断所述预设部位非病变部位。
所述第一预设数量、第二预设数量可以相同也可以不同。
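上述几种判定方式中，"概率达到阈值的区块数量超过第一预设数量"这一判定逻辑可以示意如下（示例性草图：judge为本文自拟的函数名，probs为区块中心点坐标到病变概率的映射）：

```python
def judge(probs, threshold=0.5, first_count=5):
    """根据各区块中心点的病变概率判断预设部位是否为病变部位，并返回病变位置。

    probs: {(row, col): 概率} —— 区块中心点坐标到病变概率的映射。
    病变概率达到阈值的中心点数量大于 first_count 时判定为病变部位，
    这些中心点的位置即病变位置。
    """
    hits = [pt for pt, p in probs.items() if p >= threshold]
    return len(hits) > first_count, hits

# 10个区块中心点，其中6个的病变概率达到阈值0.5，大于第一预设数量5
probs = {(i, 0): (0.9 if i < 6 else 0.1) for i in range(10)}
is_lesion, lesion_centers = judge(probs)
```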
实施例一获取应用不同的磁共振扫描序列对人体的预设部位进行磁共振扫描得到的第一磁共振图像、第二磁共振图像与第三磁共振图像;对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理;以预处理后的第一磁共振图像为第一分量,以预处理后的第二磁共振图像为第二分量,以预处理后的第三磁共振图像为第三分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像;将所述彩色图像分割为相同大小的多个区块;利用训练好的卷积神经网络模型对所述彩色图像的每个区块进行预测,得到每个区块的中心点的病变概率,其中所述卷积神经网络模型使用标注有病变区域的图像进行训练;根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置。
实施例一的病变部位识别方法使用不同序列图像(即不同磁共振扫描序列扫描得到的第一磁共振图像、第二磁共振图像与第三磁共振图像)进行病变部位识别,与使用单序列图像(即单一扫描序列扫描得到的磁共振图像)进行病变部位识别相比,本方法提高了病变部位识别的准确率。并且,实施例一的病变部位识别方法的卷积神经网络模型根据融合后的彩色图像的各个区块预测区块中心点的病变概率,与对图像中的单个像素预测病变概率相比,本方法提高了检测效率。因此,本方法实现了快速准确的病变部位识别。
实施例二
图3为本申请实施例二提供的病变部位识别装置的结构图。如图3所示,所述病变部位识别装置10可以包括:获取单元301、预处理单元302、融合单元303、分割单元304、预测单元305、判断单元306。
获取单元301,用于获取应用不同的磁共振扫描序列对人体的预设部位进行磁共振扫描得到的第一磁共振图像、第二磁共振图像与第三磁共振图像。
MRI(Magnetic Resonance Imaging,磁共振成像)图像是常用的医学图像之一,MRI成像是断层成像的一种,它利用磁共振现象从人体中获得电磁信号,并重建出人体信息,从而得到MRI图像。
在一具体实施例中,可以应用所述病变部位识别装置检测大肠癌(可以是直肠癌或结肠癌),定位大肠的癌变部位。在此应用场景中,所述预设部位是大肠。可以理解,在其他的场景中,所述预设部位可以是人体的其他部位或器官,可以应用所述病变部位识别装置对人体的其他部位或器官的病变进行检测。
MRI是一种多参数成像，图像的对比度与组织（即人体组织）所含的氢质子数、组织的T1和T2时间、液体流动速度有关，应用不同的磁共振扫描序列可以得到反映这些因素不同侧重点的图像。应用不同的磁共振扫描序列在同一解剖位置（即同一层面）上得到的不同图像可以提供组织的不同参数信息，可用来进行病变部位识别。在一较佳实施例中，第一磁共振图像可以是T2w（T2 weighted，T2加权）图像，第二磁共振图像可以是第一弥散敏感系数下的DWI（diffusion-weighted imaging，弥散加权成像）图像，第三磁共振图像可以是第二弥散敏感系数下的DWI图像。需要说明的是，第一磁共振图像、第二磁共振图像与第三磁共振图像是对预设部位的同一解剖位置（即同一层面）扫描得到的图像。
T2w成像反映的是组织间T2弛豫(横向弛豫)的差别。组织的T2越长,恢复越慢,信号就越强(图像发白),组织的T2越短,恢复越快,信号就越弱(图像发黑)。根据T2w图像可以判断图像中的不同位置是什么物质。
DWI是建立在MR成像要素之一——流空效应上的一种成像方法,在宏观图像中反映活体组织水分子的微观运动。弥散加权成像观察的是微观的水分子流动扩散现象。弥散敏感系数也叫b值,表示磁共振扫描应用的梯度磁场的时间、幅度、形状。磁共振扫描设备可以在一个层面上同时得到多个不同b值的DWI图像。
在一具体实施例中，第一弥散敏感系数可以为高弥散敏感系数，第二弥散敏感系数可以为低弥散敏感系数。例如，第一弥散敏感系数为1000，第二弥散敏感系数为0，弥散敏感系数的单位为mm²/s。
可以理解,第一磁共振图像、第二磁共振图像与第三磁共振图像可以是应用其他的磁共振扫描序列对预设部位进行磁共振扫描得到的图像。
获取第一磁共振图像、第二磁共振图像与第三磁共振图像的方式可以有多种。例如，病变部位识别装置10可以包括在计算机装置中，计算机装置可以从其他的计算设备（例如预先存储第一磁共振图像、第二磁共振图像与第三磁共振图像的服务器）接收第一磁共振图像、第二磁共振图像与第三磁共振图像。
或者,计算机装置可以控制磁共振设备对人体预设部位进行扫描,得到第一磁共振图像、第二磁共振图像与第三磁共振图像。
或者,计算机装置的存储器中可以预先存储第一磁共振图像、第二磁共振图像与第三磁共振图像,所述计算机装置从所述存储器中读取第一磁共振图像、第二磁共振图像与第三磁共振图像。
预处理单元302,用于对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理。
对第一磁共振图像、第二磁共振图像与第三磁共振图像的预处理可以包括对第一磁共振图像、第二磁共振图像与第三磁共振图像进行标准化,以及对第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准。
在一实施例中，可以基于第一磁共振图像、第二磁共振图像与第三磁共振图像的像素值的均值和标准差对第一磁共振图像、第二磁共振图像与第三磁共振图像进行标准化。具体地，对于第一磁共振图像、第二磁共振图像或第三磁共振图像，计算该图像的像素值的均值u和标准差e，对该图像的每个像素值进行如下转换：x′=(x-u)/e，其中x是原来的像素值，x′为标准化后的像素值。
可以理解,可以采用其他的标准化方法对第一磁共振图像、第二磁共振图像与第三磁共振图像进行标准化。图像标准化方法为公知技术,此处不再赘述。
应用不同的磁共振扫描序列对预设部位进行磁共振扫描是有时间间隔的，病人体位可能会发生位移。因此，需要对第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准，把三个图像的内容对应起来，也就是使第一磁共振图像、第二磁共振图像与第三磁共振图像的各个部分相对应。
在一实施例中,对于第一磁共振图像、第二磁共振图像与第三磁共振图像中的任意两个图像(例如第一磁共振图像与第二磁共振图像),可以计算两个图像的互信息,使两个图像的互信息最大,从而实现两个图像的图像配准。
图像A与图像B的互信息可以表示为：
MI(A,B)=∑_a∑_b p(a,b)·log[p(a,b)/(p(a)·p(b))]
p(a)=#a/#A
p(b)=#b/#B
p(a,b)=#(a,b)/#A
其中，a、b分别表示图像A、图像B中像素值（通常为灰度值）的范围，#a表示图像A中像素值属于范围a内的像素的个数，#b表示图像B中像素值属于范围b内的像素的个数，#(a,b)表示图像A与图像B中对应位置的像素值分别属于范围a与范围b的像素对的个数，#A、#B分别表示图像A、图像B的像素数，p(a)表示图像A中像素值属于范围a内的像素出现的概率，p(b)表示图像B中像素值属于范围b内的像素出现的概率。
可以采用其他的图像配准方法对第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准。例如,可以在第一磁共振图像、第二磁共振图像与第三磁共振图像上各选取一个参照点,依据该参照点将第一磁共振图像、第二磁共振图像与第三磁共振图像配准,具体的,可以包括:
在第一磁共振图像上选取第一参照点,在第二磁共振图像上选取第二参照点,在第三磁共振图像上选取第三参照点,所述第一参照点、第二参照点与所述第三参照点是所述预设部位的相同位置上的点;
计算第一磁共振图像中各个像素点与所述第一参照点的相对坐标,计算第二磁共振图像中各个像素点与所述第二参照点的相对坐标,计算第三磁共振图像中各个像素点与所述第三参照点的相对坐标;
根据第一磁共振图像中各个像素点与所述第一参照点的相对坐标,计算第一磁共振图像的中心点,根据第二磁共振图像中各个像素点与所述第二参照点的相对坐标,计算第二磁共振图像的中心点,以及根据第三磁共振图像中各个像素点与所述第三参照点的相对坐标,计算第三磁共振图像的中心点;
将第一磁共振图像的中心点、第二磁共振图像的中心点和第三磁共振图像的中心点对齐。
在对第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准时,可以选择第一磁共振图像、第二磁共振图像与第三磁共振图像中的一个图像作为基准,使未选为基准的图像向选为基准的图像对准。
在一具体实施例中,第一磁共振图像是T2w图像,第二磁共振图像与第三磁共振图像是不同弥散敏感系数下的DWI图像,可以选择T2w图像作为基准,使两个DWI图像向T2w图像对准。不同弥散敏感系数下的DWI图像可以同时扫描得到,因此,在将两个不同弥散敏感系数下的DWI图像向T2w图像对准时,只需将一个DWI图像向T2w图像对准,另一个DWI图像进行同样的对准即可。
在使未选为基准的图像向选为基准的图像对准的过程中,可以对未选为基准的图像逐渐进行形变,使未选为基准的图像逐渐对准选为基准的图像。对未选为基准的图像进行形变可以包括将未选为基准的图像放大或缩小、将未选为基准的图像按照预设方向拉伸、将未选为基准的图像旋转预设角度。
在对第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准的过程中，可以减小第一磁共振图像、第二磁共振图像与第三磁共振图像的分辨率，使用多个较低分辨率的图像来进行图像配准，以增加配准的鲁棒性。例如，可以将第一磁共振图像、第二磁共振图像与第三磁共振图像的分辨率分别都减小一倍、二倍和四倍，将分辨率分别都减小一倍、二倍和四倍后的第一磁共振图像、第二磁共振图像与第三磁共振图像分别进行图像配准。也就是说，将分辨率减小一倍后的第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准，将分辨率减小二倍后的第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准，将分辨率减小四倍后的第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准。根据三次低分辨率图像的配准结果得到最终的配准结果（例如取平均值）。
融合单元303,用于以预处理后的第一磁共振图像为第一分量,以预处理后的第二磁共振图像为第二分量,以预处理后的第三磁共振图像为第三分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像。
在一实施例中,将预处理后的第一磁共振图像作为R分量(即红色分量),将预处理后的第二磁共振图像作为G分量(即绿色分量),将预处理后的第三磁共振图像作为B分量(即蓝色分量),将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为RGB彩色图像。
在另一实施例中,将预处理后的第一磁共振图像作为Y分量(即亮度),将预处理后的第二磁共振图像作为U分量(即第一色度),将预处理后的第三磁共振图像作为V分量(即第二色度),将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为YUV彩色图像。
分割单元304,用于将所述彩色图像分割为预设大小的多个区块。
可以按照预设方向对彩色图像进行分割。例如,按照从上到下,从左到右的顺序对彩色图像进行分割。
分割得到的每个区块为预设大小（即预测单元305的卷积神经网络模型接收的图像的大小），例如21*21。
在一实施例中,分割得到的各个区块互不重叠。例如,彩色图像大小为168*168,将该彩色图像分割为64个互不重叠的区块,每个区块为21*21。
预测单元305,用于利用训练好的卷积神经网络模型对所述彩色图像的每个区块进行预测,得到每个区块的中心点的病变概率,其中所述卷积神经网络模型使用标注有病变区域的图像进行训练。
所述卷积神经网络模型可以包括卷积层、最大池化层和输出层。在一具体实施例中,参阅图2所示,所述卷积神经网络模型从前向后依次为:卷积层、卷积层、最大池化层、卷积层、最大池化层、卷积层、最大池化层、全连接层、全连接层、输出层。卷积神经网络模型的输出(即输出层的输出)为输入图像的中心点为病变区域的概率。
在一实施例中,卷积神经网络模型训练时所用的损失函数可以定义为:
L(y′,y)=-[ylog(y′)+(1-y)log(1-y′)]。
其中,y′是卷积神经网络模型对训练样本预测得到的训练样本的中心点的病变概率(即训练样本的中心点属于病变区域的概率),y是标签,数值是0或者1,若训练样本的中心点有病变则为1,没有病变则为0。
可以使用神经网络训练算法,例如反向传播算法对卷积神经网络模型进行训练。在一实施例中,可以使用adadelta算法对卷积神经网络模型进行训练。神经网络训练算法为公知技术,此处不再赘述。
卷积神经网络模型使用标注有病变区域的图像进行训练。该标注有病变区域的图像可以是通过上述单元301-303得到的彩色图像。例如,在对卷积神经网络模型进行训练前,获取单元301获取应用不同的磁共振扫描序列对人体的预设部位进行磁共振扫描得到的第一磁共振训练图像、第二磁共振训练图像与第三磁共振训练训练图像;预处理单元302对所述第一磁共振训练图像、第二磁共振训练图像与第三磁共振训练图像进行预处理;融合单元303以预处理后的第一磁共振训练图像为第一分量,以预处理后的第二磁共振训练图像为第二分量,以预处理后的第三磁共振训练图像为第三分量,将预处理后的第一磁共振训练图像、第二磁共振训练图像与第三磁共振训练图像融合为彩色训练图像。对所述彩色训练图像标注病变区域,得到所述标注有病变区域的图像。
可以利用多个标注有病变区域的图像对卷积神经网络模型进行训练。对于每个标注有病变区域的图像,从该图像提取预设大小(例如21*21,与分割单元304分割得到的区块大小相同)的方块区域,以提取的方块区域作为卷积神经网络模型的训练样本。所述训练样本可以包括正训练样本和负训练样本。
具体地,对于每个标注有病变区域的图像,从该图像中的非病变区域和病变区域选取若干个(例如共5000个)点,以选取的每个点为中心,在该图像上获取每个点对应的方块区域。若选取的点在病变区域内,则对应的方块区域为卷积神经网络模型的正训练样本;若选取的点在非病变区域内,则对应的方块区域为卷积神经网络模型的负训练样本。
在一具体实施例中,对每个标注有病变区域的图像,从该图像的非病变区域和病变区域各选取N(例如2500)个点,共计2N个点。因此,对于每个标注有病变区域的图像,可以得到正训练样本和负训练样本各N个。
可以在标注有病变区域的图像中的非病变区域和病变区域随机选取点。或者,可以按照预定规则在标注有病变区域的图像中的非病变区域和病变区域选取点。
在一实施例中，对于标注有病变区域的图像，可以确定该图像的病变区域的邻近区域，在该邻近区域中选取第一数量（例如0.4N）个点，确定该图像的病变区域的相似区域，在该相似区域选取第二数量（例如0.2N）个点，确定该图像的病变区域的非相关区域，在该非相关区域中选取第三数量（例如0.4N）个点，所述邻近区域、相似区域、非相关区域组成该图像的整个非病变区域。所述邻近区域可以是病变区域外预设范围内（例如1cm内）的区域。所述相似区域可以是像素值为预设值的区域（例如G分量超过2的区域）。当邻近区域是病变区域外预设范围内的区域时，可以对该预设范围进行形态学膨胀，得到所述邻近区域。
判断单元306,用于根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置。
可以判断所述彩色图像中任意区块的中心点的病变概率是否大于或等于预设阈值（例如0.5），若所述彩色图像中任意区块的中心点的病变概率大于或等于预设阈值，则判断所述预设部位为病变部位。病变概率大于或等于预设阈值的中心点的位置就是预设部位的病变位置。否则，若所述彩色图像中所有区块的中心点的病变概率均小于预设阈值，则判断所述预设部位非病变部位。
或者，可以判断所述彩色图像中区块的中心点的病变概率大于或等于预设阈值（例如0.5）的区块数量是否大于第一预设数量（例如5），若所述彩色图像中区块的中心点的病变概率大于或等于预设阈值（例如0.5）的区块数量大于第一预设数量，则判断所述预设部位为病变部位。病变概率大于或等于预设阈值的中心点的位置就是预设部位的病变位置。否则，若所述彩色图像中区块的中心点的病变概率大于或等于预设阈值（例如0.5）的区块数量小于第一预设数量，则判断所述预设部位非病变部位。
或者，可以判断所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值（例如0.5）的区块数量是否大于第二预设数量（例如3），若所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值（例如0.5）的区块数量大于第二预设数量，则判断所述预设部位为病变部位。病变概率大于或等于预设阈值的中心点的位置就是预设部位的病变位置。否则，若所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值的区块数量小于第二预设数量，则判断所述预设部位非病变部位。
所述第一预设数量、第二预设数量可以相同也可以不同。
实施例二获取应用不同的磁共振扫描序列对人体的预设部位进行磁共振扫描得到的第一磁共振图像、第二磁共振图像与第三磁共振图像；对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理；以预处理后的第一磁共振图像为第一分量，以预处理后的第二磁共振图像为第二分量，以预处理后的第三磁共振图像为第三分量，将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像；将所述彩色图像分割为相同大小的多个区块；利用训练好的卷积神经网络模型对所述彩色图像的每个区块进行预测，得到每个区块的中心点的病变概率，其中所述卷积神经网络模型使用标注有病变区域的图像进行训练；根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置。
实施例二的病变部位识别装置使用不同序列图像(即不同磁共振扫描序列扫描得到的第一磁共振图像、第二磁共振图像与第三磁共振图像)进行病变部位识别,与使用单序列图像(即单一扫描序列扫描得到的磁共振图像)进行病变部位识别的病变部位识别装置相比,本装置提高了病变部位识别的准确率。并且,实施例二的病变部位识别装置的卷积神经网络模型根据融合后的彩色图像的各个区块预测区块中心点的病变概率,与对图像中的单个像素预测病变概率相比,本装置提高了检测效率。因此,本装置实现了快速准确的病变部位识别。
实施例三
图4为本申请实施例三提供的计算机装置的示意图。所述计算机装置1包括存储器20、处理器30以及存储在所述存储器20中并可在所述处理器30上运行的计算机可读指令40,例如病变部位识别程序。所述处理器30执行所述计算机可读指令40时实现上述病变部位识别方法实施例中的步骤,例如图1所示的步骤101-106。或者,所述处理器30执行所述计算机可读指令40时实现上述装置实施例中各模块/单元的功能,例如图3中的单元301-306。
示例性的,所述计算机可读指令40可以被分割成一个或多个模块/单元,所述一个或者多个模块/单元被存储在所述存储器20中,并由所述处理器30执行,以完成本申请。所述一个或多个模块/单元可以是能够完成特定功能的一系列计算机可读指令段,该指令段用于描述所述计算机可读指令40在所述计算机装置1中的执行过程。例如,所述计算机可读指令40可以被分割成图3中的获取单元301、预处理单元302、融合单元303、分割单元304、预测单元305、判断单元306,各单元具体功能参见实施例二。
所述计算机装置1可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。本领域技术人员可以理解,所述示意图4仅仅是计算机装置1的示例,并不构成对计算机装置1的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述计算机装置1还可以包括输入输出设备、网络接入设备、总线等。
所称处理器30可以是中央处理单元(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者该处理器30也可以是任何常规的处理器等,所述处理器30是所述计算机装置1的控制中心,利用各种接口和线路连接整个计算机装置1的各个部分。
所述存储器20可用于存储所述计算机可读指令40和/或模块/单元，所述处理器30通过运行或执行存储在所述存储器20内的计算机可读指令和/或模块/单元，以及调用存储在存储器20内的数据，实现所述计算机装置1的各种功能。所述存储器20可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序（比如声音播放功能、图像播放功能等）等；存储数据区可存储根据计算机装置1的使用所创建的数据（比如音频数据、电话本等）等。此外，存储器20可以包括高速随机存取存储器，还可以包括非易失性存储器，例如硬盘、内存、插接式硬盘，智能存储卡（Smart Media Card，SMC），安全数字（Secure Digital，SD）卡，闪存卡（Flash Card）、至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。
所述计算机装置1集成的模块/单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实现上述实施例方法中的全部或部分流程,也可以通过计算机可读指令来指令相关的硬件来完成,所述的计算机可读指令可存储于一非易失性可读存储介质中,该计算机可读指令在被处理器执行时,可实现上述各个方法实施例的步骤。其中,所述计算机可读指令包括计算机可读指令代码,所述计算机可读指令代码可以为源代码形式、对象代码形式、可执行文件或某些中间形式等。所述非易失性可读介质可以包括:能够携带所述计算机可读指令代码的任何实体或装置、记录介质、U盘、移动硬盘、磁碟、光盘、计算机存储器、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、电载波信号、电信信号以及软件分发介质等。需要说明的是,所述计算机可读介质包含的内容可以根据司法管辖区内立法和专利实践的要求进行适当的增减,例如在某些司法管辖区,根据立法和专利实践,非易失性可读介质不包括电载波信号和电信信号。
在本申请所提供的几个实施例中,应该理解到,所揭露的计算机装置和方法,可以通过其它的方式实现。例如,以上所描述的计算机装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
此外,显然“包括”一词不排除其他单元或步骤,单数不排除复数。计算机装置权利要求中陈述的多个单元或计算机装置也可以由同一个单元或计算机装置通过软件或者硬件来实现。第一,第二等词语用来表示名称,而并不表示任何特定的顺序。
最后应说明的是,以上实施例仅用以说明本申请的技术方案而非限制,尽管参照较佳实施例对本申请进行了详细说明,本领域的普通技术人员应当理解,可以对本申请的技术方案进行修改或等同替换,而不脱离本申请技术方案的精神和范围。

Claims (20)

  1. 一种病变部位识别方法,其特征在于,所述方法包括:
    获取应用不同的磁共振扫描序列对人体的预设部位进行磁共振扫描得到的第一磁共振图像、第二磁共振图像与第三磁共振图像;
    对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理;
    以预处理后的第一磁共振图像为第一分量,以预处理后的第二磁共振图像为第二分量,以预处理后的第三磁共振图像为第三分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像;
    将所述彩色图像分割为预设大小的多个区块;
    利用训练好的卷积神经网络模型对所述彩色图像的每个区块进行预测,得到每个区块的中心点的病变概率,其中所述卷积神经网络模型使用标注有病变区域的图像进行训练;
    根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置。
  2. 如权利要求1所述的方法,其特征在于,所述对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理包括对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准,具体包括:
    对于第一磁共振图像、第二磁共振图像与第三磁共振图像中的任意两个图像A与图像B,计算图像A与图像B的互信息,使图像A与图像B的互信息最大,图像A与图像B的互信息为:
    MI(A,B)=∑_a∑_b p(a,b)·log[p(a,b)/(p(a)·p(b))]，其中p(a,b)=#(a,b)/#A，#(a,b)表示图像A与图像B中对应位置的像素值分别属于范围a与范围b的像素对的个数；
    其中,a、b分别表示图像A、图像B中像素值的范围,#a表示图像A中像素值属于范围a内的像素的个数,#b表示图像B中像素值属于范围b内的像素的个数,#A、#B分别表示图像A、图像B的像素数,p(a)表示图像A中像素值属于范围a内的像素出现的概率,p(b)表示图像B中像素值属于范围b内的像素出现的概率。
  3. 如权利要求1所述的方法,其特征在于,所述对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理包括对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准,具体包括:
    在所述第一磁共振图像上选取第一参照点,在所述第二磁共振图像上选取第二参照点,在所述第三磁共振图像上选取第三参照点,所述第一参照点、 第二参照点与第三参照点是所述预设部位的相同位置上的点;
    计算所述第一磁共振图像中各个像素点与所述第一参照点的相对坐标,计算所述第二磁共振图像中各个像素点与所述第二参照点的相对坐标,计算所述第三磁共振图像中各个像素点与所述第三参照点的相对坐标;
    根据所述第一磁共振图像中各个像素点与所述第一参照点的相对坐标,计算所述第一磁共振图像的中心点,根据所述第二磁共振图像中各个像素点与所述第二参照点的相对坐标,计算所述第二磁共振图像的中心点,以及根据所述第三磁共振图像中各个像素点与所述第三参照点的相对坐标,计算所述第三磁共振图像的中心点;
    将所述第一磁共振图像的中心点、所述第二磁共振图像的中心点和所述第三磁共振图像的中心点对齐。
  4. 如权利要求1-3中任一项所述的方法，其特征在于，所述对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理包括对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行标准化，具体包括：
    对于所述第一磁共振图像、第二磁共振图像与第三磁共振图像中的每一图像,计算该图像的像素值的均值u和标准差e,对该图像的每个像素值进行转换:x′=(x-u)/e,其中,x是原来的像素值,x′为标准化后的像素值。
  5. 如权利要求1-3中任一项所述的方法,其特征在于,所述以预处理后的第一磁共振图像为第一分量,以预处理后的第二磁共振图像为第二分量,以预处理后的第三磁共振图像为第三分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像包括:
    将预处理后的第一磁共振图像作为R分量,将预处理后的第二磁共振图像作为G分量,将预处理后的第三磁共振图像作为B分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为RGB彩色图像;或者
    将预处理后的第一磁共振图像作为Y分量,将预处理后的第二磁共振图像作为U分量,将预处理后的第三磁共振图像作为V分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为YUV彩色图像。
  6. 如权利要求1-3中任一项所述的方法,其特征在于,所述卷积神经网络模型的训练样本通过如下方式获取:
    对于所述标注有病变区域的图像,对该图像中的非病变区域和病变区域选取若干个点,以选取的每个点为中心,在该图像上获取每个点对应的方块区域;
    若选取的点在所述病变区域内,则对应的方块区域为所述卷积神经网络模型的正训练样本;
    若选取的点在所述非病变区域内,则对应的方块区域为所述卷积神经网络模型的负训练样本。
  7. 如权利要求1-3中任一项所述的方法,其特征在于,所述根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置包括:
    判断所述彩色图像中任意区块的中心点的病变概率是否大于或等于预设阈值,若所述彩色图像中任意区块的中心点的病变概率大于或等于预设阈值,则判断所述预设部位为病变部位,病变概率大于或等于预设阈值的中心点的位置为所述预设部位的病变位置;或者
    判断所述彩色图像中区块的中心点的病变概率大于或等于预设阈值的区块数量是否大于第一预设数量，若所述彩色图像中区块的中心点的病变概率大于或等于预设阈值的区块数量大于第一预设数量，则判断所述预设部位为病变部位，病变概率大于或等于预设阈值的中心点的位置为所述预设部位的病变位置；或者
    判断所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值的区块数量是否大于第二预设数量，若所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值的区块数量大于第二预设数量，则判断所述预设部位为病变部位，病变概率大于或等于预设阈值的中心点的位置为所述预设部位的病变位置。
  8. 一种病变部位识别装置,其特征在于,所述装置包括:
    获取单元,用于获取应用不同的磁共振扫描序列对人体的预设部位进行磁共振扫描得到的第一磁共振图像、第二磁共振图像与第三磁共振图像;
    预处理单元,用于对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理;
    融合单元,用于以预处理后的第一磁共振图像为第一分量,以预处理后的第二磁共振图像为第二分量,以预处理后的第三磁共振图像为第三分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像;
    分割单元,用于将所述彩色图像分割为预设大小的多个区块;
    预测单元,用于利用训练好的卷积神经网络模型对所述彩色图像的每个区块进行预测,得到每个区块的中心点的病变概率,其中所述卷积神经网络模型使用标注有病变区域的图像进行训练;
    判断单元,用于根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置。
  9. 一种计算机装置,其特征在于,所述计算机装置包括存储器和处理器,所述存储器存储有至少一条计算机可读指令,所述处理器执行所述至少一条计算机可读指令以实现以下步骤:
    获取应用不同的磁共振扫描序列对人体的预设部位进行磁共振扫描得到的第一磁共振图像、第二磁共振图像与第三磁共振图像;
    对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理;
    以预处理后的第一磁共振图像为第一分量,以预处理后的第二磁共振图像为第二分量,以预处理后的第三磁共振图像为第三分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像;
    将所述彩色图像分割为预设大小的多个区块;
    利用训练好的卷积神经网络模型对所述彩色图像的每个区块进行预测, 得到每个区块的中心点的病变概率,其中所述卷积神经网络模型使用标注有病变区域的图像进行训练;
    根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置。
  10. 如权利要求9所述的计算机装置,其特征在于,所述对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理包括对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准,具体包括:
    在所述第一磁共振图像上选取第一参照点,在所述第二磁共振图像上选取第二参照点,在所述第三磁共振图像上选取第三参照点,所述第一参照点、第二参照点与第三参照点是所述预设部位的相同位置上的点;
    计算所述第一磁共振图像中各个像素点与所述第一参照点的相对坐标,计算所述第二磁共振图像中各个像素点与所述第二参照点的相对坐标,计算所述第三磁共振图像中各个像素点与所述第三参照点的相对坐标;
    根据所述第一磁共振图像中各个像素点与所述第一参照点的相对坐标,计算所述第一磁共振图像的中心点,根据所述第二磁共振图像中各个像素点与所述第二参照点的相对坐标,计算所述第二磁共振图像的中心点,以及根据所述第三磁共振图像中各个像素点与所述第三参照点的相对坐标,计算所述第三磁共振图像的中心点;
    将所述第一磁共振图像的中心点、所述第二磁共振图像的中心点和所述第三磁共振图像的中心点对齐。
  11. 如权利要求9-10中任一项所述的计算机装置，其特征在于，所述对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理包括对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行标准化，具体包括：
    对于所述第一磁共振图像、第二磁共振图像与第三磁共振图像中的每一图像,计算该图像的像素值的均值u和标准差e,对该图像的每个像素值进行转换:x′=(x-u)/e,其中,x是原来的像素值,x′为标准化后的像素值。
  12. 如权利要求9-10中任一项所述的计算机装置,其特征在于,所述以预处理后的第一磁共振图像为第一分量,以预处理后的第二磁共振图像为第二分量,以预处理后的第三磁共振图像为第三分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像包括:
    将预处理后的第一磁共振图像作为R分量,将预处理后的第二磁共振图像作为G分量,将预处理后的第三磁共振图像作为B分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为RGB彩色图像;或者
    将预处理后的第一磁共振图像作为Y分量,将预处理后的第二磁共振图像作为U分量,将预处理后的第三磁共振图像作为V分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为YUV彩色图像。
  13. 如权利要求9-10中任一项所述的计算机装置,其特征在于,所述卷积神经网络模型的训练样本通过如下方式获取:
    对于所述标注有病变区域的图像,对该图像中的非病变区域和病变区域选取若干个点,以选取的每个点为中心,在该图像上获取每个点对应的方块区域;
    若选取的点在所述病变区域内,则对应的方块区域为所述卷积神经网络模型的正训练样本;
    若选取的点在所述非病变区域内,则对应的方块区域为所述卷积神经网络模型的负训练样本。
  14. 如权利要求9-10中任一项所述的计算机装置,其特征在于,所述根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置包括:
    判断所述彩色图像中任意区块的中心点的病变概率是否大于或等于预设阈值,若所述彩色图像中任意区块的中心点的病变概率大于或等于预设阈值,则判断所述预设部位为病变部位,病变概率大于或等于预设阈值的中心点的位置为所述预设部位的病变位置;或者
    判断所述彩色图像中区块的中心点的病变概率大于或等于预设阈值的区块数量是否大于第一预设数量，若所述彩色图像中区块的中心点的病变概率大于或等于预设阈值的区块数量大于第一预设数量，则判断所述预设部位为病变部位，病变概率大于或等于预设阈值的中心点的位置为所述预设部位的病变位置；或者
    判断所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值的区块数量是否大于第二预设数量，若所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值的区块数量大于第二预设数量，则判断所述预设部位为病变部位，病变概率大于或等于预设阈值的中心点的位置为所述预设部位的病变位置。
  15. 一种非易失性可读存储介质,所述非易失性可读存储介质上存储有至少一条计算机可读指令,其特征在于,所述至少一条计算机可读指令被处理器执行时实现以下步骤:
    获取应用不同的磁共振扫描序列对人体的预设部位进行磁共振扫描得到的第一磁共振图像、第二磁共振图像与第三磁共振图像;
    对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理;
    以预处理后的第一磁共振图像为第一分量,以预处理后的第二磁共振图像为第二分量,以预处理后的第三磁共振图像为第三分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像;
    将所述彩色图像分割为预设大小的多个区块;
    利用训练好的卷积神经网络模型对所述彩色图像的每个区块进行预测,得到每个区块的中心点的病变概率,其中所述卷积神经网络模型使用标注有病变区域的图像进行训练;
    根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置。
  16. 如权利要求15所述的存储介质，其特征在于，所述对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理包括对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行图像配准，具体包括：
    在所述第一磁共振图像上选取第一参照点,在所述第二磁共振图像上选取第二参照点,在所述第三磁共振图像上选取第三参照点,所述第一参照点、第二参照点与第三参照点是所述预设部位的相同位置上的点;
    计算所述第一磁共振图像中各个像素点与所述第一参照点的相对坐标,计算所述第二磁共振图像中各个像素点与所述第二参照点的相对坐标,计算所述第三磁共振图像中各个像素点与所述第三参照点的相对坐标;
    根据所述第一磁共振图像中各个像素点与所述第一参照点的相对坐标,计算所述第一磁共振图像的中心点,根据所述第二磁共振图像中各个像素点与所述第二参照点的相对坐标,计算所述第二磁共振图像的中心点,以及根据所述第三磁共振图像中各个像素点与所述第三参照点的相对坐标,计算所述第三磁共振图像的中心点;
    将所述第一磁共振图像的中心点、所述第二磁共振图像的中心点和所述第三磁共振图像的中心点对齐。
  17. 如权利要求15-16中任一项所述的存储介质，其特征在于，所述对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行预处理包括对所述第一磁共振图像、第二磁共振图像与第三磁共振图像进行标准化，具体包括：
    对于所述第一磁共振图像、第二磁共振图像与第三磁共振图像中的每一图像,计算该图像的像素值的均值u和标准差e,对该图像的每个像素值进行转换:x′=(x-u)/e,其中,x是原来的像素值,x′为标准化后的像素值。
  18. 如权利要求15-16中任一项所述的存储介质,其特征在于,所述以预处理后的第一磁共振图像为第一分量,以预处理后的第二磁共振图像为第二分量,以预处理后的第三磁共振图像为第三分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为彩色图像包括:
    将预处理后的第一磁共振图像作为R分量,将预处理后的第二磁共振图像作为G分量,将预处理后的第三磁共振图像作为B分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为RGB彩色图像;或者
    将预处理后的第一磁共振图像作为Y分量,将预处理后的第二磁共振图像作为U分量,将预处理后的第三磁共振图像作为V分量,将预处理后的第一磁共振图像、第二磁共振图像与第三磁共振图像融合为YUV彩色图像。
  19. 如权利要求15-16中任一项所述的存储介质,其特征在于,所述卷积神经网络模型的训练样本通过如下方式获取:
    对于所述标注有病变区域的图像,对该图像中的非病变区域和病变区域选取若干个点,以选取的每个点为中心,在该图像上获取每个点对应的方块区域;
    若选取的点在所述病变区域内,则对应的方块区域为所述卷积神经网络模型的正训练样本;
    若选取的点在所述非病变区域内,则对应的方块区域为所述卷积神经网络模型的负训练样本。
  20. 如权利要求15-16中任一项所述的存储介质,其特征在于,所述根据所述彩色图像中每个区块的中心点的病变概率判断所述预设部位是否为病变部位并确定病变位置包括:
    判断所述彩色图像中任意区块的中心点的病变概率是否大于或等于预设阈值,若所述彩色图像中任意区块的中心点的病变概率大于或等于预设阈值,则判断所述预设部位为病变部位,病变概率大于或等于预设阈值的中心点的位置为所述预设部位的病变位置;或者
    判断所述彩色图像中区块的中心点的病变概率大于或等于预设阈值的区块数量是否大于第一预设数量，若所述彩色图像中区块的中心点的病变概率大于或等于预设阈值的区块数量大于第一预设数量，则判断所述预设部位为病变部位，病变概率大于或等于预设阈值的中心点的位置为所述预设部位的病变位置；或者
    判断所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值的区块数量是否大于第二预设数量，若所述彩色图像中邻近区块的中心点的病变概率大于或等于预设阈值的区块数量大于第二预设数量，则判断所述预设部位为病变部位，病变概率大于或等于预设阈值的中心点的位置为所述预设部位的病变位置。
PCT/CN2018/099614 2018-05-23 2018-08-09 病变部位识别方法及装置、计算机装置及可读存储介质 WO2019223121A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810503241.6A CN108765399B (zh) 2018-05-23 2018-05-23 病变部位识别装置、计算机装置及可读存储介质
CN201810503241.6 2018-05-23

Publications (1)

Publication Number Publication Date
WO2019223121A1 true WO2019223121A1 (zh) 2019-11-28

Family

ID=64005216

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/099614 WO2019223121A1 (zh) 2018-05-23 2018-08-09 病变部位识别方法及装置、计算机装置及可读存储介质

Country Status (2)

Country Link
CN (1) CN108765399B (zh)
WO (1) WO2019223121A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109559303B (zh) * 2018-11-22 2020-12-01 广州达美智能科技有限公司 钙化点的识别方法、装置和计算机可读存储介质
CN109754387B (zh) * 2018-11-23 2021-11-23 北京永新医疗设备有限公司 一种全身骨显像放射性浓聚灶的智能检测定位方法

Citations (4)

Publication number Priority date Publication date Assignee Title
CN107230206A (zh) * 2017-06-02 2017-10-03 太原理工大学 一种基于多模态数据的超体素序列肺部图像的3d肺结节分割方法
CN107492086A (zh) * 2017-09-20 2017-12-19 华中科技大学 一种图像的融合方法和融合系统
CN107492097A (zh) * 2017-08-07 2017-12-19 北京深睿博联科技有限责任公司 一种识别mri图像感兴趣区域的方法及装置
CN107767378A (zh) * 2017-11-13 2018-03-06 浙江中医药大学 基于深度神经网络的gbm多模态磁共振图像分割方法

Family Cites Families (17)

Publication number Priority date Publication date Assignee Title
US20020136440A1 (en) * 2000-08-30 2002-09-26 Yim Peter J. Vessel surface reconstruction with a tubular deformable model
US20030011624A1 (en) * 2001-07-13 2003-01-16 Randy Ellis Deformable transformations for interventional guidance
CN1299642C (zh) * 2003-12-23 2007-02-14 中国科学院自动化研究所 一种基于互信息敏感区域的多模态医学图像配准方法
WO2008000278A1 (en) * 2006-06-30 2008-01-03 Pnn Medical A/S Method of identification of an element in two or more images
CN100470587C (zh) * 2007-01-26 2009-03-18 清华大学 一种医学图像中腹部器官分割方法
US8422756B2 (en) * 2010-04-27 2013-04-16 Magnetic Resonance Innovations, Inc. Method of generating nuclear magnetic resonance images using susceptibility weighted imaging and susceptibility mapping (SWIM)
CN102622749B (zh) * 2012-02-22 2014-07-30 中国科学院自动化研究所 三维磁共振图像脑子结构自动分割的方法
CN103310458B (zh) * 2013-06-19 2016-05-11 北京理工大学 结合凸包匹配和多尺度分级策略的医学图像弹性配准方法
CN104240226B (zh) * 2013-06-20 2017-12-22 上海联影医疗科技有限公司 一种心脏图像的配准方法
US9883817B2 (en) * 2013-11-20 2018-02-06 Children's National Medical Center Management, assessment and treatment planning for inflammatory bowel disease
CN104161516B (zh) * 2014-01-09 2015-09-02 上海联影医疗科技有限公司 磁共振成像方位判断方法及其装置
CN105809175B (zh) * 2014-12-30 2020-08-21 深圳先进技术研究院 一种基于支持向量机算法的脑水肿分割方法及系统
CN105825509A (zh) * 2016-03-17 2016-08-03 电子科技大学 基于3d卷积神经网络的脑血管分割方法
CN106340021B (zh) * 2016-08-18 2020-11-27 上海联影医疗科技股份有限公司 血管提取方法
CN106295709A (zh) * 2016-08-18 2017-01-04 太原理工大学 基于多尺度脑网络特征的功能磁共振影像数据分类方法
CN107464250B (zh) * 2017-07-03 2020-12-04 深圳市第二人民医院 基于三维mri图像的乳腺肿瘤自动分割方法
WO2019041262A1 (en) * 2017-08-31 2019-03-07 Shenzhen United Imaging Healthcare Co., Ltd. SYSTEM AND METHOD FOR IMAGE SEGMENTATION

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN107230206A (zh) * 2017-06-02 2017-10-03 太原理工大学 一种基于多模态数据的超体素序列肺部图像的3d肺结节分割方法
CN107492097A (zh) * 2017-08-07 2017-12-19 北京深睿博联科技有限责任公司 一种识别mri图像感兴趣区域的方法及装置
CN107492086A (zh) * 2017-09-20 2017-12-19 华中科技大学 一种图像的融合方法和融合系统
CN107767378A (zh) * 2017-11-13 2018-03-06 浙江中医药大学 基于深度神经网络的gbm多模态磁共振图像分割方法

Also Published As

Publication number Publication date
CN108765399A (zh) 2018-11-06
CN108765399B (zh) 2022-01-28


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18919485

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18919485

Country of ref document: EP

Kind code of ref document: A1