WO2020087838A1 - 血管壁斑块识别设备、系统、方法及存储介质 - Google Patents

血管壁斑块识别设备、系统、方法及存储介质 (Blood vessel wall plaque identification device, system, method and storage medium)

Info

Publication number
WO2020087838A1
WO2020087838A1 (PCT/CN2019/078488)
Authority
WO
WIPO (PCT)
Prior art keywords
image
blood vessel
deep learning
plaque
neural network
Prior art date
Application number
PCT/CN2019/078488
Other languages
English (en)
French (fr)
Inventor
郑海荣
刘新
胡战利
张娜
李思玥
梁栋
杨永峰
Original Assignee
深圳先进技术研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳先进技术研究院
Publication of WO2020087838A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular

Definitions

  • the present invention belongs to the field of medical technology, and particularly relates to a device, system, method and storage medium for identifying blood vessel wall plaque.
  • Magnetic resonance imaging is currently the only non-invasive imaging method that can clearly display atherosclerotic plaques throughout the body.
  • MRI of the blood vessel wall can not only quantitatively analyze systemic plaques in vessels such as the intracranial arteries, carotid arteries and aorta, but can also accurately identify unstable features such as fibrous caps, hemorrhage, calcification, lipid cores and inflammation, and is currently recognized as the best plaque imaging method.
  • the purpose of the present invention is to provide a blood vessel wall plaque identification device, system, method and storage medium, aiming to solve the problems in the prior art that manual identification of blood vessel wall plaque is inefficient and that its recognition accuracy cannot be effectively guaranteed.
  • the present invention provides a blood vessel wall plaque identification device, including a memory and a processor, where the processor implements the following steps when executing the computer program stored in the memory:
  • a magnetic resonance MRI image of a blood vessel wall is obtained, and a deep learning method is used to identify the plaque in the MRI image.
  • the deep learning method is used to identify the plaque in the MRI image, which specifically includes the following steps:
  • the initial image is input to a deep learning neural network to recognize the plaque, and a recognition result is obtained.
  • inputting the initial image into a deep learning neural network to identify the plaque specifically includes the following steps:
  • the residual convolutional neural network includes a convolutional network layer, an activation function network layer and a batch normalization network layer.
  • an adjustment factor is used to process the batch standard data to obtain batch adjustment data having a distribution that is the same as or similar to that of the input batch data for output.
  • the present invention provides a blood vessel wall plaque identification system, the system includes:
  • an acquisition module for acquiring a magnetic resonance MRI image of a blood vessel wall; and,
  • a recognition module for recognizing the plaque in the MRI image by using a deep learning method.
  • the identification module specifically includes:
  • a preprocessing module for preprocessing the MRI image to obtain an initial image
  • the deep learning module is used for inputting the initial image to the deep learning neural network to identify the plaque, and obtain a recognition result.
  • the deep learning module specifically includes:
  • a convolution module used to perform feature extraction processing on the initial image to obtain a convolution feature image
  • a candidate frame module used to determine candidate regions for the convolutional feature image, and correspondingly obtain a fully connected feature map
  • a fully connected module is used to classify based on the fully connected feature map to obtain the recognition result.
  • the present invention also provides a blood vessel wall plaque identification method, the method includes the following steps:
  • a deep learning method is used to identify the plaques in the MRI image.
  • the present invention also provides a computer-readable storage medium that stores a computer program, and when the computer program is executed by a processor, the steps in the foregoing method are implemented.
  • a magnetic resonance MRI image of a blood vessel wall is obtained; a plaque in the MRI image is identified using a deep learning method.
  • the deep learning method is used to identify the blood vessel wall plaque, which can greatly reduce manual effort and improve the accuracy of plaque recognition, thereby improving recognition efficiency while ensuring recognition accuracy.
  • FIG. 1 is a schematic structural diagram of a blood vessel wall plaque identification device according to Embodiment 1 of the present invention.
  • FIG. 2 is a flowchart of a method implemented by a processor in Embodiment 2 of the present invention
  • FIG. 3 is a schematic structural diagram of a deep learning neural network in Embodiment 3 of the present invention.
  • FIG. 4 is a processing flowchart of a deep learning neural network in Embodiment 3 of the present invention.
  • FIG. 5 is a schematic structural diagram of a residual convolutional neural network in Embodiment 4 of the present invention.
  • FIG. 6 is a processing flowchart of a batch normalization network layer in Embodiment 5 of the present invention.
  • FIG. 7 is a schematic structural diagram of a blood vessel wall plaque identification system according to Embodiment 6 of the present invention.
  • FIG. 8 is a schematic structural diagram of an identification module in Embodiment 7 of the present invention.
  • FIG. 9 is a schematic structural diagram of a deep learning module in Embodiment 8 of the present invention.
  • FIG. 10 is a processing flowchart of a method for identifying a plaque on a blood vessel wall according to Embodiment 10 of the present invention.
  • FIG. 11 is a schematic structural diagram of a deep learning neural network according to an application example of the present invention.
  • FIG. 1 shows a blood vessel wall plaque identification device provided in Embodiment 1 of the present invention.
  • the device is mainly used to intelligently identify plaque in a blood vessel wall MRI image using artificial intelligence (Artificial Intelligence, AI) technology.
  • the device may be a separate computer, chip, or may be physically integrated with other devices, for example, integrated with an MRI device, or may be represented as a cloud server.
  • Blood vessel wall plaque can be roughly divided into stable plaque and unstable plaque. Unstable plaque is easy to fall off from the blood vessel wall and cause thrombosis.
  • Unstable plaque has unstable features such as a fibrous cap, hemorrhage, calcification, lipid core and inflammation. When using AI technology to identify blood vessel wall plaque, it is possible not only to identify whether blood vessel wall plaque is present, but also to identify the type of blood vessel wall plaque. For ease of description, only parts related to the embodiments of the present invention are shown, which are described in detail as follows:
  • the blood vessel wall plaque recognition device includes: a memory 101 and a processor 102.
  • when the processor executes the computer program 103 stored in the memory 101, the following steps are implemented: first, an MRI image of a blood vessel wall is obtained, and then a deep learning method is used to identify the plaque in the MRI image.
  • in order to realize the transmission of data and signaling such as images, the device may further include a network module; in order to realize output such as display of the recognition result, the device may also include an output module such as a display screen; and in order to realize manual operation, the device may also include input modules such as a mouse and a keyboard.
  • An MRI image of a blood vessel wall usually refers to a blood vessel wall slice image.
  • any suitable deep learning method can be used to identify the plaque in the MRI image of the blood vessel wall, for example: a region-based convolutional neural network (Regions with Convolutional Neural Network, R-CNN), a fast region-based convolutional neural network (Fast R-CNN), a single shot multibox detector (Single Shot MultiBox Detector, SSD), etc.
  • the deep learning method is used to identify the blood vessel wall plaque, which can greatly reduce manual effort and improve the accuracy of plaque recognition, thereby improving recognition efficiency while ensuring recognition accuracy.
  • using MRI to carry out comprehensive and accurate imaging evaluation of ischemic-stroke-related vascular bed plaques, together with artificial intelligence for rapid and accurate diagnosis, is of great significance for screening high-risk stroke populations and exploring etiology to prevent recurrence.
  • this embodiment further provides the following content:
  • step S201 the above MRI image is preprocessed to obtain an initial image.
  • preprocessing may involve cropping the image to reduce redundant calculations.
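  • As an illustration of this preprocessing step, the following is a minimal Python sketch. The patent only states that preprocessing may crop the image to reduce redundant computation; the center-crop strategy, the intensity normalization and the 224 × 224 target size (taken from the application example below) are assumptions made for the example.

```python
import numpy as np

def preprocess_mri_slice(mri_slice: np.ndarray, out_size: int = 224) -> np.ndarray:
    """Center-crop an MRI vessel-wall slice to a square region of interest (sketch)."""
    h, w = mri_slice.shape[:2]
    side = min(h, w, out_size)
    top = (h - side) // 2
    left = (w - side) // 2
    cropped = mri_slice[top:top + side, left:left + side].astype(np.float32)
    # Normalize intensities to [0, 1] so the network input range is stable.
    rng = cropped.max() - cropped.min()
    return (cropped - cropped.min()) / rng if rng > 0 else cropped
```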
  • step S202 the initial image is input to the deep learning neural network to perform plaque recognition, and a recognition result is obtained.
  • the architecture of the deep learning neural network may adopt an R-CNN architecture, a Fast R-CNN architecture, an accelerated region-based convolutional neural network (Faster R-CNN) architecture, an SSD architecture, a masked region convolutional neural network (Mask R-CNN) architecture, etc.
  • this embodiment further provides the following content:
  • the deep learning neural network specifically includes: a convolution subnetwork 301, a candidate frame subnetwork 302, and a fully connected subnetwork 303.
  • each sub-network processing is roughly as follows, and each sub-network processing corresponds to the detailed flow of the above step S202:
  • the convolution sub-network 301 can perform step S401 shown in FIG. 4 to perform feature extraction processing on the initial image to obtain a convolution feature image.
  • the convolution subnetwork 301 may include a multi-segment convolutional neural network; each segment of the convolutional neural network may use a residual convolutional neural network to alleviate the problems of gradient vanishing and gradient explosion, or may use a non-residual convolutional neural network. Of course, the convolution subnetwork 301 may also use a combination of non-residual and residual convolutional neural networks.
  • the candidate frame sub-network 302 may perform step S402 shown in FIG. 4 to determine candidate regions for the convolutional feature image, and correspondingly obtain a fully connected feature map.
  • the candidate frame sub-network 302 may adopt a sliding window of a predetermined size and, based on the center point of each sliding window, generate a predetermined number of candidate frames of predetermined sizes on the initial image, with the center point of each candidate frame corresponding to the center point of the sliding window.
  • a candidate region corresponding to each candidate frame can thus be obtained, and each candidate region correspondingly generates a candidate region feature map.
  • the candidate region feature maps can also be pooled accordingly to obtain the fully connected feature map.
  • the fully-connected sub-network 303 may perform step S403 shown in FIG. 4, performing classification and other processing based on the fully-connected feature map to obtain a recognition result that indicates whether a blood vessel wall plaque is present.
  • classification, regression and other processing can be performed respectively in the two branches of the fully-connected sub-network 303.
  • correspondingly, the fully-connected sub-network 303 can include a classification network layer and a regression network layer.
  • the classification network layer can be used to determine whether the candidate area is the foreground or the background according to the fully connected feature map, that is, whether there is a blood vessel wall plaque in the candidate area.
  • the regression network layer can be used to correct the coordinates of the candidate frame and finally determine the location of the plaque.
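  • As a minimal sketch of the two branches, the following PyTorch module pairs a foreground/background classification output with a box-coordinate regression output. The hidden size (1024), the shared fully connected layer and the per-anchor output layout are illustrative assumptions; the application example below realizes these heads with a different output shape.

```python
import torch
import torch.nn as nn

class FullyConnectedHeads(nn.Module):
    """Classification and regression branches of the fully connected sub-network (sketch)."""
    def __init__(self, in_features: int, num_anchors: int = 9):
        super().__init__()
        self.fc = nn.Linear(in_features, 1024)
        self.cls = nn.Linear(1024, num_anchors * 2)   # foreground / background per candidate frame
        self.reg = nn.Linear(1024, num_anchors * 4)   # box offsets (x, y, w, h) per candidate frame

    def forward(self, pooled_features: torch.Tensor):
        x = torch.relu(self.fc(pooled_features.flatten(1)))
        return self.cls(x), self.reg(x)
```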
  • the area-based convolutional neural network is used to recognize the plaque of the blood vessel wall, which can improve the accuracy of the recognition and facilitate the application of AI artificial intelligence diagnosis using medical images.
  • this embodiment further provides the following content:
  • the residual convolutional neural network may include multiple network layers as shown in FIG. 5: Convolutional network layer 501, activation function network layer 502, and batch normalization network layer 503. Among them, each network layer processing is roughly as follows:
  • the convolutional network layer 501 can use a preset convolution kernel to perform convolution processing on the input image.
  • the activation function network layer 502 may use a sigmoid (Sigmoid) function, a hyperbolic tangent (Tanh) function, or a rectified linear unit (ReLU) function to perform activation processing.
  • the batch normalization network layer 503 can not only realize the traditional standardization process, but also enable the network to accelerate convergence and further alleviate the problems of gradient disappearance and gradient explosion.
  • this embodiment further provides the following content:
  • processing of the batch normalized network layer 503 may specifically include the steps shown in FIG. 6:
  • step S601 the input batch data processed through the convolutional network layer 501 are averaged.
  • step S602 the variance of the batch data is calculated according to the mean.
  • step S603 the batch data is standardized according to the mean and variance to obtain batch standard data.
  • step S604 the batch standard data is processed using an adjustment factor to obtain batch adjustment data having the same or similar distribution as the input batch data for output.
  • the adjustment factor has a corresponding initial value during initialization; then, based on this initial value, the adjustment factor can be trained together with the other network-layer parameters during back-propagation, so that the adjustment factor learns the distribution of the input batch data. After the input batch data is processed by batch normalization, the distribution of the originally input batch data is thus retained.
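  • The following Python sketch mirrors steps S601 to S604 and formulas (2) to (5) of the application example: mean, variance, standardization, and rescaling with trainable adjustment factors. The names alpha (scaling) and omega (translation) follow the patent's adjustment factors; the epsilon value is an assumption.

```python
import torch

def batch_normalize(x: torch.Tensor, alpha: torch.Tensor, omega: torch.Tensor,
                    eps: float = 1e-5) -> torch.Tensor:
    """Batch normalization with trainable adjustment factors (steps S601-S604)."""
    mean = x.mean(dim=0)                         # S601: mean of the input batch data
    var = ((x - mean) ** 2).mean(dim=0)          # S602: variance from the mean
    x_hat = (x - mean) / torch.sqrt(var + eps)   # S603: batch standard data
    return alpha * x_hat + omega                 # S604: batch adjustment data for output

# Usage: the adjustment factors are learnable parameters initialized near 1 and 0, e.g.
# alpha = torch.nn.Parameter(torch.ones(num_features))
# omega = torch.nn.Parameter(torch.zeros(num_features))
```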
  • FIG. 7 correspondingly shows the blood vessel wall plaque recognition system provided in Embodiment 6 of the present invention.
  • the system is also mainly used to: use AI technology to intelligently recognize the plaque in the blood vessel wall MRI image.
  • the system may be a separate computer or chip, or may take the form of a group of computers or a chipset formed by cascaded chips. For ease of explanation, only parts related to the embodiments of the present invention are shown, and the details are as follows:
  • the blood vessel wall plaque identification system includes:
  • an acquisition module 701 for acquiring a magnetic resonance MRI image of a blood vessel wall; and,
  • the recognition module 702 is used to recognize the plaque in the MRI image by using the deep learning method.
  • this embodiment further provides the following content:
  • the identification module 702 specifically includes the structure shown in FIG. 8:
  • the preprocessing module 801 is used to preprocess the MRI image to obtain the initial image;
  • the deep learning module 802 is used to input the initial image to the deep learning neural network to perform plaque recognition and obtain a recognition result.
  • this embodiment further provides the following content:
  • the deep learning module 802 specifically includes the structure shown in FIG. 9:
  • the convolution module 901 is used to perform feature extraction processing on the initial image to obtain a convolution feature image
  • the candidate frame module 902 is used to determine candidate regions for the convolutional feature image and obtain a fully connected feature map accordingly;
  • the fully connected module 903 is used to classify based on the fully connected feature map and obtain a recognition result.
  • this embodiment further provides the following content:
  • the convolution module 901 may specifically use several residual convolutional neural networks to perform feature extraction processing on the initial image.
  • the residual convolutional neural network may include a convolutional network layer 501, an activation function network layer 502, and a batch normalization network layer 503, as still shown in FIG. 5. The specific processing of each network layer will not be repeated here.
  • FIG. 10 correspondingly shows the blood vessel wall plaque identification method provided in Embodiment 10 of the present invention.
  • the method specifically includes the following steps:
  • step S1001 an MRI image of the blood vessel wall is obtained.
  • step S1002 a deep learning method is used to identify the plaque in the MRI image.
  • each step may be similar to the content described in the corresponding positions in the foregoing embodiments, and will not be repeated here.
  • a computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented, for example, steps S1001 to S1002 shown in FIG. 10. Alternatively, when the computer program is executed by the processor, the functions described in the foregoing system embodiments are realized, for example, the functions of the aforementioned deep learning neural network.
  • the computer-readable storage medium in the embodiments of the present invention may include any entity or device capable of carrying computer program code, and a recording medium, such as ROM / RAM, magnetic disk, optical disk, flash memory, and other memories.
  • the deep learning neural network can be used to identify plaques on blood vessel walls, and may specifically include the architecture shown in FIG. 11:
  • the entire deep learning neural network includes: a convolution subnetwork 301, a candidate box subnetwork 302, and a fully connected subnetwork 303.
  • the convolution subnetwork 301 includes a first-segment convolutional neural network 1101, a pooling layer 1102, a second-segment convolutional neural network 1103, a third-segment convolutional neural network 1104, and a fourth-segment convolutional neural network 1105.
  • the first-segment convolutional neural network 1101 uses a non-residual convolutional neural network, while the second-segment convolutional neural network 1103, the third-segment convolutional neural network 1104 and the fourth-segment convolutional neural network 1105 use residual convolutional neural networks.
  • the residual convolutional neural network includes multiple network layers, still shown in FIG. 5: a convolutional network layer 501, an activation function network layer 502, and a batch normalization network layer 503.
  • the candidate frame sub-network 302 includes a region proposal network (Region Proposal Network, RPN) 1106 and a region pooling network 1107.
  • the fully connected sub-network 303 includes a classification network layer 1108 and a regression network layer 1109.
  • a fifth segment convolutional neural network 1111 is also included between the candidate box subnetwork 302 and the fully connected subnetwork 303.
  • a mask network layer 1110 is also provided after the fifth-segment convolutional neural network 1111.
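  • As a structural sketch only, the following Python wiring shows how the sub-networks listed above could be composed into one forward pass. Each sub-module (backbone, rpn, roi_pool, stage5 and the three heads) is a placeholder standing in for networks 1101 to 1111 described in this application example and would need a real implementation.

```python
import torch.nn as nn

class VesselWallPlaqueNet(nn.Module):
    """End-to-end wiring of the deep learning neural network of FIG. 11 (sketch)."""
    def __init__(self, backbone, rpn, roi_pool, stage5, cls_head, reg_head, mask_head):
        super().__init__()
        self.backbone, self.rpn, self.roi_pool = backbone, rpn, roi_pool
        self.stage5, self.cls_head, self.reg_head, self.mask_head = stage5, cls_head, reg_head, mask_head

    def forward(self, image):
        features = self.backbone(image)              # segments 1101-1105 -> 14 x 14 x 1024
        proposals = self.rpn(features)               # RPN 1106: candidate frames
        pooled = self.roi_pool(features, proposals)  # RoiAlign 1107: 7 x 7 x 1024 per frame
        pooled = self.stage5(pooled)                 # segment 1111 -> 7 x 7 x 2048
        return self.cls_head(pooled), self.reg_head(pooled), self.mask_head(pooled)
```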
  • an initial image of size 224 × 224 is obtained.
  • the MRI image of the blood vessel wall here is usually a slice image.
  • the initial image is input to the first-segment convolutional neural network 1101 for initial feature extraction by convolution calculation.
  • the resulting feature map is processed by the pooling layer 1102 and then output to the second-segment convolutional neural network 1103, the third-segment convolutional neural network 1104 and the fourth-segment convolutional neural network 1105 for further feature extraction.
  • the convolution kernel used by the first-segment convolutional neural network 1101 for the convolution calculation has a size of 7 × 7 and a stride of 2, which can reduce the data size by half.
  • the size of the feature map output by the first-segment convolutional neural network 1101 is 112 × 112; after this feature map is processed by the pooling layer 1102, a feature map of size 56 × 56 is obtained.
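  • The following PyTorch snippet sketches this first segment and the pooling layer, checking that the stated sizes (224 → 112 → 56) come out exactly. The padding values and the 64 output channels are assumptions made so the arithmetic matches; the patent does not state them.

```python
import torch
import torch.nn as nn

# First-segment convolution (7 x 7, stride 2) followed by pooling, as described
# for network 1101 and pooling layer 1102.
stem = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),  # 224 x 224 -> 112 x 112
    nn.MaxPool2d(kernel_size=3, stride=2, padding=1),       # 112 x 112 -> 56 x 56
)
x = torch.randn(1, 1, 224, 224)   # a single-channel MRI slice
print(stem(x).shape)              # torch.Size([1, 64, 56, 56])
```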
  • the convolutional network layer 501 in the residual convolutional neural network used can be calculated using the following formula (1):
  • i, j are the pixel coordinate positions of the input image
  • I is the input image data
  • K is the convolution kernel
  • p, n are the width and height of the convolution kernel
  • S (i, j) is the output convolution data .
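  • The following NumPy sketch evaluates formula (1) directly from the symbol definitions above (I input image, K kernel, p and n the kernel width and height, S the output convolution data). The exact index convention and boundary handling in the original formula image are not visible in this extraction, so the "valid" output size used here is an assumption.

```python
import numpy as np

def conv2d_valid(I: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Direct evaluation of S(i, j) = sum_p sum_n I(i + p, j + n) * K(p, n) (sketch)."""
    kh, kw = K.shape
    out_h, out_w = I.shape[0] - kh + 1, I.shape[1] - kw + 1
    S = np.zeros((out_h, out_w), dtype=np.float32)
    for i in range(out_h):
        for j in range(out_w):
            # Sum of the element-wise product of the image patch and the kernel.
            S[i, j] = np.sum(I[i:i + kh, j:j + kw] * K)
    return S
```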
  • the batch normalized network layer 503 can perform the following calculations:
  • the input batch data β = x_1…m are the output data of the convolutional network layer 501.
  • m is the total number of data items.
  • ∈ is a small positive number used to avoid division by zero.
  • α is the scaling factor and ω is the translation factor.
  • the adjustment factors α and ω have corresponding initial values during initialization.
  • the initial value of α is approximately equal to 1 and the initial value of ω is approximately equal to 0. Then, based on these initial values, the adjustment factors α and ω can be trained together with the other network-layer parameters during back-propagation, so that α and ω learn the distribution of the input batch data; after batch normalization, the distribution of the originally input batch data is thus retained.
  • the activation function network layer 502 can perform the calculation shown in the following formula (6):
  • x is the output data of the batch normalized network layer 503
  • f (x) is the output of the activation function network layer 502.
  • the above three operations of the convolutional network layer 501, the activation function network layer 502, and the batch normalization network layer 503 can form a neural network block.
  • the second-segment convolutional neural network 1103 has 3 neural network blocks: one type of neural network block uses convolution kernels of size 1 × 1 with 64 kernels; another uses convolution kernels of size 3 × 3 with 64 kernels; and a further one uses convolution kernels of size 1 × 1 with 256 kernels.
  • the third-segment convolutional neural network 1104 has 4 neural network blocks: one type of neural network block uses convolution kernels of size 1 × 1 with 128 kernels; another uses convolution kernels of size 3 × 3 with 128 kernels; and a further one uses convolution kernels of size 1 × 1 with 512 kernels.
  • the fourth-segment convolutional neural network 1105 has 23 neural network blocks: one type of neural network block uses convolution kernels of size 1 × 1 with 256 kernels; another uses convolution kernels of size 3 × 3 with 256 kernels; and a further one uses convolution kernels of size 1 × 1 with 1024 kernels.
  • finally, through the first-segment to fourth-segment convolutional neural networks, the output convolution feature image is 14 × 14 × 1024, indicating that the output convolution feature image size is 14 × 14 and the number of convolution kernels is 1024.
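  • As a sketch of one such neural network block, the following PyTorch module chains three convolution / batch normalization / ReLU triples with kernel sizes 1 × 1, 3 × 3 and 1 × 1, using the second-segment kernel counts (64, 64, 256) as defaults. The identity shortcut and its projection are standard residual-network practice and are assumptions not spelled out in the text.

```python
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """One neural network block: three conv/BN/ReLU triples (1x1, 3x3, 1x1) plus a shortcut (sketch)."""
    def __init__(self, in_ch: int, mid_ch: int = 64, out_ch: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # Residual connection; a 1x1 projection matches channel counts when needed.
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))
```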
  • the convolution feature image processed by the convolution sub-network 301 is then input into the RPN 1106 and the regional pooling network 1107 for corresponding processing.
  • RPN 1106 is used to extract candidate regions. Specifically, a sliding window with a predetermined size of 3 × 3 is adopted and, based on the center point of each sliding window, a predetermined number of 9 candidate frames with predetermined sizes are generated on the initial image, with the center point of each candidate frame corresponding to the center point of the sliding window. A candidate region corresponding to each candidate frame can thus be obtained, and each candidate region correspondingly generates a candidate region feature map.
  • since the convolution feature image output through the first-segment to fourth-segment convolutional neural networks is 14 × 14 × 1024, the predetermined size of the sliding window is 3 × 3 and the predetermined number of candidate frames is 9, 256 candidate regions can be obtained accordingly, and correspondingly 256 candidate region feature maps, that is, 256-dimensional fully connected features.
  • some of the candidate frames have the same area, which differs from the areas of the other candidate frames.
  • the areas and aspect ratios of the candidate frames can be obtained according to the settings.
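  • The following sketch generates 9 candidate frames per sliding-window center by combining preset areas and aspect ratios, as described above. The feature-map stride, the specific scales and the ratios are illustrative assumptions; only the count of 9 frames per center comes from the text.

```python
import numpy as np

def generate_candidate_frames(feat_h: int, feat_w: int, stride: int = 16,
                              scales=(64, 128, 256), ratios=(0.5, 1.0, 2.0)) -> np.ndarray:
    """Generate 9 candidate frames per sliding-window center on the initial image (sketch)."""
    frames = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = (x + 0.5) * stride, (y + 0.5) * stride   # window center in image coordinates
            for s in scales:
                for r in ratios:
                    w, h = s * np.sqrt(r), s / np.sqrt(r)     # same area, different aspect ratio
                    frames.append([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
    return np.asarray(frames, dtype=np.float32)               # (feat_h * feat_w * 9, 4)
```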
  • the area pooling network 1107 is used to pool the candidate area feature map into a fixed-size pooling feature map according to the position coordinates of the candidate frame.
  • the regional pooling network 1107 can be RoiAlign network.
  • the candidate frame coordinates are derived from the regression model and are generally floating-point numbers.
  • the RoiAlign network does not quantize the floating-point numbers. For each candidate frame, the candidate region feature map is divided into 7 × 7 units, four coordinate positions are fixed in each unit, the values at the four positions are calculated by bilinear interpolation, and a maximum pooling operation is then performed. For each candidate frame, a pooled feature map of 7 × 7 × 1024 is obtained, and all pooled feature maps constitute the initial fully connected feature map.
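  • The following NumPy sketch illustrates this RoiAlign step for one candidate frame and one feature-map channel: the floating-point box is divided into 7 × 7 units, four fixed positions per unit are sampled by bilinear interpolation, and their maximum is kept. The sample positions at 0.25 and 0.75 of each unit are an assumption; the text only states that four positions are fixed per unit.

```python
import numpy as np

def bilinear_sample(fmap: np.ndarray, y: float, x: float) -> float:
    """Bilinear interpolation of a 2-D feature map at a floating-point position."""
    y0 = min(max(int(np.floor(y)), 0), fmap.shape[0] - 1)
    x0 = min(max(int(np.floor(x)), 0), fmap.shape[1] - 1)
    y1, x1 = min(y0 + 1, fmap.shape[0] - 1), min(x0 + 1, fmap.shape[1] - 1)
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * fmap[y0, x0] + (1 - dy) * dx * fmap[y0, x1]
            + dy * (1 - dx) * fmap[y1, x0] + dy * dx * fmap[y1, x1])

def roi_align_single(fmap: np.ndarray, box, out_size: int = 7) -> np.ndarray:
    """RoiAlign for one candidate frame on one channel; coordinates stay floating point (sketch)."""
    x1, y1, x2, y2 = box
    unit_h, unit_w = (y2 - y1) / out_size, (x2 - x1) / out_size
    out = np.zeros((out_size, out_size), dtype=np.float32)
    for i in range(out_size):
        for j in range(out_size):
            samples = [bilinear_sample(fmap, y1 + (i + dy) * unit_h, x1 + (j + dx) * unit_w)
                       for dy in (0.25, 0.75) for dx in (0.25, 0.75)]  # four fixed positions
            out[i, j] = max(samples)                                   # maximum pooling
    return out
```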
  • after being processed by the fifth-segment convolutional neural network 1111, the initial fully connected feature map yields a final fully connected feature map of 7 × 7 × 2048. The fifth-segment convolutional neural network 1111 has 3 neural network blocks: one type of neural network block uses convolution kernels of size 1 × 1 with 512 kernels; another uses convolution kernels of size 3 × 3 with 512 kernels; and a further one uses convolution kernels of size 1 × 1 with 2048 kernels.
  • the final fully connected feature map processed by the fifth-segment convolutional neural network 1111 enters three branches of the fully connected sub-network 303: the classification network layer 1108, the regression network layer 1109 and the mask network layer 1110.
  • the classification network layer 1108 takes as input the final fully connected feature map processed by the fifth-segment convolutional neural network 1111 and judges whether each candidate region is foreground or background; the output is a 14 × 14 × 18 array, where "18" means that the 9 candidate frames each output two results, foreground or background.
  • the regression network layer 1109 is used to predict the coordinates, height and width of the center anchor point of each candidate frame, and to correct the coordinates of the candidate frame.
  • the output is 14 × 14 × 36, where "36" represents the four endpoint values of the 9 candidate frames.
  • the mask network layer 1110 uses a convolution kernel of size 2 × 2 to upsample the feature map of the candidate region that has been determined to be a calcification and has undergone position correction, to obtain a 14 × 14 × 256 feature map.
  • subsequent convolution processing yields a 14 × 14 × 2 feature map, which is then subjected to mask processing to segment the foreground and background.
  • the number of categories is 2, indicating the presence or absence of breast calcifications.
  • the location of the calcifications can be further obtained.
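  • A minimal sketch of this mask network layer is given below: a 2 × 2 transposed convolution upsamples the 7 × 7 candidate-region features to 14 × 14 × 256, a further convolution reduces them to 2 channels (foreground / background), and a sigmoid produces the mask. The 2048 input channels, the ReLU and the 1 × 1 reduction kernel are assumptions; only the 2 × 2 upsampling kernel and the 14 × 14 × 256 and 14 × 14 × 2 shapes come from the text.

```python
import torch.nn as nn

# Mask network layer 1110 (sketch).
mask_head = nn.Sequential(
    nn.ConvTranspose2d(2048, 256, kernel_size=2, stride=2),  # 7 x 7 -> 14 x 14 x 256
    nn.ReLU(inplace=True),
    nn.Conv2d(256, 2, kernel_size=1),                        # -> 14 x 14 x 2
    nn.Sigmoid(),                                            # foreground / background mask
)
```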
  • the classification network layer loss function used in the fully connected sub-network 303 to optimize the classification is calculated as shown in the following formula (7), and the regression network layer loss function used to optimize the regression when the classification result indicates the presence of a calcified focus is calculated as shown in the following formula (8).
  • b takes the value (ti - ti'), where ti is the predicted coordinate and ti' is the real coordinate.
  • the optimization of the mask processing may involve calculating the cross entropy after processing with the Sigmoid activation function during classification.
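  • The following Python sketch collects the three losses described above. Formula (7), L_cls = -log q, is stated explicitly; the piecewise smooth L1 form for formula (8) is not visible in the extracted text and is the usual Fast R-CNN definition assumed from the statement that b = ti - ti'; the mask loss is the per-pixel cross entropy after a sigmoid.

```python
import numpy as np

def classification_loss(q: float) -> float:
    """Formula (7): L_cls = -log q, where q is the probability of the true class."""
    return -np.log(q)

def smooth_l1(b: np.ndarray) -> np.ndarray:
    """Standard smooth L1 on b = ti - ti' (assumed form of the terms in formula (8))."""
    absb = np.abs(b)
    return np.where(absb < 1.0, 0.5 * b ** 2, absb - 0.5)

def regression_loss(t_pred: np.ndarray, t_true: np.ndarray) -> float:
    """Formula (8): sum of smooth L1 terms over the candidate-frame coordinates."""
    return float(np.sum(smooth_l1(t_pred - t_true)))

def mask_loss(p: np.ndarray, target: np.ndarray) -> float:
    """Per-pixel cross entropy computed after the sigmoid, for the mask branch."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))
```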

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention is applicable to the field of medical technology, and provides a blood vessel wall plaque identification device, system, method and storage medium. A magnetic resonance MRI image of a blood vessel wall is first obtained, and a deep learning method is then used to identify the plaque in the MRI image. Using a deep learning method to identify blood vessel wall plaque in this way can greatly reduce manual effort and improve the accuracy of plaque recognition, thereby improving recognition efficiency while ensuring recognition accuracy. Using MRI to carry out comprehensive and accurate imaging evaluation of ischemic-stroke-related vascular bed plaques, together with artificial intelligence for rapid and accurate diagnosis, is of great significance for screening high-risk stroke populations and exploring etiology to prevent recurrence.

Description

Blood vessel wall plaque identification device, system, method and storage medium
Technical Field
The present invention belongs to the field of medical technology, and in particular relates to a blood vessel wall plaque identification device, system, method and storage medium.
Background Art
Magnetic Resonance Imaging (MRI) is currently the only non-invasive imaging method that can clearly display atherosclerotic plaques throughout the body. Blood vessel wall MRI can not only quantitatively analyze systemic vascular plaques in the intracranial arteries, carotid arteries, aorta and so on, but can also accurately identify unstable features of vulnerable plaques such as the fibrous cap, hemorrhage, calcification, lipid core and inflammation, and is currently recognized as the best plaque imaging method.
With the domestic production of MRI equipment and the popularization of its social application, together with the unique advantages of MRI plaque imaging, using MRI for comprehensive plaque screening of high-risk stroke populations and for exploring the causes of stroke is bound to become an important means of stroke prevention and treatment in China in the future. Moreover, because the data volume of three-dimensional high-resolution MRI vessel wall imaging is huge, the images for each examinee can reach 500 frames, and an experienced specialist needs at least 30 minutes to complete the diagnosis of one examinee; the workload is heavy, the efficiency is low, and the recognition accuracy cannot be effectively guaranteed because of conditions such as physician fatigue.
Summary of the Invention
The purpose of the present invention is to provide a blood vessel wall plaque identification device, system, method and storage medium, aiming to solve the problems in the prior art that manual identification of blood vessel wall plaque is inefficient and that its recognition accuracy cannot be effectively guaranteed.
In one aspect, the present invention provides a blood vessel wall plaque identification device, including a memory and a processor, where the processor implements the following steps when executing the computer program stored in the memory:
obtaining a magnetic resonance MRI image of a blood vessel wall;
identifying the plaque in the MRI image by using a deep learning method.
Further, identifying the plaque in the MRI image by using a deep learning method specifically includes the following steps:
preprocessing the MRI image to obtain an initial image;
inputting the initial image into a deep learning neural network to identify the plaque, and obtaining a recognition result.
Further, inputting the initial image into a deep learning neural network to identify the plaque specifically includes the following steps:
performing feature extraction processing on the initial image to obtain a convolution feature image;
determining candidate regions for the convolution feature image, and correspondingly obtaining a fully connected feature map;
performing classification based on the fully connected feature map to obtain the recognition result.
Further, performing feature extraction processing on the initial image to obtain a convolution feature image is specifically:
performing feature extraction processing on the initial image by using several residual convolutional neural networks,
where the residual convolutional neural network includes a convolutional network layer, an activation function network layer and a batch normalization network layer.
Further, performing feature extraction processing on the initial image by using several residual convolutional neural networks specifically includes the following steps:
calculating the mean of the input batch data through the batch normalization network layer;
calculating the variance of the batch data according to the mean;
standardizing the batch data according to the mean and the variance to obtain batch standard data;
processing the batch standard data by using adjustment factors to obtain batch adjustment data having a distribution that is the same as or similar to that of the input batch data for output.
In another aspect, the present invention provides a blood vessel wall plaque identification system, the system including:
an acquisition module for obtaining a magnetic resonance MRI image of a blood vessel wall; and
a recognition module for identifying the plaque in the MRI image by using a deep learning method.
Further, the recognition module specifically includes:
a preprocessing module for preprocessing the MRI image to obtain an initial image; and
a deep learning module for inputting the initial image into a deep learning neural network to identify the plaque and obtain a recognition result.
Further, the deep learning module specifically includes:
a convolution module for performing feature extraction processing on the initial image to obtain a convolution feature image;
a candidate frame module for determining candidate regions for the convolution feature image and correspondingly obtaining a fully connected feature map; and
a fully connected module for performing classification based on the fully connected feature map to obtain the recognition result.
In another aspect, the present invention also provides a blood vessel wall plaque identification method, the method including the following steps:
obtaining a magnetic resonance MRI image of a blood vessel wall;
identifying the plaque in the MRI image by using a deep learning method.
In another aspect, the present invention also provides a computer-readable storage medium storing a computer program, where the steps in the above method are implemented when the computer program is executed by a processor.
In the present invention, a magnetic resonance MRI image of a blood vessel wall is first obtained, and a deep learning method is then used to identify the plaque in the MRI image. Using a deep learning method to identify blood vessel wall plaque in this way can greatly reduce manual effort and improve the accuracy of plaque recognition, thereby improving recognition efficiency while ensuring recognition accuracy. Using MRI to carry out comprehensive and accurate imaging evaluation of ischemic-stroke-related vascular bed plaques, together with artificial intelligence for rapid and accurate diagnosis, is of great significance for screening high-risk stroke populations and exploring etiology to prevent recurrence.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of the blood vessel wall plaque identification device provided in Embodiment 1 of the present invention;
FIG. 2 is a flowchart of the method implemented by the processor in Embodiment 2 of the present invention;
FIG. 3 is a schematic diagram of the architecture of the deep learning neural network in Embodiment 3 of the present invention;
FIG. 4 is a processing flowchart of the deep learning neural network in Embodiment 3 of the present invention;
FIG. 5 is a schematic diagram of the architecture of the residual convolutional neural network in Embodiment 4 of the present invention;
FIG. 6 is a processing flowchart of the batch normalization network layer in Embodiment 5 of the present invention;
FIG. 7 is a schematic structural diagram of the blood vessel wall plaque identification system provided in Embodiment 6 of the present invention;
FIG. 8 is a schematic structural diagram of the recognition module in Embodiment 7 of the present invention;
FIG. 9 is a schematic structural diagram of the deep learning module in Embodiment 8 of the present invention;
FIG. 10 is a processing flowchart of the blood vessel wall plaque identification method provided in Embodiment 10 of the present invention;
FIG. 11 is a schematic diagram of the architecture of the deep learning neural network in an application example of the present invention.
Detailed Description of the Embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit the present invention.
The specific implementation of the present invention is described in detail below in combination with specific embodiments:
Embodiment 1:
FIG. 1 shows the blood vessel wall plaque identification device provided in Embodiment 1 of the present invention. The device is mainly used to intelligently identify plaque in blood vessel wall MRI images by using artificial intelligence (Artificial Intelligence, AI) technology. The device may be a separate computer or chip, may be physically integrated with other devices, for example integrated with MRI equipment, or may take the form of a cloud server or the like. Blood vessel wall plaque can be roughly divided into stable plaque and unstable plaque; unstable plaque easily falls off the blood vessel wall and causes thrombosis, and has unstable features such as a fibrous cap, hemorrhage, calcification, lipid core and inflammation. When AI technology is used for blood vessel wall plaque identification, it is possible not only to identify whether blood vessel wall plaque is present, but also to identify the type of blood vessel wall plaque. For ease of description, only parts related to the embodiments of the present invention are shown, detailed as follows:
The blood vessel wall plaque identification device includes a memory 101 and a processor 102. When the processor executes the computer program 103 stored in the memory 101, the following steps are implemented: first, a blood vessel wall MRI image is obtained, and then a deep learning method is used to identify the plaque in the MRI image. In this embodiment, in order to realize the transmission of data and signaling such as images, the device may further include a network module; in order to realize output such as display of the recognition result, the device may further include an output module such as a display screen; and in order to realize manual operation, the device may further include input modules such as a mouse and a keyboard. A blood vessel wall MRI image usually refers to a blood vessel wall slice image. In this embodiment, any suitable deep learning method may be used to identify the plaque in the blood vessel wall MRI image, for example, a region-based convolutional neural network (Regions with Convolutional Neural Network, R-CNN), a fast region-based convolutional neural network (Fast R-CNN), a single shot multibox detector (Single Shot MultiBox Detector, SSD), and the like.
By implementing this embodiment, using a deep learning method to identify blood vessel wall plaque can greatly reduce manual effort and improve the accuracy of plaque recognition, thereby improving recognition efficiency while ensuring recognition accuracy. Using MRI to carry out comprehensive and accurate imaging evaluation of ischemic-stroke-related vascular bed plaques, together with artificial intelligence for rapid and accurate diagnosis, is of great significance for screening high-risk stroke populations and exploring etiology to prevent recurrence.
Embodiment 2:
On the basis of Embodiment 1, this embodiment further provides the following content:
In this embodiment, when the processor 102 executes the computer program 103 stored in the memory 101, the steps in the method shown in FIG. 2 are specifically implemented:
In step S201, the above MRI image is preprocessed to obtain an initial image. In this embodiment, the preprocessing may involve cropping the image to reduce redundant computation.
In step S202, the initial image is input into the deep learning neural network to identify the plaque, and a recognition result is obtained. In this embodiment, the architecture of the deep learning neural network may correspondingly adopt an R-CNN architecture, a Fast R-CNN architecture, an accelerated region-based convolutional neural network (Faster R-CNN) architecture, an SSD architecture, a masked region convolutional neural network (Mask R-CNN) architecture, or the like.
Embodiment 3:
On the basis of Embodiment 2, this embodiment further provides the following content:
In this embodiment, as shown in FIG. 3, the deep learning neural network specifically includes a convolution sub-network 301, a candidate frame sub-network 302 and a fully connected sub-network 303. The processing of each sub-network is roughly as follows, and the processing of each sub-network corresponds to a refined flow of the above step S202:
The convolution sub-network 301 may perform step S401 shown in FIG. 4, that is, perform feature extraction processing on the initial image to obtain a convolution feature image. In this embodiment, the convolution sub-network 301 may include a multi-segment convolutional neural network; each segment may use a residual convolutional neural network to alleviate problems such as gradient vanishing and gradient explosion, or may use a non-residual convolutional neural network. Of course, the convolution sub-network 301 may also use a combination of non-residual and residual convolutional neural networks.
The candidate frame sub-network 302 may perform step S402 shown in FIG. 4, that is, determine candidate regions for the convolution feature image and correspondingly obtain a fully connected feature map. In this embodiment, the candidate frame sub-network 302 may adopt a sliding window of a predetermined size and, based on the center point of each sliding window, generate a predetermined number of candidate frames of predetermined sizes on the initial image, with the center point of each candidate frame corresponding to the center point of the sliding window. A candidate region corresponding to each candidate frame can thus be obtained, and each candidate region correspondingly generates a candidate region feature map. The candidate region feature maps may also be subjected to region pooling to obtain the fully connected feature map.
The fully connected sub-network 303 may perform step S403 shown in FIG. 4, that is, perform classification and other processing based on the fully connected feature map to obtain a recognition result, where the recognition result indicates whether a blood vessel wall plaque is present. In this embodiment, classification, regression and other processing may be performed respectively in two branches of the fully connected sub-network 303; correspondingly, the fully connected sub-network 303 may include a classification network layer and a regression network layer. The classification network layer may be used to judge, according to the fully connected feature map, whether a candidate region is foreground or background, that is, whether a blood vessel wall plaque exists in the candidate region; the regression network layer may be used to correct the coordinates of the candidate frame and finally determine the location of the plaque.
By implementing this embodiment, using a region-based convolutional neural network to identify blood vessel wall plaque can improve the recognition accuracy and facilitates the popularization of AI diagnosis based on medical images.
Embodiment 4:
On the basis of Embodiment 3, this embodiment further provides the following content:
In the convolution sub-network 301 and the corresponding step S401, several residual convolutional neural networks may be used to perform feature extraction processing on the initial image, and the residual convolutional neural network may include multiple network layers as shown in FIG. 5: a convolutional network layer 501, an activation function network layer 502 and a batch normalization network layer 503. The processing of each network layer is roughly as follows:
The convolutional network layer 501 may use a preset convolution kernel to perform convolution processing on the input image.
The activation function network layer 502 may use a sigmoid (Sigmoid) function, a hyperbolic tangent (Tanh) function, a rectified linear unit (The Rectified Linear Unit, ReLU) function, or the like to perform activation processing.
The batch normalization network layer 503 can not only realize traditional standardization processing, but also enables the network to converge faster and further alleviates the problems of gradient vanishing and gradient explosion.
Embodiment 5:
On the basis of Embodiment 4, this embodiment further provides the following content:
In this embodiment, the processing of the batch normalization network layer 503 may specifically include the steps shown in FIG. 6:
In step S601, the mean of the input batch data obtained through processing by the convolutional network layer 501 is calculated.
In step S602, the variance of the batch data is calculated according to the mean.
In step S603, the batch data is standardized according to the mean and the variance to obtain batch standard data.
In step S604, adjustment factors are used to process the batch standard data to obtain batch adjustment data having a distribution that is the same as or similar to that of the input batch data for output. In this embodiment, the adjustment factors have corresponding initial values during initialization; then, based on these initial values, the adjustment factors can be trained together with the other network-layer parameters during back-propagation, so that the adjustment factors learn the distribution of the input batch data, and the input batch data, after batch normalization, still retains the distribution of the originally input batch data.
Embodiment 6:
FIG. 7 correspondingly shows the blood vessel wall plaque identification system provided in Embodiment 6 of the present invention. The system is likewise mainly used to intelligently identify the plaque in blood vessel wall MRI images by using AI technology. The system may be a separate computer or chip, or may take the form of a group of computers or a chipset formed by cascaded chips. For ease of description, only parts related to the embodiments of the present invention are shown, detailed as follows:
The blood vessel wall plaque identification system includes:
an acquisition module 701 for obtaining a magnetic resonance MRI image of a blood vessel wall; and
a recognition module 702 for identifying the plaque in the MRI image by using a deep learning method.
The content to be explained for the acquisition module 701 and the recognition module 702 has been similarly described in the other embodiments above and will not be repeated here.
Embodiment 7:
On the basis of Embodiment 6, this embodiment further provides the following content:
In this embodiment, the recognition module 702 specifically includes the structure shown in FIG. 8:
a preprocessing module 801 for preprocessing the MRI image to obtain an initial image; and
a deep learning module 802 for inputting the initial image into the deep learning neural network to identify the plaque and obtain a recognition result.
The content to be explained for the preprocessing module 801 and the deep learning module 802 has been similarly described in the other embodiments above and will not be repeated here.
Embodiment 8:
On the basis of Embodiment 7, this embodiment further provides the following content:
In this embodiment, the deep learning module 802 specifically includes the structure shown in FIG. 9:
a convolution module 901 for performing feature extraction processing on the initial image to obtain a convolution feature image;
a candidate frame module 902 for determining candidate regions for the convolution feature image and correspondingly obtaining a fully connected feature map; and
a fully connected module 903 for performing classification based on the fully connected feature map to obtain a recognition result.
Likewise, the content to be explained for the convolution module 901, the candidate frame module 902 and the fully connected module 903 has been similarly described in the other embodiments above and will not be repeated here.
Embodiment 9:
On the basis of Embodiment 8, this embodiment further provides the following content:
In this embodiment, the convolution module 901 may specifically use several residual convolutional neural networks to perform feature extraction processing on the initial image, where the residual convolutional neural network may include the convolutional network layer 501, the activation function network layer 502 and the batch normalization network layer 503 still shown in FIG. 5. The specific processing of each network layer will not be repeated here.
Embodiment 10:
FIG. 10 correspondingly shows the blood vessel wall plaque identification method provided in Embodiment 10 of the present invention, which specifically includes the following steps:
In step S1001, a blood vessel wall MRI image is obtained.
In step S1002, a deep learning method is used to identify the plaque in the MRI image.
The content of each step may be similar to what has been described at the corresponding positions in the above embodiments and will not be repeated here.
Embodiment 11:
In an embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program; when the computer program is executed by a processor, the steps in the above method embodiments are implemented, for example, steps S1001 to S1002 shown in FIG. 10. Alternatively, when the computer program is executed by a processor, the functions described in the above system embodiments are realized, for example, the functions of the above deep learning neural network.
The computer-readable storage medium of the embodiments of the present invention may include any entity or apparatus capable of carrying computer program code, and a recording medium, such as ROM/RAM, a magnetic disk, an optical disc, a flash memory or other memories.
A specific description of the deep learning neural network involved in the above embodiments is given below through an application example.
The deep learning neural network can be used to identify blood vessel wall plaque and may specifically include the architecture shown in FIG. 11:
The entire deep learning neural network includes a convolution sub-network 301, a candidate frame sub-network 302 and a fully connected sub-network 303.
The convolution sub-network 301 includes a first-segment convolutional neural network 1101, a pooling layer 1102, a second-segment convolutional neural network 1103, a third-segment convolutional neural network 1104 and a fourth-segment convolutional neural network 1105. The first-segment convolutional neural network 1101 uses a non-residual convolutional neural network, while the second-segment convolutional neural network 1103, the third-segment convolutional neural network 1104 and the fourth-segment convolutional neural network 1105 use residual convolutional neural networks. The residual convolutional neural network includes multiple network layers, still as shown in FIG. 5: a convolutional network layer 501, an activation function network layer 502 and a batch normalization network layer 503.
The candidate frame sub-network 302 includes a region proposal network (Region Proposal Network, RPN) 1106 and a region pooling network 1107.
The fully connected sub-network 303 includes a classification network layer 1108 and a regression network layer 1109.
A fifth-segment convolutional neural network 1111 is further included between the candidate frame sub-network 302 and the fully connected sub-network 303.
A mask network layer 1110 is further provided after the fifth-segment convolutional neural network 1111.
The processing flow of the above deep learning neural network is roughly as follows:
1. After the blood vessel wall MRI image obtained from the projection images is preprocessed, for example by cropping, an initial image with a size of 224 × 224 is obtained. The blood vessel wall MRI image referred to here is usually a slice image.
2. The initial image is input into the first-segment convolutional neural network 1101 for initial feature extraction by convolution calculation; the resulting feature map is processed by the pooling layer 1102 and then output to the second-segment convolutional neural network 1103, the third-segment convolutional neural network 1104 and the fourth-segment convolutional neural network 1105 for further feature extraction. The convolution kernel used by the first-segment convolutional neural network 1101 for the convolution calculation has a size of 7 × 7 and a stride of 2, which can halve the data size; the size of the feature map output by the first-segment convolutional neural network 1101 is 112 × 112. After the feature map output by the first-segment convolutional neural network 1101 is processed by the pooling layer 1102, a feature map with a size of 56 × 56 is obtained.
The convolutional network layer 501 in the residual convolutional neural network used here can be calculated by the following formula (1):

S(i, j) = Σ_p Σ_n I(i + p, j + n) K(p, n)    ……formula (1)

where i, j are the pixel coordinate positions of the input image, I is the input image data, K is the convolution kernel, p, n are the width and height of the convolution kernel, and S(i, j) is the output convolution data.
The batch normalization network layer 503 can perform the following calculations:
First, the mean μ_β of the input batch data is calculated by the following formula (2). The input batch data β = x_1…m are the output data of the convolutional network layer 501.

μ_β = (1/m) Σ_(i=1..m) x_i    ……formula (2)

where m is the total number of data items.
Next, the variance σ_β² of the batch data is calculated from the mean by the following formula (3):

σ_β² = (1/m) Σ_(i=1..m) (x_i - μ_β)²    ……formula (3)

Then, the batch data are standardized according to the mean and the variance by the following formula (4) to obtain the batch standard data x̂_i:

x̂_i = (x_i - μ_β) / √(σ_β² + ∈)    ……formula (4)

where ∈ is a small positive number used to avoid division by zero.
Next, by the following formula (5), the adjustment factors α and ω are used to process the batch standard data to obtain batch adjustment data y_i having a distribution that is the same as or similar to that of the input batch data for output; the output can serve as the input of the next activation function network layer 502.

y_i = α·x̂_i + ω    ……formula (5)

where α is the scaling factor and ω is the translation factor. The adjustment factors α and ω have corresponding initial values during initialization; in this application example, the initial value of α is approximately equal to 1 and the initial value of ω is approximately equal to 0. Then, based on these initial values, the adjustment factors α and ω can be trained together with the other network-layer parameters during back-propagation, so that α and ω learn the distribution of the input batch data; after batch normalization, the input batch data still retain the distribution of the originally input batch data.
The activation function network layer 502 can perform the calculation shown in the following formula (6), here given in its rectified linear unit form:

f(x) = max(0, x)    ……formula (6)

where x is the output data of the batch normalization network layer 503 and f(x) is the output of the activation function network layer 502.
The three operations of the above convolutional network layer 501, activation function network layer 502 and batch normalization network layer 503 can form one neural network block. The second-segment convolutional neural network 1103 has 3 neural network blocks: one type of neural network block uses convolution kernels of size 1 × 1 with 64 kernels; another uses convolution kernels of size 3 × 3 with 64 kernels; and a further one uses convolution kernels of size 1 × 1 with 256 kernels. The third-segment convolutional neural network 1104 has 4 neural network blocks: one type uses convolution kernels of size 1 × 1 with 128 kernels; another uses convolution kernels of size 3 × 3 with 128 kernels; and a further one uses convolution kernels of size 1 × 1 with 512 kernels. The fourth-segment convolutional neural network 1105 has 23 neural network blocks: one type uses convolution kernels of size 1 × 1 with 256 kernels; another uses convolution kernels of size 3 × 3 with 256 kernels; and a further one uses convolution kernels of size 1 × 1 with 1024 kernels. Finally, through the first-segment to fourth-segment convolutional neural networks, the output convolution feature image is 14 × 14 × 1024, indicating that the output convolution feature image size is 14 × 14 and the number of convolution kernels is 1024.
3. The convolution feature image obtained through processing by the convolution sub-network 301 is then input into the RPN 1106 and the region pooling network 1107 for corresponding processing.
The RPN 1106 is used to extract candidate regions. Specifically, a sliding window with a predetermined size of 3 × 3 is adopted and, based on the center point of each sliding window, a predetermined number of 9 candidate frames with predetermined sizes are generated on the initial image, with the center point of each candidate frame corresponding to the center point of the sliding window. A candidate region corresponding to each candidate frame can thus be obtained, and each candidate region correspondingly generates a candidate region feature map. Since the convolution feature image output through the first-segment to fourth-segment convolutional neural networks is 14 × 14 × 1024, the predetermined size of the sliding window is 3 × 3 and the predetermined number of candidate frames is 9, 256 candidate regions can be obtained accordingly, and correspondingly 256 candidate region feature maps, that is, 256-dimensional fully connected features. Some of the candidate frames have the same area, which differs from the areas of the other candidate frames; the areas and aspect ratios of the candidate frames can be obtained according to the settings.
The region pooling network 1107 is used to pool the candidate region feature maps into pooled feature maps of fixed size according to the position coordinates of the candidate frames. The region pooling network 1107 may be an RoiAlign network. The candidate frame coordinates are derived from the regression model and are generally floating-point numbers; the RoiAlign network does not quantize the floating-point numbers. For each candidate frame, the candidate region feature map is divided into 7 × 7 units, four coordinate positions are fixed in each unit, the values at the four positions are calculated by bilinear interpolation, and a maximum pooling operation is then performed. For each candidate frame, a pooled feature map of 7 × 7 × 1024 is obtained, and all the pooled feature maps constitute the initial fully connected feature map.
4. After the initial fully connected feature map is processed by the fifth-segment convolutional neural network 1111, a corresponding final fully connected feature map of 7 × 7 × 2048 is output. The fifth-segment convolutional neural network 1111 has 3 neural network blocks: one type uses convolution kernels of size 1 × 1 with 512 kernels; another uses convolution kernels of size 3 × 3 with 512 kernels; and a further one uses convolution kernels of size 1 × 1 with 2048 kernels.
The final fully connected feature map processed by the fifth-segment convolutional neural network 1111 enters three branches of the fully connected sub-network 303: the classification network layer 1108, the regression network layer 1109 and the mask network layer 1110. The classification network layer 1108 takes as input the final fully connected feature map processed by the fifth-segment convolutional neural network 1111 and uses it to judge whether a candidate region is foreground or background; the output is a 14 × 14 × 18 array, where "18" means that the 9 candidate frames each output two results, foreground or background. The regression network layer 1109 is used to predict the coordinates, height and width of the center anchor point of each candidate frame and to correct the coordinates of the candidate frame; the output is 14 × 14 × 36, where "36" represents the four endpoint values of the 9 candidate frames. The mask network layer 1110 uses a convolution kernel of size 2 × 2 to upsample the feature map of the candidate region that has been determined to contain a calcified focus and has undergone position correction, obtaining a 14 × 14 × 256 feature map; subsequent convolution processing on this feature map yields a 14 × 14 × 2 feature map, which is then subjected to mask processing to segment the foreground and background. In this application example, the number of categories is 2, indicating the presence or absence of breast calcified foci; in addition, the location of the calcified foci can be further obtained.
The classification network layer loss function used in the fully connected sub-network 303 to optimize the classification is calculated as shown in the following formula (7), and the regression network layer loss function used to optimize the regression when the classification result indicates the presence of a calcified focus is calculated as shown in the following formula (8).

L_cls = -log q    ……formula (7)

where q is the probability of the true class.
L_reg = Σ_i smoothL1(t_i - t_i′), where smoothL1(b) = 0.5b² if |b| < 1, and |b| - 0.5 otherwise    ……formula (8)

where b takes the value (ti - ti′), ti is the predicted coordinate and ti′ is the real coordinate.
The optimization of the mask processing may involve: during the classification processing, the cross entropy is calculated after processing by the Sigmoid activation function.
The above are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (10)

  1. A blood vessel wall plaque identification device, characterized by comprising a memory and a processor, wherein the processor implements the following steps when executing the computer program stored in the memory:
    obtaining a magnetic resonance MRI image of a blood vessel wall;
    identifying the plaque in the MRI image by using a deep learning method.
  2. The device according to claim 1, characterized in that identifying the plaque in the MRI image by using a deep learning method specifically comprises the following steps:
    preprocessing the MRI image to obtain an initial image;
    inputting the initial image into a deep learning neural network to identify the plaque, and obtaining a recognition result.
  3. The device according to claim 2, characterized in that inputting the initial image into a deep learning neural network to identify the plaque specifically comprises the following steps:
    performing feature extraction processing on the initial image to obtain a convolution feature image;
    determining candidate regions for the convolution feature image, and correspondingly obtaining a fully connected feature map;
    performing classification based on the fully connected feature map to obtain the recognition result.
  4. The device according to claim 3, characterized in that performing feature extraction processing on the initial image to obtain a convolution feature image is specifically:
    performing feature extraction processing on the initial image by using several residual convolutional neural networks,
    wherein the residual convolutional neural network comprises a convolutional network layer, an activation function network layer and a batch normalization network layer.
  5. The device according to claim 4, characterized in that performing feature extraction processing on the initial image by using several residual convolutional neural networks specifically comprises the following steps:
    calculating the mean of the input batch data through the batch normalization network layer;
    calculating the variance of the batch data according to the mean;
    standardizing the batch data according to the mean and the variance to obtain batch standard data;
    processing the batch standard data by using adjustment factors to obtain batch adjustment data having a distribution that is the same as or similar to that of the input batch data for output.
  6. A blood vessel wall plaque identification system, characterized in that the system comprises:
    an acquisition module for obtaining a magnetic resonance MRI image of a blood vessel wall; and
    a recognition module for identifying the plaque in the MRI image by using a deep learning method.
  7. The system according to claim 6, characterized in that the recognition module specifically comprises:
    a preprocessing module for preprocessing the MRI image to obtain an initial image; and
    a deep learning module for inputting the initial image into a deep learning neural network to identify the plaque and obtain a recognition result.
  8. The system according to claim 7, characterized in that the deep learning module specifically comprises:
    a convolution module for performing feature extraction processing on the initial image to obtain a convolution feature image;
    a candidate frame module for determining candidate regions for the convolution feature image and correspondingly obtaining a fully connected feature map; and
    a fully connected module for performing classification based on the fully connected feature map to obtain the recognition result.
  9. A blood vessel wall plaque identification method, characterized in that the method comprises the following steps:
    obtaining a magnetic resonance MRI image of a blood vessel wall;
    identifying the plaque in the MRI image by using a deep learning method.
  10. A computer-readable storage medium storing a computer program, characterized in that the steps in the method according to claim 9 are implemented when the computer program is executed by a processor.
PCT/CN2019/078488 2018-10-29 2019-03-18 血管壁斑块识别设备、系统、方法及存储介质 WO2020087838A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811269040.0 2018-10-29
CN201811269040.0A CN109584209B (zh) 2018-10-29 2018-10-29 血管壁斑块识别设备、系统、方法及存储介质

Publications (1)

Publication Number Publication Date
WO2020087838A1 true WO2020087838A1 (zh) 2020-05-07

Family

ID=65921085

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/078488 WO2020087838A1 (zh) 2018-10-29 2019-03-18 血管壁斑块识别设备、系统、方法及存储介质

Country Status (2)

Country Link
CN (1) CN109584209B (zh)
WO (1) WO2020087838A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862049A (zh) * 2020-07-22 2020-10-30 齐鲁工业大学 基于深度学习的脑胶质瘤分割网络系统及分割方法
CN111951268A (zh) * 2020-08-11 2020-11-17 长沙大端信息科技有限公司 颅脑超声图像并行分割方法及装置
CN112216391A (zh) * 2020-10-22 2021-01-12 深圳市第二人民医院(深圳市转化医学研究院) 基于颈动脉粥样硬化情况评估脑卒中发病风险方法及装置
CN113870176A (zh) * 2021-07-29 2021-12-31 金科智融科技(珠海)有限公司 一种基于非限定环境下所拍摄照片生成证件照的方法
CN114469174A (zh) * 2021-12-17 2022-05-13 上海深至信息科技有限公司 一种基于超声扫查视频的动脉斑块识别方法及系统
CN116012367A (zh) * 2023-02-14 2023-04-25 山东省人工智能研究院 一种基于深度学习的胃部胃黏膜特征及位置识别方法

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859201B (zh) * 2019-02-15 2021-04-16 数坤(北京)网络科技有限公司 一种非钙化斑块检出方法及其设备
CN110215232A (zh) * 2019-04-30 2019-09-10 南方医科大学南方医院 基于目标检测算法的冠状动脉血管内超声斑块分析方法
CN110852987B (zh) * 2019-09-24 2022-04-22 西安交通大学 基于深形态学的血管斑块检测方法、设备及存储介质
CN110738643B (zh) * 2019-10-08 2023-07-28 上海联影智能医疗科技有限公司 脑出血的分析方法、计算机设备和存储介质
CN111178369B (zh) * 2019-12-11 2023-12-19 中国科学院苏州生物医学工程技术研究所 一种医学影像的识别方法及系统、电子设备、存储介质
CN113096126B (zh) * 2021-06-03 2021-09-24 四川九通智路科技有限公司 基于图像识别深度学习的道路病害检测系统及方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160174902A1 (en) * 2013-10-17 2016-06-23 Siemens Aktiengesellschaft Method and System for Anatomical Object Detection Using Marginal Space Deep Neural Networks
CN107067396A (zh) * 2017-04-26 2017-08-18 中国人民解放军总医院 一种基于自编码器的核磁共振图像处理装置与方法
CN107818821A (zh) * 2016-09-09 2018-03-20 西门子保健有限责任公司 在医学成像中的基于机器学习的组织定征
CN108542390A (zh) * 2018-03-07 2018-09-18 清华大学 基于多对比度磁共振影像的血管斑块成分识别方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10354159B2 (en) * 2016-09-06 2019-07-16 Carnegie Mellon University Methods and software for detecting objects in an image using a contextual multiscale fast region-based convolutional neural network
CN107507148B (zh) * 2017-08-30 2018-12-18 南方医科大学 基于卷积神经网络去除磁共振图像降采样伪影的方法
CN108710829A (zh) * 2018-04-19 2018-10-26 北京红云智胜科技有限公司 一种基于深度学习的表情分类及微表情检测的方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160174902A1 (en) * 2013-10-17 2016-06-23 Siemens Aktiengesellschaft Method and System for Anatomical Object Detection Using Marginal Space Deep Neural Networks
CN107818821A (zh) * 2016-09-09 2018-03-20 西门子保健有限责任公司 在医学成像中的基于机器学习的组织定征
CN107067396A (zh) * 2017-04-26 2017-08-18 中国人民解放军总医院 一种基于自编码器的核磁共振图像处理装置与方法
CN108542390A (zh) * 2018-03-07 2018-09-18 清华大学 基于多对比度磁共振影像的血管斑块成分识别方法

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862049A (zh) * 2020-07-22 2020-10-30 齐鲁工业大学 基于深度学习的脑胶质瘤分割网络系统及分割方法
CN111862049B (zh) * 2020-07-22 2024-03-29 齐鲁工业大学 基于深度学习的脑胶质瘤分割网络系统及分割方法
CN111951268A (zh) * 2020-08-11 2020-11-17 长沙大端信息科技有限公司 颅脑超声图像并行分割方法及装置
CN112216391A (zh) * 2020-10-22 2021-01-12 深圳市第二人民医院(深圳市转化医学研究院) 基于颈动脉粥样硬化情况评估脑卒中发病风险方法及装置
CN112216391B (zh) * 2020-10-22 2024-05-10 深圳市第二人民医院(深圳市转化医学研究院) 基于颈动脉粥样硬化情况评估脑卒中发病风险方法及装置
CN113870176A (zh) * 2021-07-29 2021-12-31 金科智融科技(珠海)有限公司 一种基于非限定环境下所拍摄照片生成证件照的方法
CN114469174A (zh) * 2021-12-17 2022-05-13 上海深至信息科技有限公司 一种基于超声扫查视频的动脉斑块识别方法及系统
CN116012367A (zh) * 2023-02-14 2023-04-25 山东省人工智能研究院 一种基于深度学习的胃部胃黏膜特征及位置识别方法
CN116012367B (zh) * 2023-02-14 2023-09-12 山东省人工智能研究院 一种基于深度学习的胃部胃黏膜特征及位置识别方法

Also Published As

Publication number Publication date
CN109584209B (zh) 2023-04-28
CN109584209A (zh) 2019-04-05

Similar Documents

Publication Publication Date Title
WO2020087838A1 (zh) 血管壁斑块识别设备、系统、方法及存储介质
US11798132B2 (en) Image inpainting method and apparatus, computer device, and storage medium
CN108898160B (zh) 基于cnn和影像组学特征融合的乳腺癌组织病理学分级方法
US20210390706A1 (en) Detection model training method and apparatus, computer device and storage medium
WO2018108129A1 (zh) 用于识别物体类别的方法及装置、电子设备
CN107610087B (zh) 一种基于深度学习的舌苔自动分割方法
WO2018120942A1 (zh) 一种多模型融合自动检测医学图像中病变的系统及方法
WO2022012110A1 (zh) 胚胎光镜图像中细胞的识别方法及系统、设备及存储介质
Xu et al. Look, investigate, and classify: a deep hybrid attention method for breast cancer classification
CN113344849A (zh) 一种基于YOLOv5的微乳头检测系统
CN110889446A (zh) 人脸图像识别模型训练及人脸图像识别方法和装置
WO2021136368A1 (zh) 钼靶图像中胸大肌区域自动检测方法及装置
Liu et al. A fully automatic segmentation algorithm for CT lung images based on random forest
Yao et al. Pneumonia detection using an improved algorithm based on faster r-cnn
US11922625B2 (en) Predicting overall survival in early stage lung cancer with feature driven local cell graphs (FeDeG)
CN111798424B (zh) 一种基于医学图像的结节检测方法、装置及电子设备
CN111462060A (zh) 胎儿超声图像中标准切面图像的检测方法和装置
WO2024021461A1 (zh) 缺陷检测方法及装置、设备、存储介质
CN116091490A (zh) 一种基于YOLOv4-CA-CBAM-K-means++-SIOU的肺结节检测方法
Hu et al. A multi-instance networks with multiple views for classification of mammograms
Wen et al. Pulmonary nodule detection based on convolutional block attention module
CN111127400A (zh) 一种乳腺病变检测方法和装置
Joshi et al. Graph deep network for optic disc and optic cup segmentation for glaucoma disease using retinal imaging
CN116563647B (zh) 年龄相关性黄斑病变图像分类方法及装置
CN112669319A (zh) 一种多视角多尺度淋巴结假阳性抑制建模方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19878307

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19878307

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 03.11.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19878307

Country of ref document: EP

Kind code of ref document: A1