WO2020215985A1 - Medical image segmentation method, apparatus, electronic device and storage medium - Google Patents

Medical image segmentation method, apparatus, electronic device and storage medium

Info

Publication number
WO2020215985A1
WO2020215985A1 · PCT/CN2020/081660
Authority
WO
WIPO (PCT)
Prior art keywords
slice
feature information
level feature
pair
segmentation
Prior art date
Application number
PCT/CN2020/081660
Other languages
English (en)
French (fr)
Inventor
曹世磊
王仁振
马锴
郑冶枫
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司
Priority to EP20793969.5A (EP3961484A4)
Priority to KR1020217020738A (KR102607800B1)
Priority to JP2021541593A (JP7180004B2)
Publication of WO2020215985A1
Priority to US17/388,249 (US11887311B2)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/174Segmentation; Edge detection involving the use of two or more images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/194Segmentation; Edge detection involving foreground-background segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/809Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
    • G06V10/811Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data the classifiers operating on different input data, e.g. multi-modal recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/64Three-dimensional objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10088Magnetic resonance imaging [MRI]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20112Image segmentation details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30056Liver; Hepatic
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images
    • G06V2201/031Recognition of patterns in medical or anatomical images of internal organs

Definitions

  • This application relates to the field of artificial intelligence (AI) technology, and specifically to a medical image processing technology.
  • In the related art, a two-dimensional (2D) convolutional neural network can be pre-trained to segment the liver image slice by slice: the three-dimensional (3D) liver image to be segmented, such as an electronic computed tomography (CT) image, is cut into slices, and the slices are separately imported into the trained 2D convolutional neural network for segmentation to obtain segmentation results, such as the liver area, and so on.
  • In view of this, the embodiments of the present application provide a medical image segmentation method, apparatus, electronic device, and storage medium, which can improve the accuracy of segmentation.
  • An embodiment of the application provides a medical image segmentation method, which is executed by an electronic device, and the method includes: acquiring a slice pair, the slice pair including two slices sampled from a medical image to be segmented; using different receptive fields to perform feature extraction on each slice in the slice pair to obtain high-level feature information and low-level feature information of each slice; for each slice in the slice pair, segmenting the target object in the slice according to the low-level feature information and high-level feature information of the slice to obtain an initial segmentation result of the slice; fusing the low-level feature information and high-level feature information of each slice in the slice pair, and determining association information between the slices in the slice pair according to the fused feature information; and generating a segmentation result of the slice pair based on the association information and the initial segmentation results.
  • an embodiment of the present application also provides a medical image segmentation device, including:
  • an extraction unit configured to use different receptive fields to perform feature extraction on each slice in the slice pair to obtain high-level feature information and low-level feature information of each slice in the slice pair;
  • a segmentation unit configured to, for each slice in the slice pair, segment the target object in the slice according to the low-level feature information and the high-level feature information of the slice to obtain an initial segmentation result of the slice;
  • a fusion unit configured to fuse the low-level feature information and the high-level feature information of each slice in the slice pair;
  • a determining unit configured to determine the association information between the slices in the slice pair according to the fused feature information; and
  • a generating unit configured to generate a segmentation result of the slice pair based on the association information and the initial segmentation result of each slice in the slice pair.
  • Correspondingly, this application also provides an electronic device, including a memory and a processor; the memory stores an application program, and the processor is configured to run the application program in the memory to perform the operations in any of the medical image segmentation methods provided in the embodiments of this application.
  • In addition, an embodiment of the present application also provides a storage medium that stores multiple instructions, and the instructions are suitable for being loaded by a processor to execute the steps in any of the medical image segmentation methods provided in the embodiments of the present application.
  • The embodiments of the present application also provide a computer program product, including instructions, which, when run on a computer, cause the computer to execute the steps in any of the medical image segmentation methods provided in the embodiments of the present application.
  • The embodiments of the application can use different receptive fields to perform feature extraction on each slice of a slice pair to obtain the high-level feature information and low-level feature information of each slice. On the one hand, for each slice, the target object in the slice is segmented according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice. On the other hand, the low-level feature information and high-level feature information of each slice in the slice pair are fused, the association information between the slices is determined according to the fused feature information, and the segmentation result of the slice pair is generated based on the association information and the initial segmentation results.
  • In other words, the method provided in the embodiments of the present application segments two slices (a slice pair) simultaneously and uses the correlation between the slices to further adjust the segmentation results; therefore, the shape information of the target object (such as the liver) can be captured more accurately, and the segmentation accuracy is higher.
  • FIG. 1 is a schematic diagram of a scene of a medical image segmentation method provided by an embodiment of the present application
  • FIG. 2 is a flowchart of a medical image segmentation method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of the receptive field in the medical image segmentation method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of the structure of the residual network in the image segmentation model provided by an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an image segmentation model provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of signal components in a medical image segmentation method provided by an embodiment of the present application.
  • FIG. 7 is a schematic structural diagram of a channel attention module in an image segmentation model provided by an embodiment of the present application.
  • FIG. 8 is another schematic structural diagram of an image segmentation model provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of the association relationship in the medical image segmentation method provided by the embodiment of the present application.
  • FIG. 10 is another schematic diagram of the association relationship in the medical image segmentation method provided by the embodiment of the present application.
  • FIG. 11 is another flowchart of a medical image segmentation method provided by an embodiment of the present application.
  • FIG. 12 is an exemplary diagram of overlapping squares in a medical image segmentation method provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a medical image segmentation device provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of another structure of the medical image segmentation device provided by an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Artificial intelligence is a theory, method, technology, and application system that uses digital computers or machines controlled by digital computers to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results.
  • In other words, artificial intelligence is a comprehensive technology of computer science, which attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can react in a way similar to human intelligence.
  • Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that machines have the functions of perception, reasoning, and decision-making.
  • Artificial intelligence technology is a comprehensive discipline, covering a wide range of fields, including both hardware-level technology and software-level technology.
  • Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
  • Computer vision is a science that studies how to make machines "see". More specifically, it refers to using cameras and computers, instead of human eyes, to identify, track, and measure targets, and to further perform graphics processing so that the result is more suitable for human eyes to observe or for transmission to instruments for detection.
  • Computer vision technology usually includes image segmentation, image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, three-dimensional object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, and also includes common biometric recognition technologies such as facial recognition and fingerprint recognition.
  • Machine learning is a multi-field interdisciplinary subject, involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other subjects. It specializes in studying how computers simulate or realize human learning behaviors in order to acquire new knowledge or skills, and reorganize existing knowledge structures to continuously improve their performance.
  • Machine learning is the core of artificial intelligence, the fundamental way to make computers intelligent, and its applications cover all fields of artificial intelligence.
  • Machine learning and deep learning usually include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and teaching learning.
  • The medical image segmentation method provided by the embodiments of the present application involves the computer vision technology and machine learning technology of artificial intelligence, which are specifically described by the following embodiments.
  • the embodiments of the present application provide a medical image segmentation method, device, electronic equipment, and storage medium.
  • the medical image segmentation device can be integrated in an electronic device, and the electronic device can be a server or a terminal or other equipment.
  • The so-called image segmentation refers to the technology and process of dividing an image into a number of specific areas with unique properties and extracting objects of interest. In the embodiments of the present application, it mainly refers to segmenting a three-dimensional medical image and finding the required target object.
  • For example, the 3D medical image is divided into multiple single-frame slices (referred to as slices) along the z-axis, and the liver area, etc., is segmented in each slice; after the segmentation results of all the slices of the 3D medical image are obtained, these segmentation results are combined along the z-axis to obtain the 3D segmentation result corresponding to the 3D medical image, that is, the 3D shape of the target object such as the liver area.
  • the segmented target object can subsequently be analyzed by medical staff or other medical experts for further operations.
  • For example, the electronic device can acquire a slice pair (the slice pair includes two slices sampled from the medical image to be segmented) and use different receptive fields to perform feature extraction on each slice in the slice pair to obtain the high-level feature information and low-level feature information of each slice. Then, on the one hand, for each slice in the slice pair, the target object in the slice is segmented according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice; on the other hand, the low-level feature information and high-level feature information of each slice in the slice pair are fused, and the association information between the slices in the slice pair is determined according to the fused feature information.
  • Finally, the segmentation result of the slice pair is generated based on the association information and the initial segmentation result of each slice in the slice pair.
  • the medical image segmentation device may be specifically integrated in an electronic device.
  • the electronic device may be a server or a terminal.
  • The terminal may include a tablet computer, a notebook computer, a personal computer (PC), a medical image acquisition device, or other electronic medical equipment, etc.
  • A medical image segmentation method includes: acquiring a slice pair, the slice pair including two slices sampled from a medical image to be segmented; using different receptive fields to perform feature extraction on each slice in the slice pair to obtain high-level feature information and low-level feature information of each slice in the slice pair; for each slice in the slice pair, segmenting the target object in the slice according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice; fusing the low-level feature information and high-level feature information of each slice in the slice pair, and determining the association information between the slices in the slice pair based on the fused feature information; and generating the segmentation result of the slice pair based on the association information and the initial segmentation results of each slice in the slice pair.
  • the specific process of the medical image segmentation method can be as follows:
  • a medical image to be segmented can be acquired, and two slices can be sampled from the medical image to be segmented.
  • the set of these two slices is called a slice pair.
  • The medical image to be segmented can be provided to the medical image segmentation device after a medical image acquisition device performs image acquisition on biological tissue (such as the heart or liver).
  • The medical image acquisition device may include electronic equipment such as a magnetic resonance imaging (MRI) device, a computed tomography device, a colposcope, or an endoscope.
  • The receptive field determines the area size of the input layer that corresponds to an element in the output of a certain layer. That is to say, the receptive field is the size of the mapping, on the input image, of an element point in the output (i.e., the feature map, also called feature information) of a certain layer of the convolutional neural network; for example, see Figure 3.
  • The receptive field size of an output feature element of the first convolutional layer (such as C1) is equal to the convolution kernel size (filter size), while the receptive field size of a higher convolutional layer (such as C4) is related to the convolution kernel sizes and strides of all the layers before it. Therefore, different levels of information can be captured with different receptive fields, achieving the purpose of extracting feature information at different scales; that is, after feature extraction is performed on a slice with different receptive fields, high-level feature information at multiple scales and low-level feature information at multiple scales of the slice can be obtained.
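  To make this growth concrete, here is a minimal sketch (the layer configuration is purely illustrative, not taken from this application) that computes the receptive field of each layer in a stack of convolutions with the standard recursion r_out = r_in + (k - 1) * j, where j is the cumulative stride:

```python
# Receptive-field growth in a stack of convolutions.
# Illustrative (name, kernel, stride) configuration, not from this application.
layers = [("C1", 3, 1), ("C2", 3, 2), ("C3", 3, 2), ("C4", 3, 2)]

r, j = 1, 1  # r: receptive field so far, j: cumulative stride ("jump")
for name, k, s in layers:
    r = r + (k - 1) * j  # each layer widens the field by (k - 1) * jump
    j = j * s            # strides multiply the jump for later layers
    print(f"{name}: receptive field = {r}x{r}")
# C1: 3x3 (equals its kernel size); C4: 17x17 (depends on all earlier layers)
```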
  • In some embodiments, obtaining the high-level feature information and low-level feature information of each slice in a slice pair may include the following. Take, as an example, a slice pair that includes a first slice and a second slice, where the residual network includes a first residual network branch and a second residual network branch that are parallel and identical in structure: the first residual network branch of the residual network can be used to perform feature extraction on the first slice to obtain the high-level feature information and the low-level feature information of different scales corresponding to the first slice; and the second residual network branch of the residual network can be used to perform feature extraction on the second slice to obtain the high-level feature information and the low-level feature information of different scales corresponding to the second slice.
  • the high-level feature information refers to the feature map finally output by the residual network.
  • the so-called “high-level feature” can generally contain information related to categories and high-level abstractions.
  • the low-level feature information refers to the feature map obtained by the residual network during the feature extraction process of the medical image to be segmented.
  • the so-called “low-level feature” can generally contain image details such as edges and textures.
  • In the residual network, the high-level feature information refers to the feature map output by the last residual module, and the low-level feature information refers to the feature maps output by the residual modules other than the first residual module and the last residual module.
  • For example, if each residual network branch includes residual module 1 (Block1), residual module 2 (Block2), residual module 3 (Block3), residual module 4 (Block4), and residual module 5 (Block5), then the feature map output by residual module 5 is the high-level feature information, and the feature maps output by residual module 2, residual module 3, and residual module 4 are the low-level feature information.
  • the network structure of the first residual network branch and the second residual network branch can be specifically determined according to actual application requirements.
  • For example, a residual network such as ResNet-18 can be used.
  • The parameters of the first residual network branch and the parameters of the second residual network branch can be shared, and the specific parameter settings can be determined according to actual application requirements.
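  As a minimal sketch of this twin-branch encoder (assuming a ResNet-18 backbone from a recent torchvision, and following the block-to-feature mapping described above; channel sizes and the input shape are illustrative):

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class EncoderBranch(nn.Module):
    """One residual branch: Block2-4 outputs serve as low-level features,
    the Block5 output as the high-level feature, per the description above."""
    def __init__(self):
        super().__init__()
        net = resnet18(weights=None)
        self.block1 = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.block2, self.block3 = net.layer1, net.layer2
        self.block4, self.block5 = net.layer3, net.layer4

    def forward(self, x):
        x = self.block1(x)
        low2 = self.block2(x)
        low3 = self.block3(low2)
        low4 = self.block4(low3)
        high = self.block5(low4)
        return [low2, low3, low4], high

# Parameter sharing between the two branches can be realized by simply
# reusing one branch instance for both slices of the pair:
branch = EncoderBranch()
lows_1, high_1 = branch(torch.randn(1, 3, 128, 128))  # first slice
lows_2, high_2 = branch(torch.randn(1, 3, 128, 128))  # second slice
```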
  • spatial pyramid pooling (SPP) processing may also be performed on the obtained high-level feature information.
  • a spatial pyramid pooling module such as an Atrous Spatial Pyramid Pooling (ASPP) module, may be added after the first residual network branch and the second residual network branch respectively.
  • Since ASPP uses atrous convolution, it can expand the receptive field without sacrificing feature spatial resolution, and can therefore naturally extract high-level feature information at more scales.
  • The parameters of the ASPP connected to the first residual network branch and the parameters of the ASPP connected to the second residual network branch may not be shared, and the specific parameters can be determined according to actual application requirements, which will not be repeated here.
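  A minimal ASPP sketch under the same assumptions (the dilation rates are illustrative; this application does not fix them):

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel atrous (dilated) convolutions with different rates capture
    context at several scales without reducing spatial resolution."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch,
                      kernel_size=3 if r > 1 else 1,
                      padding=r if r > 1 else 0,
                      dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        feats = [b(x) for b in self.branches]         # same H x W in each branch
        return self.project(torch.cat(feats, dim=1))  # fuse the scales
```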
  • the residual network part can be regarded as the coding module part of the segmentation model after training.
  • the target object in the slice can be segmented through the segmentation network in the segmentation model after training according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice.
  • the details can be as follows:
  • For each slice, the low-level feature information and high-level feature information of the slice are respectively convolved (Conv); the convolved high-level feature information is up-sampled (Upsample) to the same size as the convolved low-level feature information and then concatenated (Concat) with the convolved low-level feature information to obtain the connected feature information; the pixels belonging to the target object in the slice are then screened according to the connected feature information to obtain the initial segmentation result of the slice.
  • the segmentation network can be regarded as the decoding module part of the segmentation model after training.
  • the segmentation network includes a first segmentation network branch (decoding module A) and a second segmentation network branch (decoding module B) that are parallel and have the same structure .
  • the details can be as follows:
  • For the first slice, after the low-level feature information and the high-level feature information are convolved, the convolved high-level feature information is up-sampled to the same size as the convolved low-level feature information and then concatenated with the convolved low-level feature information to obtain the connected feature information of the first slice.
  • The connected feature information can then be convolved with a "3×3" convolution kernel.
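  A sketch of one decoding branch following this recipe (1×1 reduction convolutions, upsampling, concatenation, then a 3×3 convolution; the channel counts are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderBranch(nn.Module):
    """Conv -> Upsample -> Concat -> 3x3 Conv, as described above."""
    def __init__(self, low_ch, high_ch, mid_ch=48, num_classes=1):
        super().__init__()
        self.reduce_low = nn.Conv2d(low_ch, mid_ch, kernel_size=1)    # Conv on low-level
        self.reduce_high = nn.Conv2d(high_ch, mid_ch, kernel_size=1)  # Conv on high-level
        self.classify = nn.Conv2d(mid_ch * 2, num_classes, kernel_size=3, padding=1)

    def forward(self, low, high, out_size):
        low = self.reduce_low(low)
        high = self.reduce_high(high)
        high = F.interpolate(high, size=low.shape[2:],    # Upsample to the
                             mode="bilinear", align_corners=False)  # low-level size
        feat = torch.cat([low, high], dim=1)              # Concat
        logits = self.classify(feat)                      # 3x3 conv
        probs = torch.sigmoid(F.interpolate(logits, size=out_size,
                                            mode="bilinear", align_corners=False))
        return probs  # initial segmentation (probability map) at slice size
```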
  • the fusion network in the segmentation model after training can be used to fuse the low-level feature information and high-level feature information of each slice in the slice pair.
  • the step of "segmenting the fusion network in the model after training, fusing the low-level feature information and high-level feature information of each slice in the slice pair" can include:
  • The low-level feature information of each slice in the slice pair is added element by element to obtain the fused low-level feature information.
  • Taking a slice pair that includes a first slice and a second slice as an example, the low-level feature information of the first slice and the low-level feature information of the second slice can be added element by element to obtain the fused low-level feature information; similarly, the high-level feature information of the first slice and the high-level feature information of the second slice can be added element by element to obtain the fused high-level feature information.
  • Then, the fused low-level feature information and the fused high-level feature information are fused to obtain the fused feature information.
  • For example, any one of the following methods can be used for fusion:
  • Method one: the fused low-level feature information and the fused high-level feature information are added element by element to obtain the fused feature information.
  • Method two: a channel attention module can also be used to fuse the fused low-level feature information and the fused high-level feature information, as follows: a weight is assigned to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information to obtain the weighted feature information; the weighted feature information and the fused low-level feature information are multiplied element by element to obtain the processed feature information; and the processed feature information and the fused high-level feature information are added element by element to obtain the fused feature information, see Figure 5.
  • the channel attention module refers to the network module that adopts the attention mechanism of the channel domain.
  • Generally, each image is initially represented by three channels (R, G, B); after passing through different convolution kernels, each channel generates new signals. For example, convolving each channel of an image feature with 64 kernels generates a matrix of 64 new channels (H, W, 64), where H and W represent the height and width of the image feature, and so on.
  • The characteristics of each channel actually represent the components of the image on the different convolution kernels, similar to a time-frequency transformation: convolution with a convolution kernel is similar to a Fourier transform of the signal, so the information of one feature channel can be decomposed into signal components on the 64 convolution kernels; for example, see Figure 6.
  • Each signal can thus be decomposed into signal components on the 64 convolution kernels (corresponding to the 64 generated channels); however, the contributions of the 64 new channels to the key information are not equal, some being larger and some smaller. A weight can therefore be assigned to each channel to represent the correlation between that channel and the key information (the information that plays a key role in the segmentation task): the greater the weight, the higher the correlation, and a channel with higher correlation is one that needs more attention. For this reason, this mechanism is called the "channel-domain attention mechanism".
  • the structure of the channel attention module can be specifically determined according to the needs of actual applications.
  • Specifically, the channel attention module in the fusion network of the segmentation model after training can assign a weight to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information to obtain the weighted feature information.
  • The weighted feature information and the fused low-level feature information are then multiplied element by element (Mul) to obtain the processed feature information, and the processed feature information and the fused high-level feature information are added element by element to obtain the fused feature information.
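  A minimal sketch of this attention-weighted fusion (the squeeze-and-excitation style weighting below is an assumption; this application only specifies that the weights are derived from the fused low-level and high-level features):

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Weight the fused low-level features per channel, multiply (Mul),
    then add the fused high-level features element by element."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.weight = nn.Sequential(            # derive per-channel weights
            nn.AdaptiveAvgPool2d(1),            # from both fused inputs
            nn.Conv2d(channels * 2, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, fused_low, fused_high):
        w = self.weight(torch.cat([fused_low, fused_high], dim=1))
        processed = fused_low * w       # element-wise multiplication (Mul)
        return processed + fused_high   # element-wise addition

# The fused inputs come from element-wise addition of the two slices' features:
# fused_low = low_1 + low_2; fused_high = high_1 + high_2
```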
  • steps 103 and 104 can be executed in no particular order, and will not be repeated here.
  • the fusion network in the segmentation model after training can be used to determine the associated information between the slices in the slice pair according to the fusion feature information.
  • For example, the features belonging to the target object can be screened out from the fused feature information, and the association information between the slices in the slice pair can be determined according to the screened features (that is, the features belonging to the target object).
  • the target object refers to the object that needs to be identified in the slice, such as "liver” in liver image segmentation, “heart” in heart image segmentation, and so on.
  • Taking the target object being specifically the liver as an example, it can be determined that the area in which the features belonging to the liver are screened out is the foreground area of the slice, and the other remaining areas in the slice are the background area of the slice.
  • Similarly, taking the target object being specifically the heart as an example, it can be determined that the area in which the features belonging to the heart are screened out is the foreground area of the slice, and the other remaining areas in the slice are the background area of the slice.
  • For example, in the fused feature information, the pixels that belong to the foreground area of only one slice in the slice pair can be combined to obtain the pixel set of the difference area, referred to as the difference pixels; and the pixels that belong to the foreground areas of both slices in the slice pair can be combined to obtain the pixel set of the intersection area, referred to as the intersection pixels.
  • The fused feature information can be regarded as the feature information corresponding to a superimposed slice obtained by "superimposing all the slices in the slice pair". In the superimposed slice, the pixels of the two slices' foreground areas that do not overlap constitute the difference pixels; similarly, the pixels in the overlapping part of the foreground areas constitute the intersection pixels.
  • In addition, the pixels that belong to the background areas of both slices in the slice pair at the same time can be used as the background area of the slice pair; in other words, the intersection of the background areas of all the slices serves as the background area of the slice pair. Pixel type identification is then performed on the background area, the difference pixels, and the intersection pixels of the slice pair to obtain the association information between the slices.
  • For example, different pixel values can be used to identify the pixel types of these areas: the pixel value of the background area of the slice pair can be set to "0", the value of the difference pixels to "1", and the value of the intersection pixels to "2"; alternatively, the pixel value of the background area can be set to "0", the value of the difference pixels to "2", and the value of the intersection pixels to "1", etc.
  • Alternatively, different colors can be used to identify the pixel types of these areas: the background area can be set to "black", the difference pixels to "red", and the intersection pixels to "green"; or the background area can be set to "black", the difference pixels to "green", and the intersection pixels to "red", etc.
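  As a sketch, such an association label map can be computed from two binary foreground masks (using the 0/1/2 pixel-value scheme mentioned above):

```python
import numpy as np

def association_map(fg_1: np.ndarray, fg_2: np.ndarray) -> np.ndarray:
    """fg_1, fg_2: binary foreground masks of the two slices (same shape).
    Returns 0 for the shared background, 1 for difference pixels
    (foreground of exactly one slice), 2 for intersection pixels."""
    label = np.zeros_like(fg_1, dtype=np.uint8)   # background of the pair
    label[(fg_1 == 1) ^ (fg_2 == 1)] = 1          # difference pixels
    label[(fg_1 == 1) & (fg_2 == 1)] = 2          # intersection pixels
    return label

# Example with two 2x2 masks:
m1 = np.array([[1, 1], [0, 0]])
m2 = np.array([[1, 0], [0, 0]])
print(association_map(m1, m2))  # [[2 1]
                                #  [0 0]]
```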
  • In some embodiments, the step of "generating the segmentation result of the slice pair based on the association information between the slices in the slice pair and the initial segmentation results of each slice in the slice pair" can include:
  • Since the association information between the slices here refers to the association information between the first slice and the second slice, it can reflect the difference pixels and intersection pixels between the first slice and the second slice. Therefore, according to the association information and the initial segmentation result of the first slice, the segmentation result of the second slice can be predicted, and vice versa.
  • For example, if the difference pixels of the first slice and the second slice form area A, the intersection pixels form area B, and the initial segmentation result of the first slice is area C, then the predicted segmentation result of the second slice is "((A∪B)∖C)∪B", where "∪" refers to union and "∖" refers to set difference.
  • Predicting the segmentation result of the first slice is similar to predicting the segmentation result of the second slice: if the difference pixels of the first slice and the second slice form area A, the intersection pixels form area B, and the initial segmentation result of the second slice is area D, then the predicted segmentation result of the first slice is "((A∪B)∖D)∪B".
  • Then, the predicted segmentation result of the first slice and the initial segmentation result of the first slice are averaged to obtain the adjusted segmentation result of the first slice.
  • That is, each pixel value in the predicted segmentation result of the first slice and the pixel value at the same position in the initial segmentation result of the first slice are averaged, and the average is used as the pixel value at that position in the adjusted segmentation result of the first slice.
  • Likewise, the predicted segmentation result of the second slice and the initial segmentation result of the second slice are averaged to obtain the adjusted segmentation result of the second slice.
  • That is, each pixel value in the predicted segmentation result of the second slice and the pixel value at the same position in the initial segmentation result of the second slice are averaged, and the average is used as the pixel value at that position in the adjusted segmentation result of the second slice.
  • Finally, the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice are averaged, and the averaged result is binarized to obtain the segmentation result of the slice pair.
  • That is, each pixel value in the adjusted segmentation result of the first slice and the pixel value at the same position in the adjusted segmentation result of the second slice are averaged, and the average is used as the pixel value at that position in the segmentation result of the slice pair.
  • binarization refers to setting the gray value of the pixels on the image to 0 or 255, which means that the entire image presents an obvious visual effect of only black and white.
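  A sketch of this adjustment pipeline, assuming binary masks A (difference pixels), B (intersection pixels), and initial results C and D for the first and second slice; the set operations are implemented with boolean masks, and the 0.5 threshold is an illustrative choice:

```python
import numpy as np

def generate_pair_result(A, B, C, D, threshold=0.5):
    """A: difference pixels, B: intersection pixels, C/D: initial segmentation
    results of the first/second slice (same-shape binary arrays)."""
    A, B, C, D = (m.astype(bool) for m in (A, B, C, D))
    pred_second = ((A | B) & ~C) | B  # ((A U B) \ C) U B
    pred_first = ((A | B) & ~D) | B   # ((A U B) \ D) U B
    adj_first = (pred_first.astype(float) + C) / 2    # average with initial C
    adj_second = (pred_second.astype(float) + D) / 2  # average with initial D
    avg = (adj_first + adj_second) / 2                # average the two slices
    return (avg >= threshold).astype(np.uint8)        # binarize
```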
  • the post-training segmentation model in the embodiment of the present application may include a residual network, a segmentation network, and a fusion network.
  • the residual network may include a first residual network branch and a second residual network branch in parallel
  • The segmentation network may include a first segmentation network branch and a second segmentation network branch in parallel.
  • The residual network part can be regarded as the encoder part of the trained image segmentation model, called the encoding module, which is used for feature information extraction; the segmentation network can be regarded as the decoder part of the trained segmentation model, called the decoding module, which is used for classification and segmentation according to the extracted feature information.
  • The segmentation model after training may be trained from multiple slice pair samples labeled with true values. Specifically, it may be preset by operation and maintenance personnel, or it may be obtained through training by the image segmentation device itself. That is, before the step of "using the residual network in the segmentation model after training to perform feature extraction on each slice in the slice pair to obtain high-level feature information and low-level feature information of each slice", the medical image segmentation method may further include:
  • For example, multiple medical images can be collected as an original data set, for example, obtained from a database or the Internet; the medical images in the original data set are then preprocessed to obtain images that meet the input standard of the preset segmentation model, giving medical image samples. The obtained medical image samples are cut into slices (referred to as slice samples in the embodiments of this application), each slice sample is labeled with the target object (referred to as true-value labeling), and the slice samples are grouped in pairs to obtain multiple slice pair samples labeled with true values.
  • The preprocessing may include operations such as deduplication, cropping, rotation, and/or flipping. For example, if the input size of the preset segmentation network is "128×128×32 (width×height×depth)", the images in the original data set can be cropped to a size of "128×128×32"; of course, other preprocessing operations can also be performed on these images.
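  A sketch of the cropping step under these assumptions (the 128×128×32 target comes from the example above; center cropping is an illustrative choice):

```python
import numpy as np

def center_crop(volume: np.ndarray, target=(128, 128, 32)) -> np.ndarray:
    """Center-crop a (width, height, depth) volume to the model's input size.
    Assumes the volume is at least as large as the target along every axis."""
    starts = [(dim - t) // 2 for dim, t in zip(volume.shape, target)]
    (w0, h0, d0), (tw, th, td) = starts, target
    return volume[w0:w0 + tw, h0:h0 + th, d0:d0 + td]
```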
  • Taking a slice pair sample that includes a first slice sample and a second slice sample as an example, the first residual network branch of the residual network can be used at this time to perform feature extraction on the first slice sample to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the first slice sample; and the second residual network branch of the residual network can be used to perform feature extraction on the second slice sample to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the second slice sample.
  • For each slice sample, the target object in the slice sample is segmented through the segmentation network in the preset segmentation model according to the low-level feature information and high-level feature information of the slice sample, to obtain the predicted segmentation value (that is, the predicted probability map) of the slice sample.
  • For example, the following operations can be performed at this time:
  • A. Perform convolution processing on the low-level feature information and the high-level feature information of the first slice sample through the first segmentation network branch, for example with a "1×1" convolution kernel; after the convolved high-level feature information is up-sampled to the same size as the convolved low-level feature information, it is concatenated with the convolved low-level feature information to obtain the connected feature information of the first slice sample; then, after the connected feature information is convolved with a "3×3" kernel and up-sampled to the size of the first slice sample, the predicted segmentation value of the first slice sample can be obtained.
  • B. Perform convolution processing on the low-level feature information and the high-level feature information of the second slice sample through the second segmentation network branch, for example with a "1×1" convolution kernel; after the convolved high-level feature information is up-sampled to the same size as the convolved low-level feature information, it is concatenated with the convolved low-level feature information to obtain the connected feature information of the second slice sample; then, the pixels belonging to the target object in the second slice sample are screened according to the connected feature information to obtain the predicted segmentation value of the second slice sample, for example by convolving the connected feature information with a "3×3" kernel and up-sampling to the size of the second slice sample.
  • In addition, the low-level feature information and high-level feature information of each slice sample in the slice pair sample are fused, and the association information between the slice samples in the slice pair sample is predicted based on the fused feature information.
  • For example, the low-level feature information of the slice samples can be added element by element to obtain the fused low-level feature information, and the high-level feature information of the slice samples can be added element by element to obtain the fused high-level feature information; then, through the fusion network in the preset segmentation model, the fused low-level feature information and the fused high-level feature information are fused to obtain the fused feature information, from which the features belonging to the target object can be screened out, and the association information between the slice samples in the slice pair sample is determined according to the screened features.
  • The manner of fusing the fused low-level feature information and the fused high-level feature information, and the manner of calculating the association information between the slice samples in a slice pair sample, are the same as those for the slices in a slice pair; for details, refer to the previous embodiment, which is not repeated here.
  • a loss function such as a Dice loss function may be specifically used to converge the preset segmentation model according to the true value, predicted segmentation value, and predicted associated information, to obtain the segmentation model after training.
  • The loss function can be specifically set according to actual application requirements. For example, taking a slice pair sample including a first slice sample x_i and a second slice sample x_j as an example, if the true value labeled for the first slice sample x_i is y_i, and the true value labeled for the second slice sample x_j is y_j, then the Dice loss function of the first segmentation network branch can be as follows (see the reconstruction after the symbol definitions below):
  • The Dice loss function of the second segmentation network branch has the same form.
  • p_i and p_j are the predicted segmentation values of the first segmentation network branch and the second segmentation network branch, respectively.
  • s and t are the position indexes of the rows and columns in the slice, respectively.
  • y_i^{st} represents the true value labeled for the pixel with position index (s, t) in the first slice sample, and p_i^{st} represents the predicted segmentation value of the pixel with position index (s, t) in the first slice sample; y_j^{st} represents the true value labeled for the pixel with position index (s, t) in the second slice sample, and p_j^{st} represents the predicted segmentation value of the pixel with position index (s, t) in the second slice sample.
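  Assuming the standard Dice form (the formulas themselves are not reproduced in this text, so the following reconstruction from the definitions above is an assumption), the two branch losses can be written as:

$$\mathcal{L}_i = 1 - \frac{2\sum_{s,t} y_i^{st}\, p_i^{st}}{\sum_{s,t} y_i^{st} + \sum_{s,t} p_i^{st}}, \qquad \mathcal{L}_j = 1 - \frac{2\sum_{s,t} y_j^{st}\, p_j^{st}}{\sum_{s,t} y_j^{st} + \sum_{s,t} p_j^{st}}$$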
  • The Dice loss function of the fusion network can be calculated as follows (see the reconstruction after the symbol definitions below):
  • y_ij is the true value of the association relationship between the first slice sample x_i and the second slice sample x_j.
  • The true value of the association relationship can be calculated based on the true value labeled for the first slice sample x_i and the true value labeled for the second slice sample x_j. For example, the background area of the image obtained by superimposing the first slice sample x_i and the second slice sample x_j can be determined, along with the difference and the intersection between the true value labeled for the first slice sample x_i and the true value labeled for the second slice sample x_j; the background area, difference, and intersection obtained here are the true values of the "background area, difference pixels, and intersection pixels" of the superimposed first slice sample x_i and second slice sample x_j, that is, the true value of the association relationship mentioned in the embodiments of this application.
  • p_ij is the association relationship between the first slice sample x_i and the second slice sample x_j output by the fusion network.
  • s and t are the position indexes of the rows and columns in the slice.
  • l is the category index over the above three types of relationship (that is, the background area, the intersection pixels, and the difference pixels).
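  Under the same assumption, extending the Dice form over the three relationship categories gives a fusion loss of the shape:

$$\mathcal{L}_{fuse} = 1 - \frac{1}{3}\sum_{l=0}^{2} \frac{2\sum_{s,t} y_{ij}^{st,l}\, p_{ij}^{st,l}}{\sum_{s,t} y_{ij}^{st,l} + \sum_{s,t} p_{ij}^{st,l}}$$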
  • Based on the above, the overall loss function of the image segmentation model can be calculated as shown below, where λ1, λ2, and λ3 are hyperparameters set manually to balance the contribution of each part of the loss to the overall loss.
  • As can be seen from the above, different receptive fields can be used to separately perform feature extraction on the slices in a slice pair to obtain the high-level feature information and low-level feature information of each slice. On the one hand, for each slice, the target object in the slice is segmented according to the low-level and high-level feature information of the slice to obtain the initial segmentation result of the slice; on the other hand, the low-level and high-level feature information of the slices is fused, and the association information between the slices in the slice pair is determined according to the fused feature information. Finally, the segmentation result of the slice pair is generated based on the association information between the slices in the slice pair and the initial segmentation result of each slice. Because the segmentation results are further adjusted using the correlation between the slices, the shape information of the target object (such as the liver) can be captured more accurately, and the segmentation accuracy is higher.
  • In the following, a description is given taking as an example that the image segmentation device is integrated in an electronic device and the target object is the liver.
  • the image segmentation model can include a residual network, a segmentation network, and a fusion network.
  • The residual network can include two parallel residual network branches with the same structure: a first residual network branch and a second residual network branch.
  • An ASPP (atrous spatial pyramid pooling) module can be connected after each residual network branch.
  • the residual network is used as the coding module of the image segmentation model. It is used to extract feature information from the input image such as the slice in the slice pair or the slice sample in the slice pair sample.
  • The segmentation network can include two parallel segmentation network branches with the same structure: a first segmentation network branch and a second segmentation network branch.
  • The segmentation network serves as the decoding module of the image segmentation model and is used to segment the target object, such as the liver, according to the feature information extracted by the encoding module.
  • the fusion network is used to predict the relationship between each slice in the slice pair (or each slice sample in the slice pair sample) based on the feature information extracted by the encoding module. Based on the structure of the image segmentation model, its training method will be described in detail below.
  • First, the electronic device can collect multiple 3D medical images containing the liver structure, for example from a database or network, and then preprocess these 3D medical images, such as by deduplication, cropping, rotation, and/or flipping, to obtain images that meet the input criteria of the preset segmentation model as medical image samples. Then, the medical image samples are sampled at a certain interval along the z-axis direction (of the 3D coordinate axes {x, y, z}) to obtain multiple slice samples. After that, the liver area and other information are marked in each slice sample, and the slice samples are grouped in pairs to obtain multiple slice pair samples labeled with true values.
  • For example, slice sample 1 and slice sample 2 form slice pair sample 1, then slice sample 1 and slice sample 3 form slice pair sample 2, and so on; a pairing sketch follows this list.
  • In this way, the slice samples are augmented to obtain more training data (i.e., data augmentation), so that even a small amount of manually annotated data can be used to complete the training of the image segmentation model.
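  A sketch of this pairing-based augmentation (the pairing rule, every two slices within a small window, is an illustrative assumption):

```python
from itertools import combinations

def make_slice_pairs(slices, max_gap=2):
    """Pair slice samples (with their true-value labels) to augment training
    data: any two slices within `max_gap` positions form one slice pair sample."""
    pairs = []
    for i, j in combinations(range(len(slices)), 2):
        if j - i <= max_gap:
            pairs.append((slices[i], slices[j]))
    return pairs

# e.g. 5 slices, max_gap=2 -> (0,1),(0,2),(1,2),(1,3),(2,3),(2,4),(3,4): 7 pairs
```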
  • After the training data is ready, the electronic device can input the slice pair samples into the preset image segmentation model and perform feature extraction on the slice samples through the residual network: the first residual network branch performs feature extraction on the first slice sample in the slice pair sample to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the first slice sample; and the second residual network branch performs feature extraction on the second slice sample in the slice pair sample to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the second slice sample.
  • ASPP can be further used to further process the high-level feature information corresponding to the first slice sample and the high-level feature information corresponding to the second slice sample to obtain more high-level feature information of different scales, see FIG. 8.
  • Then, the electronic device can use this high-level feature information and low-level feature information, through the first segmentation network branch and the second segmentation network branch, to segment the first slice sample and the second slice sample, obtaining the predicted segmentation value of the first slice sample and the predicted segmentation value of the second slice sample.
  • In addition, the electronic device can fuse the low-level feature information and high-level feature information of the first slice sample and the second slice sample through the fusion network, and predict the association information between the first slice sample and the second slice sample according to the fused feature information. For example, this can be done as follows:
  • First, the low-level feature information of the first slice sample and the low-level feature information of the second slice sample are added element by element to obtain the fused low-level feature information, and the high-level feature information of the first slice sample and the high-level feature information of the second slice sample are added element by element to obtain the fused high-level feature information.
  • Then, the channel attention module is used to assign a weight to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information, obtaining the weighted feature information; the weighted feature information and the fused low-level feature information are multiplied element by element (Mul) to obtain the processed feature information; and the processed feature information and the fused high-level feature information are added element by element to obtain the fused feature information.
  • After that, the features belonging to the liver can be screened from the fused feature information, and the association information between the first slice sample and the second slice sample can be predicted based on the screened features belonging to the liver.
  • The true values labeled for the slice pair sample, the predicted segmentation value of each slice sample in the slice pair sample, and the predicted association information can then be used to converge the preset image segmentation model to obtain the trained image segmentation model.
  • The true values labeled for the slice pair sample include the liver area labeled in the first slice sample and the liver area labeled in the second slice sample. From the liver area labeled in the first slice sample and the liver area labeled in the second slice sample, the true association relationship between the first slice sample and the second slice sample can further be determined, including the background area of the slice pair sample composed of the first slice sample and the second slice sample, the real difference pixels between the first slice sample and the second slice sample, the real intersection pixels between the first slice sample and the second slice sample, and so on.
  • The true background area of the slice pair sample composed of the first and second slice samples can be obtained by superimposing the two slice samples and taking the intersection of the background area of the first slice sample and the background area of the second slice sample. The true difference pixels and true intersection pixels between the two slice samples can be obtained by calculating, respectively, the difference and the intersection between the liver region labeled in the first slice sample and the liver region labeled in the second slice sample.
  • Optionally, in order to identify the true association relationship between the first and second slice samples quickly and conveniently, different colors or pixel values can be used to identify the different types of areas in the superimposed image of the two labeled liver regions. For example, referring to Figures 9 and 10, the background area of the slice pair sample can be marked black, the difference pixels red (white in Figures 9 and 10), and the intersection pixels green (gray in Figures 9 and 10); alternatively, the pixel value of the background area of the slice pair sample can be set to 0, that of the difference pixels to 1, and that of the intersection pixels to 2, and so on.
  • The images in the center of FIGS. 9 and 10 are superimposed images of the liver region labeled in the first slice sample and the liver region labeled in the second slice sample. In addition, the first and second slice samples in Figure 9 are sampled from different CT images, while those in Figure 10 are sampled from the same CT image.
  • For the convergence itself, the Dice loss function can be used. The total Dice loss function can be as follows:

$$\mathcal{L}_{total} = \lambda_1\,\mathcal{L}_{Dice}(y_i, p_i) + \lambda_2\,\mathcal{L}_{Dice}(y_j, p_j) + \lambda_\#\,\mathcal{L}_{Dice}(y_{ij}, p_{ij})$$

  • where λ1, λ2 and λ# are manually set hyperparameters used to balance the contribution of each part of the loss to the overall loss; for $\mathcal{L}_{Dice}(y_i, p_i)$, $\mathcal{L}_{Dice}(y_j, p_j)$ and $\mathcal{L}_{Dice}(y_{ij}, p_{ij})$, please refer to the previous embodiment, which will not be repeated here.
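A minimal PyTorch sketch of this total loss is given below; since the exact equation images are not reproduced on this page, the per-term standard soft-Dice form is an assumption, and the names dice_loss and total_loss are illustrative.

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Standard soft-Dice loss over pixels; pred/target: [B, H, W] in [0, 1]."""
    inter = (pred * target).sum(dim=(1, 2))
    denom = pred.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
    return (1.0 - (2.0 * inter + eps) / (denom + eps)).mean()

def total_loss(p_i, y_i, p_j, y_j, p_ij, y_ij,
               lam1: float = 1.0, lam2: float = 1.0, lam_hash: float = 1.0):
    """L_total = lam1*Dice(y_i,p_i) + lam2*Dice(y_j,p_j) + lam#*Dice(y_ij,p_ij).

    p_ij / y_ij: [B, 3, H, W] maps over {background, difference, intersection}.
    """
    n_rel = p_ij.shape[1]
    l_assoc = sum(dice_loss(p_ij[:, l], y_ij[:, l]) for l in range(n_rel)) / n_rel
    return lam1 * dice_loss(p_i, y_i) + lam2 * dice_loss(p_j, y_j) + lam_hash * l_assoc
```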
  • Once the preset image segmentation model has converged under this Dice loss function, one round of training is complete; by analogy, after multiple rounds of training, the trained image segmentation model can be obtained.
  • It should be noted that, because the part of the image segmentation model "used to determine the association relationship between slice samples" can, during training, exploit information beyond each slice sample's own annotation of the target object (namely the association relationship between slice samples) to train the image segmentation model to learn prior knowledge of shape (prior knowledge, i.e., knowledge that can be used by a machine learning algorithm), this part may also be referred to as the proxy supervision (Proxy Supervision) part, which will not be repeated here.
  • The trained image segmentation model includes a residual network, a segmentation network, and a fusion network, where the residual network can include a first residual network branch and a second residual network branch, and the segmentation network includes a first segmentation network branch and a second segmentation network branch.
  • As shown in FIG. 11, the specific flow of a medical image segmentation method can be as follows:
  • 201. The electronic device acquires a medical image to be segmented.
  • For example, the electronic device may receive the medical images sent by various medical image acquisition devices, such as an MRI or CT scanner, after they collect images of the human liver, and use these medical images as the medical images to be segmented.
  • Optionally, the received medical images may also be preprocessed, for example by deduplication, cropping, rotation, and/or flipping.
  • 202. The electronic device samples two slices from the medical image to be segmented to obtain the slice pair that currently needs to be segmented.
  • For example, the electronic device can sample two consecutive slices along the z-axis at a certain interval to form a slice pair, or randomly sample two slices along the z-axis at a certain interval to form a slice pair, and so on.
  • Optionally, in order to provide a sufficient receptive field, sampling may be performed in units of patches (patch-wise) with overlapping parts. Here, a patch is a basic unit of an image: image-wise refers to the image level (taking a whole image as the unit), while patch-wise refers to a region between the pixel level and the image level, where each patch is composed of many pixels.
  • In addition, the two slices sampled into the slice pair may not overlap, may partially overlap, or may completely overlap (i.e., be the same slice). It should be understood that, since the parameters of the same network structure in different branches of the trained image segmentation model may differ (for example, the parameters of the ASPP modules in different branches are not shared), the initial segmentation results output by different branches may differ even for the same input; it is therefore meaningful even if the two input slices are identical. A sampling sketch is given below.
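The z-axis sampling and the overlapping patch-wise cropping can be illustrated as follows; this is a minimal NumPy sketch, and the interval, patch size, and stride values are illustrative assumptions.

```python
import numpy as np

def sample_slice_pairs(volume: np.ndarray, interval: int = 1):
    """Yield (slice_a, slice_b) pairs of z-slices `interval` apart.

    volume: array of shape (H, W, D), indexed along the z-axis (last axis).
    """
    depth = volume.shape[2]
    for z in range(0, depth - interval, interval):
        yield volume[:, :, z], volume[:, :, z + interval]

def overlapping_patches(img: np.ndarray, size: int = 128, stride: int = 96):
    """Extract patch-wise crops; stride < size gives the overlapping parts."""
    h, w = img.shape[:2]
    for y in range(0, max(h - size, 0) + 1, stride):
        for x in range(0, max(w - size, 0) + 1, stride):
            yield img[y:y + size, x:x + size]
```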
  • 203. The electronic device performs feature extraction on each slice in the slice pair through the residual network in the trained image segmentation model to obtain the high-level feature information and low-level feature information of each slice.
  • For example, taking a slice pair including a first slice and a second slice as an example, step 203 may be specifically as follows:
  • The electronic device uses the first residual network branch in the residual network, such as a ResNet-18, to perform feature extraction on the first slice, obtaining the high-level feature information and the low-level feature information of different scales corresponding to the first slice, and then uses ASPP to process the high-level feature information corresponding to the first slice, obtaining high-level feature information of multiple scales corresponding to the first slice.
  • Likewise, the electronic device uses the second residual network branch in the residual network, such as another ResNet-18, to perform feature extraction on the second slice, obtaining the high-level feature information and the low-level feature information of different scales corresponding to the second slice, and then uses another ASPP to process the high-level feature information corresponding to the second slice, obtaining high-level feature information of multiple scales corresponding to the second slice.
  • It should be noted that the parameters of the first residual network branch and the second residual network branch can be shared, while the parameters of the ASPP modules connected to the two branches may not be shared; the specific parameters can be set according to the requirements of the actual application and will not be repeated here.
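A minimal PyTorch sketch of this two-branch encoder is given below, assuming torchvision's ResNet-18 as the backbone: per the text, the backbone parameters are shared (a single module is reused for both slices), while each branch has its own unshared ASPP; the dilation rates and channel sizes are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18  # torchvision >= 0.13 assumed

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated 3x3 convolutions."""

    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class TwoBranchEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)  # expects 3-channel input; replicate
        # grayscale CT slices to 3 channels (or adapt conv1) before calling.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        # The residual blocks form a single module reused for both slices,
        # which is what "shared parameters" means here.
        self.blocks = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        self.aspp_a = ASPP(512, 256)  # per-branch ASPP, parameters NOT shared
        self.aspp_b = ASPP(512, 256)

    def encode(self, x, aspp):
        x = self.stem(x)
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        # intermediate maps serve as low-level features; the last map, after
        # multi-scale ASPP processing, serves as the high-level features
        return feats[:-1], aspp(feats[-1])

    def forward(self, slice_a, slice_b):
        low_a, high_a = self.encode(slice_a, self.aspp_a)
        low_b, high_b = self.encode(slice_b, self.aspp_b)
        return low_a, high_a, low_b, high_b
```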
  • 204. For each slice in the slice pair, the electronic device segments the target object in the slice through the segmentation network in the trained image segmentation model according to the low-level feature information and high-level feature information of the slice, obtaining the initial segmentation result of the slice.
  • For example, still taking the slice pair including the first slice and the second slice as an example, step 204 may be specifically as follows:
  • The electronic device uses the first segmentation network branch to perform convolution with a 1×1 kernel on the low-level feature information and the high-level feature information of the first slice, upsamples the convolved high-level feature information to the same size as the convolved low-level feature information, and connects it with the convolved low-level feature information to obtain the connected feature information of the first slice; it then performs convolution with a 3×3 kernel on the connected feature information and upsamples the convolved connected feature information to the size of the first slice, obtaining the initial segmentation result of the first slice.
  • Similarly, the other branch can perform the same operations: the electronic device uses the second segmentation network branch to perform convolution with a 1×1 kernel on the low-level feature information and the high-level feature information of the second slice, upsamples the convolved high-level feature information to the same size as the convolved low-level feature information, connects it with the convolved low-level feature information to obtain the connected feature information of the second slice, then performs convolution with a 3×3 kernel on the connected feature information and upsamples it to the size of the second slice, obtaining the initial segmentation result of the second slice.
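One such segmentation (decoder) branch can be sketched as follows; this is a minimal PyTorch sketch, and the intermediate channel size mid_ch is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoderBranch(nn.Module):
    """One segmentation branch: 1x1 convs, upsample+concat, 3x3 conv, upsample."""

    def __init__(self, low_ch: int, high_ch: int, mid_ch: int = 48):
        super().__init__()
        self.reduce_low = nn.Conv2d(low_ch, mid_ch, kernel_size=1)
        self.reduce_high = nn.Conv2d(high_ch, mid_ch, kernel_size=1)
        self.classify = nn.Conv2d(mid_ch * 2, 1, kernel_size=3, padding=1)

    def forward(self, low, high, out_size):
        low = self.reduce_low(low)      # 1x1 convolution on low-level features
        high = self.reduce_high(high)   # 1x1 convolution on high-level features
        high = F.interpolate(high, size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        x = torch.cat([low, high], dim=1)            # connect (Concat)
        x = self.classify(x)                         # 3x3 convolution
        x = F.interpolate(x, size=out_size,
                          mode="bilinear", align_corners=False)
        return torch.sigmoid(x)   # initial segmentation result (probability map)
```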
  • 205. The electronic device fuses the low-level feature information and high-level feature information of each slice in the slice pair through the fusion network in the trained image segmentation model. For example, as shown in FIG. 8, the details can be as follows:
  • On the one hand, the low-level feature information of the first slice and that of the second slice are added element by element to obtain the fused low-level feature information; on the other hand, the high-level feature information of the two slices is added element by element to obtain the fused high-level feature information. Then, the fused low-level feature information and the fused high-level feature information are processed by the channel attention module in the fusion network of the trained segmentation model to obtain the processed feature information, and the processed feature information and the fused high-level feature information are added element by element to obtain the fused feature information.
  • It should be noted that steps 204 and 205 can be executed in any order.
  • 206. The electronic device determines the association information between the slices in the slice pair according to the fused feature information through the fusion network in the trained image segmentation model.
  • For example, the electronic device can filter out the features belonging to the liver region from the fused feature information, determine from the filtered features the foreground area of the first slice (i.e., the area where the liver is located) and the foreground area of the second slice (i.e., the area where the liver is located), and take the remaining area other than the union of the foreground areas of the two slices as the background area of the slice pair. The pixels in the fused feature information that belong to the foreground area of only one of the two slices are then taken as the difference pixels of the slice pair, and the pixels that belong to the foreground areas of both slices are taken as the intersection pixels. Thereafter, pixel-type identification is performed on the background area, difference pixels, and intersection pixels, for example by marking these regions with different pixel values or different colors, to obtain the association information of the first slice and the second slice.
  • It should be noted that the operation of determining the association information between the slices in the slice pair according to the fused feature information can be implemented through various network structures. For example, a convolutional layer with a 3×3 kernel can perform convolution on the fused feature information, which is then upsampled to the same size as the input slices (the first slice and the second slice), yielding the association information between the first and second slices, such as the background area of the slice pair, the intersection pixels of the first and second slices, and the difference pixels of the first and second slices.
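A minimal sketch of such an association head under the layout just described (a single 3×3 convolution predicting the three relation types, followed by upsampling); the softmax over relation types is an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

class AssociationHead(nn.Module):
    """3x3 convolution over the fused features, upsampled to the slice size,
    predicting the three relation types (background/difference/intersection)."""

    def __init__(self, in_ch: int, num_relations: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, num_relations, kernel_size=3, padding=1)

    def forward(self, fused, out_size):
        x = self.conv(fused)
        x = F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)
        return x.softmax(dim=1)  # per-pixel probabilities of the relation types
```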
  • 207. The electronic device generates the segmentation result of the slice pair based on the association information between the slices in the slice pair and the initial segmentation result of each slice in the slice pair.
  • For example, the segmentation result of the second slice can be predicted from the association information between the slices and the initial segmentation result of the first slice, yielding the predicted segmentation result of the second slice; and the segmentation result of the first slice can be predicted from the association information and the initial segmentation result of the second slice, yielding the predicted segmentation result of the first slice. The predicted segmentation result of the first slice and the initial segmentation result of the first slice are then averaged to obtain the adjusted segmentation result of the first slice, and the predicted segmentation result of the second slice and the initial segmentation result of the second slice are averaged to obtain the adjusted segmentation result of the second slice. Further, the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice are fused, for example by averaging them and binarizing the averaged result, so that the segmentation result of the slice pair can be obtained.
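The assembly of the slice-pair result can be sketched with Boolean masks, following the set relation {(A∪B)\C}∪B given for the predicted result in the first embodiment; this is a minimal NumPy sketch, and the 0.5 binarization threshold is an illustrative assumption.

```python
import numpy as np

def predict_other(diff, inter, init):
    """Predicted result of the other slice: {(A | B) & ~C} | B (boolean masks)."""
    return ((diff | inter) & ~init) | inter

def merge_pair(init_a, init_b, diff, inter, thresh: float = 0.5):
    pred_a = predict_other(diff, inter, init_b)  # predicted result of slice a
    pred_b = predict_other(diff, inter, init_a)  # predicted result of slice b
    adj_a = (pred_a.astype(float) + init_a.astype(float)) / 2  # average with initial
    adj_b = (pred_b.astype(float) + init_b.astype(float)) / 2
    fused = (adj_a + adj_b) / 2                  # average the adjusted results
    return (fused >= thresh).astype(np.uint8)    # binarize -> slice-pair result
```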
  • Thereafter, the electronic device can return to step 202 to sample another two slices from the medical image to be segmented as the slice pair that needs to be segmented next, and process them in the manner of steps 203 to 207 to obtain the corresponding segmentation result, and so on. After the segmentation results of all the slice pairs in the medical image to be segmented are obtained, the segmentation results of these slice pairs are combined in the order of the slices to obtain the segmentation result of the medical image to be segmented (i.e., the 3D segmentation result).
  • As can be seen from the above, an image segmentation model can be trained in advance using slice pair samples and the association relationships between the slice samples in each pair (information such as prior knowledge). Then, after the medical image to be segmented is obtained, the trained image segmentation model can be used to extract features from the slices of a slice pair of the medical image using different receptive fields to obtain the high-level and low-level feature information of each slice in the pair, and to segment the liver region in each slice according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result.
  • Since the trained image segmentation model takes the correlation between the slices of the 3D medical image into account, it can be used to segment two slices (a slice pair) simultaneously and to further adjust the segmentation results using the inter-slice association, ensuring that the shape information of the target object (such as the liver) can be captured more accurately and making the segmentation more precise.
  • In order to better implement the above method, an embodiment of the present application also provides a medical image segmentation device, which may be integrated in an electronic device such as a server or a terminal; the terminal may include a tablet computer, a notebook computer, a personal computer, medical image acquisition equipment, electronic medical equipment, or the like.
  • As shown in FIG. 13, the medical image segmentation device may include an acquiring unit 301, an extraction unit 302, a segmentation unit 303, a fusion unit 304, a determining unit 305, a generating unit 306, and so on, as follows:
  • the acquiring unit 301 is configured to acquire a slice pair, which includes two slices sampled from the medical image to be segmented.
  • the acquiring unit 301 may be specifically configured to acquire a medical image to be segmented, and sample two slices from the medical image to be segmented to form a slice pair.
  • The medical image to be segmented may be collected from biological tissue, such as the heart or liver, by various medical image acquisition devices such as MRI or CT scanners, and then provided to the acquiring unit 301.
  • the extraction unit 302 is configured to perform feature extraction on each slice in the slice pair using different receptive fields to obtain high-level feature information and low-level feature information of each slice.
  • Feature extraction with different receptive fields can be performed on the slices in various ways; for example, it can be implemented through a residual network, namely:
  • The extraction unit 302 can be specifically configured to perform feature extraction on each slice in the slice pair through the residual network in the trained segmentation model, obtaining the high-level feature information and low-level feature information of each slice.
  • For example, taking a slice pair including a first slice and a second slice, the extraction unit 302 may use the first residual network branch in the residual network to perform feature extraction on the first slice, obtaining high-level feature information of different scales and low-level feature information of different scales corresponding to the first slice; and use the second residual network branch in the residual network to perform feature extraction on the second slice, obtaining high-level feature information of different scales and low-level feature information of different scales corresponding to the second slice.
  • the network structure of the first residual network branch and the second residual network branch can be specifically determined according to actual application requirements, for example, ResNet-18 can be used.
  • the parameters of the first residual network branch and the second residual network branch can be shared, and the specific parameter settings can be determined according to the needs of the actual application.
  • Optionally, in order to obtain high-level feature information at more scales, spatial pyramid pooling, such as ASPP processing, may also be performed on the obtained high-level feature information.
  • the segmentation unit 303 is configured to segment the target object in the slice according to the low-level feature information and the high-level feature information of the slice for each slice in the slice pair to obtain the initial segmentation result of the slice.
  • For example, the segmentation unit 303 can be specifically configured to, for each slice in the slice pair, segment the target object in the slice through the segmentation network in the trained segmentation model according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice; specifically, for example, as follows:
  • the low-level feature information and high-level feature information of the slice are each convolved through the segmentation network in the trained segmentation model; the convolved high-level feature information is upsampled to the same size as the convolved low-level feature information and then connected with the convolved low-level feature information to obtain the connected feature information; and the pixels belonging to the target object in the slice are filtered according to the connected feature information to obtain the initial segmentation result of the slice. For details, please refer to the previous method embodiment, which will not be repeated here.
  • the fusion unit 304 is configured to fuse the low-level feature information and the high-level feature information of each slice in the slice pair.
  • For example, the fusion unit 304 may be specifically configured to fuse the low-level feature information and high-level feature information of each slice in the slice pair through the fusion network in the trained segmentation model.
  • For example, the fusion unit 304 can be specifically configured to: add the low-level feature information of the slices in the slice pair element by element to obtain the fused low-level feature information; add the high-level feature information of the slices in the slice pair element by element to obtain the fused high-level feature information; and fuse the fused low-level feature information and the fused high-level feature information through the fusion network in the trained segmentation model to obtain the fused feature information.
  • For example, the fusion unit 304 may be specifically configured to add the fused low-level feature information and the fused high-level feature information element by element through the fusion network in the trained segmentation model to obtain the fused feature information.
  • Optionally, an attention mechanism can also be used to let the network automatically assign different weights to different feature information, so that the network can fuse the feature information selectively. That is:
  • the fusion unit 304 can be specifically configured to, through the channel attention module in the fusion network of the trained segmentation model, assign weights to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information to obtain the weighted feature information; multiply the weighted feature information and the fused low-level feature information element by element to obtain the processed feature information; and add the processed feature information and the fused high-level feature information element by element to obtain the fused feature information.
  • The specific structure of the channel attention module can be determined according to actual application requirements, and will not be repeated here.
  • The determining unit 305 is configured to determine the association information between the slices in the slice pair according to the fused feature information.
  • the target object refers to the object that needs to be identified in the slice, such as "liver” in liver image segmentation, “heart” in heart image segmentation, and so on.
  • the determining unit 305 may include a screening subunit and a determining subunit, as follows:
  • The screening subunit can be configured to screen out the features belonging to the target object from the fused feature information.
  • The determining subunit can be configured to determine the association information between the slices according to the filtered features, for example as follows: determine the background area and the foreground area of each slice in the slice pair according to the filtered features; calculate the difference pixels and intersection pixels of the foreground areas between the slices; and generate the association information between the slices in the slice pair from the background area, difference pixels, and intersection pixels.
  • For example, the determining subunit can be specifically configured to take the pixels in the fused feature information that belong to the foreground area of only one slice in the slice pair as the difference pixels, and the pixels that belong to the foreground areas of both slices in the slice pair as the intersection pixels.
  • The determining subunit can further be specifically configured to perform pixel-type identification on the background area, the difference pixels, and the intersection pixels to obtain the association information between the slices.
  • the generating unit 306 is configured to generate a segmentation result of the slice pair based on the association information and the initial segmentation result of each slice in the slice pair.
  • For example, taking a slice pair including a first slice and a second slice, the generating unit 306 may be specifically configured to: predict the segmentation result of each slice based on the association information and the initial segmentation result of the other slice; average the predicted segmentation result of the first slice and the initial segmentation result of the first slice to obtain the adjusted segmentation result of the first slice, and average the predicted segmentation result of the second slice and the initial segmentation result of the second slice to obtain the adjusted segmentation result of the second slice; and then average the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice and binarize the averaged result to obtain the segmentation result of the slice pair.
  • Optionally, the trained image segmentation model can be trained on multiple slice pair samples labeled with true values; specifically, it can be set up in advance by operation and maintenance personnel, or trained by the image segmentation device itself. That is, as shown in FIG. 14, the image segmentation device may further include a collection unit 307 and a training unit 308;
  • The collection unit 307 may be configured to collect multiple slice pair samples labeled with true values, where each slice pair sample includes two slice samples taken from medical image samples; for details, please refer to the previous embodiment, which will not be repeated here.
  • The training unit 308 can be configured to: perform feature extraction on each slice sample in the slice pair sample through the residual network in the preset segmentation model to obtain the high-level feature information and low-level feature information of each slice sample; for each slice sample, segment the target object in the slice sample through the segmentation network in the preset segmentation model according to the low-level and high-level feature information of the slice sample to obtain the predicted segmentation value of the slice sample; fuse the low-level and high-level feature information of the slice samples in the slice pair sample through the fusion network in the preset segmentation model, and predict the association information between the slice samples in the slice pair sample according to the fused feature information; and converge the preset segmentation model according to the true values, the predicted segmentation value of each slice sample in the slice pair sample, and the predicted association information, obtaining the trained segmentation model.
  • For example, the training unit 308 may be specifically configured to use the Dice loss function to converge the segmentation model according to the true values, the predicted segmentation value of each slice sample in the slice pair sample, and the predicted association information, obtaining the trained segmentation model. A sketch of one such training step follows.
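A minimal sketch of one such convergence step, reusing the total_loss sketched earlier; that the model returns the two predicted segmentation values and the predicted association information in this form is an assumption.

```python
def train_step(model, optimizer, slice_a, slice_b, y_a, y_b, y_assoc):
    """One convergence step over one slice pair sample (illustrative names)."""
    optimizer.zero_grad()
    # model is assumed to output: predicted segmentation value of each slice
    # sample and the predicted association information between them
    p_a, p_b, p_assoc = model(slice_a, slice_b)
    loss = total_loss(p_a, y_a, p_b, y_b, p_assoc, y_assoc)
    loss.backward()      # converge the preset model under the Dice loss
    optimizer.step()
    return loss.item()
```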
  • In specific implementation, each of the above units can be implemented as an independent entity, or combined arbitrarily and implemented as the same entity or several entities; for the specific implementation of each unit, please refer to the previous method embodiments, which will not be repeated here.
  • As can be seen from the above, in the medical image segmentation device of this embodiment, the extraction unit 302 uses different receptive fields to perform feature extraction on each slice in the slice pair, obtaining the high-level and low-level feature information of each slice; then, on the one hand, for each slice in the slice pair, the segmentation unit 303 segments the target object in the slice according to the slice's low-level and high-level feature information to obtain the initial segmentation result of the slice; on the other hand, the fusion unit 304 fuses the low-level and high-level feature information of the slices in the pair, the determining unit 305 determines the association information between the slices according to the fused feature information, and further, the generating unit 306 generates the segmentation result of the slice pair based on the association information between the slices and the initial segmentation result of each slice. Since the device provided in this embodiment of the application takes the correlation between slices of the 3D medical image into account by segmenting two slices (a slice pair) simultaneously and further adjusting the segmentation results using the association between the slices, the shape information of the target object (such as the liver) can be captured more accurately, and the segmentation accuracy is higher.
  • FIG. 15 shows a schematic structural diagram of the electronic device involved in the embodiment of the present application, specifically:
  • The electronic device may include a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, an input unit 404, and other components. Those skilled in the art can understand that the structure shown in FIG. 15 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently. Among them:
  • The processor 401 is the control center of the electronic device; it connects the various parts of the entire electronic device through various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402, thereby monitoring the electronic device as a whole.
  • Optionally, the processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor, which mainly handles the operating system, user interface, application programs, and the like, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 401.
  • the memory 402 can be used to store software programs and modules.
  • the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402.
  • The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, the application programs required by at least one function (such as a sound playback function and an image playback function), and the like, and the data storage area may store data created according to the use of the electronic device, and the like.
  • In addition, the memory 402 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the memory 402 may further include a memory controller to provide the processor 401 with access to the memory 402.
  • the electronic device also includes a power supply 403 for supplying power to various components.
  • the power supply 403 may be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power management can be managed through the power management system.
  • the power supply 403 may also include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other components.
  • The electronic device may further include an input unit 404, which can be configured to receive input digital or character information and to generate keyboard, mouse, joystick, optical, or trackball signal input related to user settings and function control.
  • Although not shown, the electronic device may also include a display unit and the like, which will not be repeated here.
  • Specifically, in this embodiment, the processor 401 in the electronic device loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and runs the application programs stored in the memory 402 to realize various functions, as follows:
  • acquiring a slice pair, the slice pair including two slices sampled from the medical image to be segmented; performing feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice; for each slice in the slice pair, segmenting the target object in the slice according to the low-level and high-level feature information of the slice to obtain the initial segmentation result of the slice; fusing the low-level and high-level feature information of the slices in the slice pair, and determining the association information between the slices according to the fused feature information; and generating the segmentation result of the slice pair based on the association information and the initial segmentation result of each slice in the slice pair.
  • For example, the residual network in the trained segmentation model can be used to perform feature extraction on each slice to obtain the high-level and low-level feature information of each slice; then, for each slice in the slice pair, the target object in the slice is segmented through the segmentation network in the trained segmentation model according to the slice's low-level and high-level feature information, obtaining the initial segmentation result of the slice; the low-level and high-level feature information of the slices in the pair is fused through the fusion network in the trained segmentation model, and the association information between the slices is determined according to the fused feature information; and then the segmentation result of the slice pair is generated based on the association information and the initial segmentation result of each slice.
  • Optionally, the trained segmentation model can be trained on multiple slice pair samples labeled with true values; specifically, it can be preset by operation and maintenance personnel, or trained by the image segmentation device itself. That is, the processor 401 may also run the application programs stored in the memory 402 to realize the following functions:
  • collecting multiple slice pair samples labeled with true values; performing feature extraction on each slice sample in the slice pair sample through the residual network in the preset segmentation model; for each slice sample, segmenting the target object in the slice sample through the segmentation network in the preset segmentation model according to the low-level and high-level feature information of the slice sample to obtain the predicted segmentation value of the slice sample; fusing the low-level and high-level feature information of the slice samples in the slice pair sample, and predicting the association information between the slice samples according to the fused feature information; and converging the preset segmentation model according to the true values, the predicted segmentation value of each slice sample in the slice pair sample, and the predicted association information to obtain the trained segmentation model.
  • As can be seen from the above, the electronic device of this embodiment can use different receptive fields to perform feature extraction on each slice of the slice pair to obtain the high-level feature information and low-level feature information of each slice; then, on the one hand, for each slice in the slice pair, segment the target object in the slice according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result; on the other hand, fuse the low-level and high-level feature information of the slices in the pair, determine the association information between the slices according to the fused feature information, and then adjust the initial segmentation results of the slices using the obtained association information to obtain the finally required segmentation result. Considering the correlation between the slices of the 3D medical image, the method provided in the embodiment of the present application segments two slices (a slice pair) simultaneously and uses the correlation between the slices to further adjust the segmentation results, ensuring that the shape information of the target object (such as the liver) can be captured more accurately and making the segmentation more precise.
  • an embodiment of the present application provides a storage medium in which multiple instructions are stored, and the instructions can be loaded by a processor to execute the steps in any medical image segmentation method provided in the embodiments of the present application.
  • For example, the instructions can perform the following steps:
  • acquiring a slice pair, the slice pair including two slices sampled from the medical image to be segmented; performing feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice; for each slice in the slice pair, segmenting the target object in the slice according to the low-level and high-level feature information of the slice to obtain the initial segmentation result of the slice; fusing the low-level and high-level feature information of the slices in the slice pair, and determining the association information between the slices according to the fused feature information; and generating the segmentation result of the slice pair based on the association information and the initial segmentation results of the slices.
  • For example, the residual network in the trained segmentation model can be used to perform feature extraction on each slice in the slice pair to obtain the high-level and low-level feature information of each slice; then, for each slice in the slice pair, the target object in the slice is segmented through the segmentation network in the trained segmentation model according to the slice's low-level and high-level feature information, obtaining the initial segmentation result of the slice; the low-level and high-level feature information of the slices in the pair is fused through the fusion network in the trained segmentation model, and the association information between the slices is determined according to the fused feature information; and then the segmentation result of the slice pair is generated based on the association information and the initial segmentation result of each slice in the slice pair.
  • Optionally, the trained segmentation model can be trained on multiple slice pair samples labeled with true values; specifically, it can be preset by operation and maintenance personnel, or trained by the image segmentation device itself. That is, the instructions can also perform the following steps:
  • collecting multiple slice pair samples labeled with true values; performing feature extraction on each slice sample in the slice pair sample through the residual network in the preset segmentation model; for each slice sample, segmenting the target object in the slice sample through the segmentation network in the preset segmentation model according to the low-level and high-level feature information of the slice sample to obtain the predicted segmentation value of the slice sample; fusing the low-level and high-level feature information of the slice samples in the slice pair sample, and predicting the association information between the slice samples according to the fused feature information; and converging the preset segmentation model according to the true values, the predicted segmentation value of each slice sample in the slice pair sample, and the predicted association information to obtain the trained segmentation model.
  • The storage medium may include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

Abstract

The embodiments of this application disclose a medical image segmentation method and apparatus, an electronic device, and a storage medium. In the embodiments of this application, after a slice pair is acquired, feature extraction can be performed on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice; then, on the one hand, for each slice in the slice pair, the target object in the slice is segmented according to the low-level and high-level feature information of the slice to obtain the initial segmentation result of the slice; on the other hand, the low-level and high-level feature information of the slices in the slice pair is fused, and the association information between the slices is determined according to the fused feature information; and then, the segmentation result of the slice pair is generated based on the association information between the slices and the initial segmentation results of the slices in the slice pair. This scheme can improve the accuracy of segmentation.

Description

Medical image segmentation method and apparatus, electronic device, and storage medium

This application claims priority to Chinese Patent Application No. 2019103227838, entitled "Medical image segmentation method and apparatus, electronic device, and storage medium", filed with the China Patent Office on April 22, 2019, the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the field of artificial intelligence (AI) technology, and specifically to medical image processing technology.

Background

With the development of AI, AI is now more and more widely applied in the medical field, especially in the segmentation of medical images. For example, taking the segmentation of the liver as an example, a two-dimensional (2D) convolutional neural network that segments liver images slice by slice can be trained in advance; then a three-dimensional (3D) liver image to be segmented, such as a computed tomography (CT) image of the liver, is cut into slices, and the slices are respectively fed into the trained 2D convolutional neural network for segmentation to obtain the segmentation results, for example the liver region, and so on.

However, since the above 2D convolutional neural network slices the 3D medical image and then segments it slice by slice, the correlation between the slices acquired in the scanning process is ignored; it is therefore difficult to accurately capture the shape information of the target object (such as the liver), resulting in low segmentation accuracy.
Summary

The embodiments of this application provide a medical image segmentation method, apparatus, and storage medium, which can improve the accuracy of segmentation.

An embodiment of this application provides a medical image segmentation method, executed by an electronic device, the method including:

acquiring a slice pair, the slice pair including two slices sampled from a medical image to be segmented;

performing feature extraction on each slice in the slice pair using different receptive fields to obtain high-level feature information and low-level feature information of each slice in the slice pair;

for each slice in the slice pair, segmenting a target object in the slice according to the low-level feature information and high-level feature information of the slice to obtain an initial segmentation result of the slice;

fusing the low-level feature information and high-level feature information of the slices in the slice pair, and determining association information between the slices in the slice pair according to the fused feature information;

generating a segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair.

Correspondingly, an embodiment of this application further provides a medical image segmentation apparatus, including:

an acquisition unit configured to acquire a slice pair, the slice pair including two slices sampled from a medical image to be segmented;

an extraction unit configured to perform feature extraction on each slice in the slice pair using different receptive fields to obtain high-level feature information and low-level feature information of each slice in the slice pair;

a segmentation unit configured to, for each slice in the slice pair, segment a target object in the slice according to the low-level feature information and high-level feature information of the slice to obtain an initial segmentation result of the slice;

a fusion unit configured to fuse the low-level feature information and high-level feature information of the slices in the slice pair;

a determination unit configured to determine association information between the slices in the slice pair according to the fused feature information;

a generation unit configured to generate a segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair.

Correspondingly, this application further provides an electronic device, including a memory and a processor; the memory stores an application program, and the processor is configured to run the application program in the memory to perform the operations in any medical image segmentation method provided in the embodiments of this application.

In addition, an embodiment of this application further provides a storage medium storing a plurality of instructions, the instructions being suitable for being loaded by a processor to perform the steps in any medical image segmentation method provided in the embodiments of this application.

In addition, an embodiment of this application further provides a computer program product including instructions which, when run on a computer, cause the computer to perform the steps in any medical image segmentation method provided in the embodiments of this application.

In the embodiments of this application, after a slice pair is acquired, feature extraction can be performed on each slice in the slice pair using different receptive fields to obtain the high-level and low-level feature information of each slice; then, on the one hand, for each slice in the slice pair, the target object in the slice is segmented according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result; on the other hand, the low-level and high-level feature information of the slices in the pair is fused, the association information between the slices is determined according to the fused feature information, and then the segmentation result of the slice pair is generated based on the association information between the slices and the initial segmentation results of the slices. Considering that the slices of a 3D medical image are correlated, the method provided in the embodiments of this application segments two slices (a slice pair) simultaneously and further adjusts the segmentation results using the association relationship between the slices; therefore, the shape information of the target object (such as the liver) can be captured more accurately, and the segmentation accuracy is higher.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of this application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic diagram of a scenario of the medical image segmentation method provided in an embodiment of this application;

FIG. 2 is a flowchart of the medical image segmentation method provided in an embodiment of this application;

FIG. 3 is a schematic diagram of receptive fields in the medical image segmentation method provided in an embodiment of this application;

FIG. 4 is a schematic structural diagram of the residual network in the image segmentation model provided in an embodiment of this application;

FIG. 5 is a schematic structural diagram of the image segmentation model provided in an embodiment of this application;

FIG. 6 is a schematic diagram of signal components in the medical image segmentation method provided in an embodiment of this application;

FIG. 7 is a schematic structural diagram of the channel attention module in the image segmentation model provided in an embodiment of this application;

FIG. 8 is another schematic structural diagram of the image segmentation model provided in an embodiment of this application;

FIG. 9 is a schematic diagram of association relationships in the medical image segmentation method provided in an embodiment of this application;

FIG. 10 is another schematic diagram of association relationships in the medical image segmentation method provided in an embodiment of this application;

FIG. 11 is another flowchart of the medical image segmentation method provided in an embodiment of this application;

FIG. 12 is an example diagram of overlapping patches in the medical image segmentation method provided in an embodiment of this application;

FIG. 13 is a schematic structural diagram of the medical image segmentation apparatus provided in an embodiment of this application;

FIG. 14 is another schematic structural diagram of the medical image segmentation apparatus provided in an embodiment of this application;

FIG. 15 is a schematic structural diagram of the electronic device provided in an embodiment of this application.
Detailed Description

The technical solutions in the embodiments of this application will be described clearly and completely below with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. Based on the embodiments of the invention, all other embodiments obtained by those skilled in the art without creative effort fall within the scope of protection of the invention.

Artificial intelligence is a theory, method, technology, and application system that uses digital computers or machines controlled by digital computers to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can respond in a manner similar to human intelligence. Artificial intelligence is the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.

Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, with both hardware-level and software-level technologies. Basic AI technologies generally include technologies such as sensors, dedicated AI chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, and mechatronics. AI software technologies mainly include several major directions: computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.

Computer vision (CV) is a science that studies how to make machines "see"; more specifically, it refers to machine vision in which cameras and computers replace human eyes to recognize, track, and measure targets, with further graphics processing so that the computer processing yields images more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies, attempting to build artificial intelligence systems that can obtain information from images or multi-dimensional data. Computer vision technologies usually include image segmentation, image processing, image recognition, image semantic understanding, image retrieval, OCR, video processing, video semantic understanding, video content/behavior recognition, 3D object reconstruction, 3D technology, virtual reality, augmented reality, and simultaneous localization and mapping, as well as common biometric technologies such as face recognition and fingerprint recognition.

Machine learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It specializes in studying how computers simulate or implement human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and its applications cover all fields of artificial intelligence. Machine learning and deep learning usually include technologies such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstrations.

The medical image segmentation method provided in the embodiments of this application involves computer vision technology and machine learning technology of artificial intelligence, which are described specifically through the following embodiments.

The embodiments of this application provide a medical image segmentation method and apparatus, an electronic device, and a storage medium, where the medical image segmentation apparatus can be integrated in an electronic device, and the electronic device can be a server, a terminal, or another device.

Image segmentation refers to the technique and process of dividing an image into several specific regions with unique properties and extracting the target of interest. In the embodiments of this application, it mainly refers to segmenting a 3D medical image and finding the required target object. For example, the 3D medical image is divided along the z-axis direction into multiple single-frame slices (slices for short), and the liver region is then segmented from the slices, and so on; after the segmentation results of all slices of the 3D medical image are obtained, these segmentation results are combined along the z-axis direction to obtain the 3D segmentation result corresponding to the 3D medical image, namely the 3D form of the target object such as the liver region. The segmented target object can subsequently be analyzed by medical staff or other medical experts for further operations.

For example, referring to FIG. 1, taking the case where the medical image segmentation apparatus is integrated in an electronic device, the electronic device can acquire a slice pair (the slice pair including two slices sampled from the medical image to be segmented) and perform feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice; then, on the one hand, for each slice in the slice pair, segment the target object in the slice according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result; on the other hand, fuse the low-level and high-level feature information of the slices in the pair and determine the association information between the slices according to the fused feature information; and further, generate the segmentation result of the slice pair based on the association information and the initial segmentation results of the slices.
Detailed descriptions are given below. It should be noted that the description order of the following embodiments is not intended as a limitation on the preferred order of the embodiments.

This embodiment will be described from the perspective of the medical image segmentation apparatus, which can specifically be integrated in an electronic device such as a server or a terminal; the terminal can include a tablet computer, a notebook computer, a personal computer (PC), medical image acquisition equipment, other electronic medical equipment, and the like.

A medical image segmentation method includes: acquiring a slice pair, the slice pair including two slices sampled from a medical image to be segmented; performing feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice in the slice pair; for each slice in the slice pair, segmenting the target object in the slice according to the low-level and high-level feature information of the slice to obtain the initial segmentation result of the slice; fusing the low-level and high-level feature information of the slices in the slice pair, and determining the association information between the slices in the slice pair according to the fused feature information; and generating the segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair.

As shown in FIG. 2, the specific flow of the medical image segmentation method can be as follows:

101. Acquire a slice pair, where the slice pair includes two slices sampled from the medical image to be segmented.

For example, a medical image to be segmented can be acquired, and two slices sampled from it; the set composed of these two slices is called a slice pair.

The medical image to be segmented can be provided to the medical image segmentation apparatus after image acquisition of biological tissue (such as the heart or liver) by medical image acquisition equipment, which can include electronic devices such as a magnetic resonance imaging (MRI) scanner, a computed tomography (CT) scanner, a colposcope, or an endoscope.

102. Perform feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice in the slice pair.

In a convolutional neural network, the receptive field determines the size of the region of the input layer that one element of the output of a given layer corresponds to. That is, the receptive field is the size of the mapping, on the input image, of an element of a layer's output (i.e., the feature map, also called feature information) in the convolutional neural network; see, for example, FIG. 3. Generally, the receptive field of an output feature pixel of the first convolutional layer (such as C1) equals the size of the convolution kernel (filter size), while the receptive field of a higher convolutional layer (such as C4) is related to the kernel sizes and strides of all layers before it. Therefore, different receptive fields capture information at different levels, achieving the purpose of extracting feature information at different scales; in other words, after feature extraction is performed on a slice using different receptive fields, high-level feature information at multiple scales and low-level feature information at multiple scales can be obtained for the slice, as the sketch below illustrates.
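The receptive-field arithmetic referred to here can be made concrete with a small sketch: for a stack of convolutions, the receptive field grows by (kernel − 1) times the product of the strides of all earlier layers; the layer configuration below is an illustrative assumption.

```python
def receptive_field(layers):
    """layers: list of (kernel_size, stride) tuples, first layer first."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # expand by kernel extent at the current jump
        jump *= s              # effective stride seen by the next layer
    return rf

# e.g. three 3x3 convolutions, the second with stride 2:
print(receptive_field([(3, 1), (3, 2), (3, 1)]))  # -> 9
```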
There are multiple ways to perform feature extraction on a slice with different receptive fields; for example, it can be implemented through a residual network. That is, the step of "performing feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice in the slice pair" can include: performing feature extraction on each slice in the slice pair through the residual network in the trained segmentation model to obtain the high-level feature information and low-level feature information of each slice in the slice pair.

For example, as shown in FIG. 4, taking a slice pair that includes a first slice and a second slice, and a residual network that includes a first residual network branch and a second residual network branch in parallel with identical structures, the first residual network branch can perform feature extraction on the first slice to obtain the high-level feature information and the low-level feature information of different scales corresponding to the first slice, and the second residual network branch can perform feature extraction on the second slice to obtain the high-level feature information and the low-level feature information of different scales corresponding to the second slice.

The high-level feature information refers to the feature map finally output by the residual network; so-called "high-level features" can generally contain information related to categories, high-level abstractions, and the like. The low-level feature information refers to the feature maps obtained by the residual network in the course of feature extraction from the medical image to be segmented; so-called "low-level features" can generally contain image details such as edges and texture.

For example, taking a residual network branch (such as the first or second residual network branch) that includes multiple residual blocks in series, the high-level feature information is the feature map output by the last residual block, and the low-level feature information is the feature maps output by the residual blocks other than the first and the last.

For example, referring to FIG. 4, if each residual network branch includes residual block 1 (Block1), residual block 2 (Block2), residual block 3 (Block3), residual block 4 (Block4), and residual block 5 (Block5), then the feature map output by residual block 5 is the high-level feature information, and the feature maps output by residual blocks 2, 3, and 4 are the low-level feature information.

The network structures of the first and second residual network branches can be determined according to the requirements of the actual application; for example, ResNet-18 (a residual network) can be used. In addition, the parameters of the first residual network branch and the second residual network branch can be shared, and the specific parameter settings can be determined according to the requirements of the actual application.

Optionally, in order to obtain high-level feature information at more scales, spatial pyramid pooling (SPP) can also be performed on the obtained high-level feature information. For example, referring to FIG. 5, a spatial pyramid pooling module, such as an atrous spatial pyramid pooling (ASPP) module, can be added after each of the first and second residual network branches. Since ASPP uses atrous convolution, it can enlarge the receptive field of the features without sacrificing the spatial resolution of the features, so that high-level feature information at more scales can naturally be extracted.

It should be noted that the parameters of the ASPP connected to the first residual network branch and the parameters of the ASPP connected to the second residual network branch may not be shared; the specific parameters can be determined according to the requirements of the actual application and will not be repeated here.

It should also be noted that, as shown in FIG. 5, in the embodiments of this application, the residual network part can be regarded as the encoding module part of the trained segmentation model.
103. For each slice in the slice pair, segment the target object in the slice according to the low-level feature information and high-level feature information of the slice to obtain the initial segmentation result of the slice.

For example, for each slice in the slice pair, the target object in the slice can be segmented through the segmentation network in the trained segmentation model according to the slice's low-level and high-level feature information to obtain the initial segmentation result of the slice. Specifically: for each slice in the slice pair, through the segmentation network in the trained segmentation model, the slice's low-level feature information and high-level feature information are separately convolved (Conv); the convolved high-level feature information is upsampled (Upsample) to the same size as the convolved low-level feature information and then connected (Concat) with the convolved low-level feature information to obtain the connected feature information; and the pixels belonging to the target object in the slice are filtered according to the connected feature information to obtain the initial segmentation result of the slice.

The segmentation network can be regarded as the decoding module part of the trained segmentation model.

For example, still taking the slice pair including a first slice and a second slice, if the segmentation network includes a first segmentation network branch (decoding module A) and a second segmentation network branch (decoding module B) in parallel with identical structures, then, as shown in FIG. 5, the process can be specifically as follows:

(1) The first segmentation network branch convolves the low-level feature information and the high-level feature information of the first slice, for example with a 1×1 kernel; the convolved high-level feature information is upsampled to the same size as the convolved low-level feature information and connected with the convolved low-level feature information to obtain the connected feature information of the first slice; then, the pixels belonging to the target object in the first slice are filtered according to the connected feature information to obtain the initial segmentation result of the first slice; for example, the connected feature information can be convolved with a 3×3 kernel and upsampled to the size of the first slice, giving the initial segmentation result of the first slice.

(2) The second segmentation network branch convolves the low-level feature information and the high-level feature information of the second slice in the same way, for example with a 1×1 kernel; the convolved high-level feature information is upsampled to the same size as the convolved low-level feature information and connected with it to obtain the connected feature information of the second slice; then, the pixels belonging to the target object in the second slice are filtered according to the connected feature information, for example by convolving the connected feature information with a 3×3 kernel and upsampling to the size of the second slice, giving the initial segmentation result of the second slice.
104. Fuse the low-level feature information and high-level feature information of the slices in the slice pair.

For example, the low-level and high-level feature information of the slices in the slice pair can be fused through the fusion network in the trained segmentation model.

There are multiple ways to fuse the low-level feature information with the high-level feature information; for example, element-wise addition (Sum) or channel concatenation can be used. Taking element-wise addition as an example, the step of "fusing the low-level and high-level feature information of the slices in the slice pair through the fusion network in the trained segmentation model" can include:

(1) Add the low-level feature information of the slices in the slice pair element by element to obtain the fused low-level feature information. For example, as shown in FIG. 5, taking a slice pair including a first slice and a second slice, the low-level feature information of the first slice and that of the second slice can be added element by element to obtain the fused low-level feature information.

(2) Add the high-level feature information of the slices in the slice pair element by element to obtain the fused high-level feature information. For example, as shown in FIG. 5, still taking the slice pair including the first and second slices, the high-level feature information of the first slice and that of the second slice can be added element by element to obtain the fused high-level feature information.

(3) Fuse the fused low-level feature information and the fused high-level feature information through the fusion network in the trained segmentation model to obtain the fused feature information. For example, either of the following ways can be used:

A. First way: add the fused low-level feature information and the fused high-level feature information element by element through the fusion network in the trained segmentation model to obtain the fused feature information.

Optionally, since different features play roles with different weights in a specific task, in order to effectively give different features their due importance so that the features can be better used and the accuracy of image segmentation improved, an attention mechanism can also be used to let the network automatically assign different weights to different feature information, so that the network can fuse the feature information selectively. That is, besides the first way, a second way can also be used to fuse the fused low-level and fused high-level feature information, as follows:

B. Second way: through the channel attention module in the fusion network of the trained segmentation model, assign weights to the fused low-level feature information according to the fused low-level and fused high-level feature information to obtain the weighted feature information; multiply the weighted feature information and the fused low-level feature information element by element to obtain the processed feature information; and add the processed feature information and the fused high-level feature information element by element to obtain the fused feature information; see FIG. 5.

The channel attention module is a network module adopting a channel-domain attention mechanism. In a convolutional neural network, each image is initially represented by three channels (R, G, B); after passing through different convolution kernels, each channel generates a new signal. For example, convolving each channel of the image features with 64 kernels produces a matrix of 64 new channels (H, W, 64), where H and W denote the height and width of the image features. The features of each channel actually represent the components of the image on the different convolution kernels, similar to a time-frequency transform, and the convolution with a kernel is analogous to applying a Fourier transform to the signal, so that the information of one feature channel can be decomposed into signal components on 64 convolution kernels; see, for example, FIG. 6. Since each signal can be decomposed into signal components on 64 kernels (corresponding to the 64 generated channels), and these new channels do not contribute equally to the key information, a weight can be assigned to each channel to represent the channel's relevance to the key information (the information that plays a key role in the segmentation task): the larger the weight, the higher the relevance, and a channel with higher relevance is a channel that deserves more attention. For this reason, this mechanism is called the "channel-domain attention mechanism".

The specific structure of the channel attention module can be determined according to the requirements of the actual application. For example, as shown in FIG. 7, the channel attention module in the fusion network of the trained segmentation model can assign weights to the fused low-level feature information according to the fused low-level and fused high-level feature information to obtain the weighted feature information; then the weighted feature information and the fused low-level feature information are multiplied element by element (Mul) to obtain the processed feature information; and then the processed feature information and the fused high-level feature information are added element by element to obtain the fused feature information.

It should be noted that steps 103 and 104 can be executed in any order, which will not be repeated here.
105. Determine the association information between the slices in the slice pair according to the fused feature information.

For example, the association information between the slices in the slice pair can be determined according to the fused feature information through the fusion network in the trained segmentation model; specifically, the features belonging to the target object can be filtered from the fused feature information, and the association information between the slices in the slice pair determined according to the filtered features (i.e., the features belonging to the target object).

The target object refers to the object that needs to be identified in a slice, for example the "liver" in liver image segmentation, the "heart" in heart image segmentation, and so on.

There are multiple ways to determine the association information between the slices from the fused feature information; for example, the following can be used:

(1) Determine the background area and foreground area of each slice in the slice pair according to the filtered features.

For example, taking the liver as the target object, the area where the filtered features belonging to the liver are located can be determined as a slice's foreground area, and the remaining area of the slice as the slice's background area. Likewise, taking the heart as the target object, the area where the filtered features belonging to the heart are located can be determined as the foreground area, and the remaining area as the background area.

(2) Calculate the difference pixels and intersection pixels of the foreground areas between the slices in the slice pair.

For example, the pixels in the fused feature information that belong to the foreground area of only one of the slices in the pair can be combined into the pixel set of the difference region, called the difference pixels for short; and the pixels in the fused feature information that belong to the foreground areas of both slices in the pair can be combined into the pixel set of the intersection region, called the intersection pixels for short.

The fused feature information can be regarded as the feature information corresponding to the superimposed slice obtained by "superimposing all slices in the slice pair"; therefore, in the superimposed slice, taking the pixels in the regions where the two foreground areas do not overlap gives the difference pixels, and similarly, taking the pixels in the regions where the foreground areas overlap gives the intersection pixels.

(3) Generate the association information between the slices from the background area, difference pixels, and intersection pixels of each slice.

For example, the pixels in the fused feature information (the superimposed slice) that belong to the background areas of both slices in the pair can be taken as the background area of the slice pair; in other words, the intersection of the background areas of all slices is taken as the background area of the slice pair. Then, pixel-type identification is performed on the background area, difference pixels, and intersection pixels of the slice pair to obtain the association information between the slices.

For example, different pixel values can be used for the pixel-type identification of these regions: the pixel value of the background area of the slice pair can be set to "0", the value of the difference pixels to "1", and the value of the intersection pixels to "2"; alternatively, the pixel value of the background area can be set to "0", the value of the difference pixels to "2", and the value of the intersection pixels to "1", and so on.

Optionally, different colors can also be used for the pixel-type identification of these regions; for example, the background area can be set to "black", the difference pixels to "red", and the intersection pixels to "green"; alternatively, the background area can be set to "black", the difference pixels to "green", and the intersection pixels to "red", and so on. A sketch of this identification follows.
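This pixel-type identification can be sketched directly from two binary foreground masks, using the 0/1/2 values above (a minimal NumPy sketch):

```python
import numpy as np

def association_map(fg_a: np.ndarray, fg_b: np.ndarray) -> np.ndarray:
    """fg_a, fg_b: boolean foreground masks of the two slices."""
    label = np.zeros(fg_a.shape, dtype=np.uint8)  # 0: background of the pair
    label[fg_a ^ fg_b] = 1                        # 1: difference pixels
    label[fg_a & fg_b] = 2                        # 2: intersection pixels
    return label
```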
106. Generate the segmentation result of the slice pair based on the association information between the slices in the slice pair and the initial segmentation results of the slices in the slice pair.

For example, taking a slice pair including a first slice and a second slice, the step of "generating the segmentation result of the slice pair based on the association information between the slices and the initial segmentation results of the slices" can include:

(1) Predict the segmentation result of the second slice from the association information between the slices and the initial segmentation result of the first slice to obtain the predicted segmentation result of the second slice.

Since the association information between the slices here refers to the association information between the first and second slices, which reflects, among other things, the difference pixels and intersection pixels between the first and second slices, the segmentation result of the second slice can be predicted from the association information and the initial segmentation result of the first slice.

For example, if the difference pixels of the first and second slices form region A, the intersection pixels form region B, and the initial segmentation result of the first slice is region C, then the predicted segmentation result of the second slice is {(A∪B)\C}∪B, where "∪" denotes the union and "\" the set difference.

(2) Predict the segmentation result of the first slice from the association information between the slices and the initial segmentation result of the second slice to obtain the predicted segmentation result of the first slice.

Similarly to the prediction of the second slice's segmentation result, if the difference pixels of the first and second slices form region A, the intersection pixels form region B, and the initial segmentation result of the second slice is region D, then the predicted segmentation result of the first slice is {(A∪B)\D}∪B.

(3) Adjust the initial segmentation result of the first slice based on the predicted segmentation result of the first slice to obtain the adjusted segmentation result of the first slice; specifically: average the predicted segmentation result of the first slice and the initial segmentation result of the first slice to obtain the adjusted segmentation result of the first slice. That is, each pixel value in the predicted segmentation result of the first slice is averaged with the pixel value at the same position in the initial segmentation result of the first slice, and the average is used as the pixel value at the same position in the adjusted segmentation result of the first slice.

(4) Adjust the initial segmentation result of the second slice based on the predicted segmentation result of the second slice in the same way to obtain the adjusted segmentation result of the second slice.

(5) Fuse the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice to obtain the segmentation result of the slice pair; specifically: average the adjusted segmentation results of the first and second slices (pixel by pixel, at the same positions) and binarize the averaged result to obtain the segmentation result of the slice pair.

Binarization refers to setting the gray value of each pixel of the image to 0 or 255, that is, presenting the entire image with an unmistakable visual effect of only black and white.
As can be seen from the description of the preceding embodiment, the trained segmentation model in the embodiments of this application can include a residual network, a segmentation network, and a fusion network, where the residual network can include a first residual network branch and a second residual network branch in parallel, and the segmentation network can include a first segmentation network branch and a second segmentation network branch in parallel. It should be noted that the residual network part can be regarded as the encoder part of the trained image segmentation model, called the encoding module, which extracts feature information, while the segmentation network can be regarded as the decoder part of the trained segmentation model, called the decoding module, which classifies and segments based on the extracted feature information.

Optionally, the trained segmentation model can be trained on multiple slice pair samples labeled with true values; specifically, it can be set up in advance by operation and maintenance personnel, or trained by the image segmentation apparatus itself. That is, before the step of "performing feature extraction on each slice in the slice pair through the residual network in the trained segmentation model to obtain the high-level and low-level feature information of each slice", the medical image segmentation method can further include:

S1. Collect multiple slice pair samples labeled with true values, where each slice pair sample includes two slice samples taken from medical image samples.

For example, multiple medical images can be collected as an original data set, for example obtained from a database or a network; the medical images in the original data set are then preprocessed to obtain images that meet the input standard of the preset segmentation model, giving the medical image samples. The obtained medical image samples are cut into slices (called slice samples in the embodiments of this application), the target object is annotated in each slice sample (called true-value annotation), and the slice samples are combined two by two into sets, giving multiple slice pair samples labeled with true values; a pairing sketch is given after this paragraph.

Preprocessing can include operations such as deduplication, cropping, rotation, and/or flipping. For example, if the input size of the preset segmentation network is "128*128*32 (width*height*depth)", the images in the original data set can be cropped to a size of 128*128*32; of course, other preprocessing operations can further be performed on these images.
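The two-by-two composition can be sketched with itertools; restricting pairs to slices of the same volume or to nearby slices, as the interval-based sampling suggests, is omitted for brevity, and the (image, mask) tuple layout is an assumption.

```python
from itertools import combinations

def make_pair_samples(slices):
    """slices: list of (image, mask) tuples -> all two-by-two pair samples."""
    return list(combinations(slices, 2))
```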
S2. Perform feature extraction on each slice sample in the slice pair sample through the residual network in the preset segmentation model to obtain the high-level feature information and low-level feature information of each slice sample.

For example, taking a slice pair sample including a first slice sample and a second slice sample, and a residual network including a first residual network branch and a second residual network branch in parallel, the first residual network branch can perform feature extraction on the first slice sample to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the first slice sample, and the second residual network branch can perform feature extraction on the second slice sample to obtain high-level feature information of different scales and low-level feature information of different scales corresponding to the second slice sample.

S3. For each slice sample in the slice pair sample, segment the target object in the slice sample through the segmentation network in the preset segmentation model according to the low-level and high-level feature information of the slice sample to obtain the predicted segmentation value (i.e., the predicted probability map) of the slice sample.

For example, taking the slice pair sample including the first and second slice samples, if the segmentation network includes a first segmentation network branch and a second segmentation network branch in parallel, the following operations can be performed:

A. The first segmentation network branch convolves the low-level feature information and the high-level feature information of the first slice sample, for example with a 1×1 kernel; the convolved high-level feature information is upsampled to the same size as the convolved low-level feature information and connected with it to obtain the connected feature information of the first slice sample; then, the pixels belonging to the target object in the first slice sample are filtered according to the connected feature information to obtain the predicted segmentation value of the first slice sample, for example by convolving the connected feature information with a 3×3 kernel and upsampling to the size of the first slice sample.

B. The second segmentation network branch convolves the low-level and high-level feature information of the second slice sample in the same way, connects the upsampled convolved high-level feature information with the convolved low-level feature information to obtain the connected feature information of the second slice sample, and filters the pixels belonging to the target object accordingly to obtain the predicted segmentation value of the second slice sample.

S4. Fuse the low-level and high-level feature information of the slice samples in the slice pair sample through the fusion network in the preset segmentation model, and predict the association information between the slice samples in the slice pair sample according to the fused feature information.

For example, the low-level feature information of the slice samples in the slice pair sample can be added element by element to obtain the fused low-level feature information, and the high-level feature information of the slice samples added element by element to obtain the fused high-level feature information; the fused low-level and fused high-level feature information are then fused through the fusion network in the preset segmentation model to obtain the fused feature information, from which the features belonging to the target object can be filtered, and the association information between the slice samples in the slice pair sample determined from the filtered features.

For the specific way of fusing the fused low-level and fused high-level feature information, see the preceding embodiment; the way of calculating the association information between slice samples is the same as that of calculating the association information between slices, which can also be found in the preceding embodiment and will not be repeated here.

S5. Converge the preset segmentation model according to the true values, the predicted segmentation values of the slice samples in the slice pair sample, and the predicted association information to obtain the trained segmentation model.

For example, a loss function such as the Dice loss function can be used to converge the preset segmentation model according to the true values, the predicted segmentation values, and the predicted association information, giving the trained segmentation model.
The loss function can be set according to the requirements of the actual application. For example, taking a slice pair sample that includes a first slice sample x_i and a second slice sample x_j, where the true value labeled on x_i is y_i and the true value labeled on x_j is y_j, the Dice loss function of the first segmentation network branch can be as follows:

$$\mathcal{L}_{Dice}(y_i, p_i) = 1 - \frac{2\sum_{s,t} y_i^{st}\, p_i^{st}}{\sum_{s,t} y_i^{st} + \sum_{s,t} p_i^{st}}$$

and the Dice loss function of the second segmentation network branch can be as follows:

$$\mathcal{L}_{Dice}(y_j, p_j) = 1 - \frac{2\sum_{s,t} y_j^{st}\, p_j^{st}}{\sum_{s,t} y_j^{st} + \sum_{s,t} p_j^{st}}$$

where p_i and p_j are the predicted segmentation values of the first and second segmentation network branches respectively, and s and t are the row and column position indices in the slice: $y_i^{st}$ denotes the labeled true value of the pixel at position index (s, t) in the first slice sample, $p_i^{st}$ denotes the predicted segmentation value of the pixel at position index (s, t) in the first slice sample, $y_j^{st}$ denotes the labeled true value of the pixel at position index (s, t) in the second slice sample, and $p_j^{st}$ denotes the predicted segmentation value of the pixel at position index (s, t) in the second slice sample.

Taking the case where the association information between slices output by the fusion network includes three relation types, namely background area, intersection pixels, and difference pixels, the Dice loss function of the fusion network can be calculated from the two Dice loss functions above as:

$$\mathcal{L}_{Dice}(y_{ij}, p_{ij}) = 1 - \frac{1}{3}\sum_{l}\frac{2\sum_{s,t} y_{ij}^{st,l}\, p_{ij}^{st,l}}{\sum_{s,t} y_{ij}^{st,l} + \sum_{s,t} p_{ij}^{st,l}}$$

where y_ij is the true value of the association relationship between the first slice sample x_i and the second slice sample x_j, which can be calculated from the true values labeled on x_i and x_j: for example, the background area of the image obtained by superimposing x_i and x_j, as well as the difference and intersection between the true values labeled on x_i and on x_j, can be determined; the background area, difference, and intersection thus obtained are the true values of the "background area, difference pixels, and intersection pixels" after superimposing x_i and x_j, namely the true value of the association relationship referred to in the embodiments of this application.

p_ij is the association relationship between x_i and x_j output by the fusion network, and s and t are the row and column position indices in the slice: $y_{ij}^{st,l}$ denotes the true value of the association relationship between the superimposed pixels at position index (s, t) in the superimposed slice samples, and $p_{ij}^{st,l}$ denotes the predicted value of the association relationship between the superimposed pixels at position index (s, t) (i.e., the association relationship output by the fusion network); l is the class index over the above three relation types (background area, intersection pixels, and difference pixels).

From the Dice loss function of the first segmentation network branch, the Dice loss function of the second segmentation network branch, and the Dice loss function of the fusion network, the overall loss function $\mathcal{L}_{total}$ of the image segmentation model can be calculated as:

$$\mathcal{L}_{total} = \lambda_1\,\mathcal{L}_{Dice}(y_i, p_i) + \lambda_2\,\mathcal{L}_{Dice}(y_j, p_j) + \lambda_\#\,\mathcal{L}_{Dice}(y_{ij}, p_{ij})$$

where λ1, λ2 and λ# are manually set hyperparameters used to balance the contribution of each part of the loss to the overall loss.
As can be seen from the above, in this embodiment, after a slice pair is acquired, feature extraction can be performed on the slices in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice; then, on the one hand, for each slice in the slice pair, the target object in the slice is segmented according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result; on the other hand, the low-level and high-level feature information of the slices in the pair is fused, and the association information between the slices determined according to the fused feature information; finally, the segmentation result of the slice pair is generated based on the association information between the slices and the initial segmentation results of the slices. Since the method provided in the embodiments of this application takes into account that the slices of a 3D medical image are correlated, segmenting two slices (a slice pair) simultaneously and further adjusting the segmentation results using the association relationship between the two slices, the shape information of the target object (such as the liver) can be captured more accurately, and the segmentation accuracy is higher.
Based on the method described in the preceding embodiments, further detailed description is given below by way of example.

In this embodiment, the image segmentation apparatus is integrated in an electronic device and the target object is the liver.

(I) Training of the image segmentation model.

As shown in FIGS. 5 and 8, the image segmentation model can include a residual network, a segmentation network, and a fusion network. The residual network can include two parallel residual network branches with identical structures, a first residual network branch and a second residual network branch; optionally, each residual network branch can further be followed by an ASPP (atrous spatial pyramid pooling) module. The residual network, as the encoding module of the image segmentation model, can be used to extract feature information from input images such as the slices in a slice pair or the slice samples in a slice pair sample.

Similarly, the segmentation network can include two parallel segmentation network branches with identical structures, a first segmentation network branch and a second segmentation network branch; the segmentation network, as the decoding module of the image segmentation model, is used to segment the target object, such as the liver, based on the feature information extracted by the encoding module.

The fusion network is used to predict the association relationship between the slices in a slice pair (or the slice samples in a slice pair sample) based on the feature information extracted by the encoding module. Based on this structure of the image segmentation model, its training is described in detail below.

First, the electronic device can collect multiple 3D medical images containing liver structures, for example obtain multiple such images from a database or a network, and then preprocess these 3D medical images, for example by deduplication, cropping, rotation, and/or flipping, to obtain images that meet the input standard of the preset segmentation model as the medical image samples. The medical image samples are then sampled along the z-axis direction (with 3D coordinate axes {x, y, z}) at a certain interval to obtain multiple slice samples; thereafter, information such as the liver region is annotated in each slice sample, and the slice samples are combined two by two into sets, giving multiple slice pair samples labeled with true values.

It should be noted that when composing slice samples into slice pair samples, multiple combination ways can be used; for example, slice sample 1 and slice sample 2 can form slice pair sample 1, and then slice sample 1 and slice sample 3 can form slice pair sample 2, and so on. In this way, through different combinations, a limited number of slice samples can be augmented into more training data (i.e., data augmentation), so that the training of the image segmentation model can be completed even with a small amount of manually annotated data.
其次,在得到多对标注了真实值的切片对样本之后,电子设备可以将切片对样本输入至预设的影像分割模型,通过残差网络对切片对样本进行特征提取,比如,具体可以通过第一残差网络分支对该切片对样本中的第一切片样本进行特征提取,得到第一切片样本对应的不同尺度的高层特征信息和不同尺度的低层特征信息;以及通过第二残差网络分支对该切片对样本中的第二切片样本进行特征提取,得到第二切片样本对应的不同尺度的高层特征信息和不同尺度的低层特征信息。
可选的,还可以进一步利用ASPP对第一切片样本对应的高层特征信息以及第二切片样本对应的高层特征信息做进一步处理,以得到更多不同尺度的 高层特征信息,参见图8。
再者,在得到第一切片样本对应的高层特征信息和低层特征信息、以及第二切片样本对应的高层特征信息和低层特征信息之后,一方面,电子设备可以根据这些高层特征信息和低层特征信息,分别利用第一分割网络分支和第二分割网络分支对第一切片样本和第二切片样本进行肝脏的分割,得到第一切片样本的预测分割值和第二切片样本的预测分割值;另一方面,电子设备可以通过融合网络,将第一切片样本和第二切片样本的低层特征信息和高层特征信息进行融合,并根据融合后特征信息预测第一切片样本和第二切片样本之间的关联信息,比如,可以如下:
参见图8,可以将第一切片样本的低层特征信息和第二切片样本的低层特征信息进行逐元素相加,得到融合后低层特征信息,以及将第一切片样本的高层特征信息和第二切片样本的高层特征信息进行逐元素相加,得到融合后高层特征信息,然后,通过通道注意力模块根据该融合后低层特征信息和融合后高层特征信息,为融合后低层特征信息赋予权重,得到加权后特征信息,进而,将该加权后特征信息和该融合后低层特征信息进行逐元素相乘(Mul)得到处理后特征信息,并将该处理后特征信息和融合后高层特征信息进行逐元素相加,便可得到融合后特征信息。此后,便可以从融合后特征信息中筛选出属于肝脏的特征,并根据筛选出的属于肝脏的特征,预测第一切片样本和第二切片样本之间的关联信息,比如,可以预测第一切片样本和第二切片样本的背景区域,以及第一切片样本和第二切片样本之间的交集像素和差集像素等。
Finally, the preset image segmentation model can be converged by using the ground-truth values annotated for the slice pair samples, the predicted segmentation values of the slice samples in the slice pair samples, and the predicted association information, thereby obtaining the trained image segmentation model.

The ground-truth values annotated for a slice pair sample include the liver region annotated in the first slice sample and that annotated in the second slice sample. From these two annotated liver regions, the true association relationship between the first and second slice samples can further be determined, including the background region of the slice pair sample formed by the two slice samples, the true difference pixels between them, and the true intersection pixels between them.

The true background region of the slice pair sample formed by the first and second slice samples can be obtained by superimposing the two slice samples and taking the intersection of the background region of the first slice sample and that of the second slice sample. The true difference pixels and true intersection pixels between the two slice samples can be obtained by computing the difference set and the intersection set, respectively, between the liver region annotated in the first slice sample and that annotated in the second slice sample.

Optionally, in order to identify the true association relationship between the first and second slice samples quickly and conveniently, different types of regions may be marked with different colors or pixel values in the superimposed map of the two annotated liver regions. For example, referring to FIG. 9 and FIG. 10, the background region of the slice pair sample may be marked black, the intersection pixels red (white in FIG. 9 and FIG. 10), and the difference pixels green (gray in FIG. 9 and FIG. 10); alternatively, the pixel value of the background region may be set to 0, that of the intersection pixels to 1, and that of the difference pixels to 2, and so on.
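A sketch of deriving this association ground truth from the two annotated masks, using the 0/1/2 value scheme from the example above (a NumPy layout is assumed and the function name is illustrative):

import numpy as np

def association_ground_truth(mask_i, mask_j):
    # mask_i, mask_j: binary liver masks of the two slice samples.
    mask_i = mask_i.astype(bool)
    mask_j = mask_j.astype(bool)
    label = np.zeros(mask_i.shape, dtype=np.uint8)  # 0: background region
    label[mask_i & mask_j] = 1                      # 1: intersection pixels
    label[mask_i ^ mask_j] = 2                      # 2: difference pixels
    return label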
The central images of FIG. 9 and FIG. 10 are the superimposed maps of the liver region annotated in the first slice sample and that annotated in the second slice sample. In addition, the first and second slice samples in FIG. 9 are sampled from different CT images, whereas those in FIG. 10 are sampled from the same CT image.
For the convergence, a Dice loss function may be used. The Dice loss function $\mathcal{L}$ may specifically be as follows:

$$\mathcal{L} = \lambda_{1}\mathcal{L}^{seg}_{i} + \lambda_{2}\mathcal{L}^{seg}_{j} + \lambda_{3}\mathcal{L}^{fuse}_{ij}$$

where λ_1, λ_2, and λ_3 are manually set hyperparameters used to balance the contribution of each part of the loss to the overall loss. For $(y_{i}^{(s,t)}, p_{i}^{(s,t)})$, $(y_{j}^{(s,t)}, p_{j}^{(s,t)})$, and (y_ij, p_ij), reference may be made to the foregoing embodiments; details are not repeated here.
After the preset image segmentation model converges under this Dice loss function, one round of training is completed. By analogy, after multiple rounds of training, the trained image segmentation model is obtained.

Since during training, in addition to using the ground-truth "liver" annotations of the slice samples to verify the predicted segmentation values, the ground truth of the association relationship between slice samples can also be used to verify the relationship between the two predicted segmentation values (i.e., the predicted association relationship), a further "supervision" effect is achieved; in other words, the segmentation accuracy of the trained image segmentation model can be further improved.

It should be noted that since the part of the image segmentation model "used to determine the association relationship between slice samples" can, during training, exploit information beyond each slice sample's own annotation of the target object (namely, the association relationship between slice samples) to train the model and thereby learn prior knowledge of shape (prior knowledge: knowledge that can be exploited by a machine learning algorithm), this part may also be called the proxy supervision part; details are not repeated here.
(II) The medical image to be segmented can then be segmented by using the trained image segmentation model.

The trained image segmentation model includes a residual network, a segmentation network, and a fusion network, where the residual network may include a first residual network branch and a second residual network branch, and the segmentation network includes a first segmentation network branch and a second segmentation network branch.
As shown in FIG. 11, a specific procedure of a medical image segmentation method may be as follows:

201. The electronic device acquires a medical image to be segmented.

For example, the electronic device may receive medical images sent by medical image acquisition devices, such as an MRI or CT scanner, after imaging a human liver, and use these medical images as the medical images to be segmented.

Optionally, the received medical images may also be preprocessed, for example, by deduplication, cropping, rotation, and/or flipping.

202. The electronic device samples two slices from the medical image to be segmented to obtain the slice pair currently to be segmented.

For example, the electronic device may consecutively sample two slices along the z-axis at a certain interval to form a slice pair, or may randomly sample two slices along the z-axis at a certain interval to form a slice pair, and so on.
Optionally, to provide a sufficient receptive field, sampling may be performed in units of overlapping patches (patch-wise) when sampling slices.

Patch-wise (a patch) is one of the basic units of an image. There are several such units; besides patch-wise, there are also pixel-wise and image-wise. Pixel-wise is the pixel level, i.e., what is commonly called a "pixel"; image-wise is the image level (taking one whole image as the unit); and patch-wise refers to a region between the pixel level and the image level, where each patch consists of many pixels.

For example, as shown in FIG. 12, when sampling slices, sampling may be performed patch by patch, where the currently sampled patch partially overlaps the previous patch; for example, patch 2 overlaps patch 1, and patch 3 overlaps patch 2, and so on. The size of the overlap region can be set according to the needs of the actual application.
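A sketch of such overlapping patch-wise sampling; the patch size and stride (and hence the overlap, patch − stride) are illustrative values to be set per application:

def patch_starts(length, patch, stride):
    # 1-D start offsets; consecutive patches overlap by (patch - stride),
    # and the last patch is clamped to the border so nothing is missed.
    starts = list(range(0, max(length - patch, 0) + 1, stride))
    if length > patch and starts[-1] != length - patch:
        starts.append(length - patch)
    return starts

def sample_patches(img, patch=256, stride=192):
    h, w = img.shape[:2]
    return [img[r:r + patch, c:c + patch]
            for r in patch_starts(h, patch, stride)
            for c in patch_starts(w, patch, stride)]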
In addition, it should be noted that the two slices sampled into a slice pair may be non-overlapping, partially overlapping, or even fully overlapping (i.e., the same slice). It should be understood that, in the trained image segmentation model, the parameters of identical network structures in different branches may differ (for example, the parameters of the ASPP modules in different branches are not shared), so different branches may output different initial segmentation results for the same input; hence, it is meaningful even if the two input slices are identical.
203. The electronic device performs feature extraction on each slice in the slice pair through the residual network in the trained image segmentation model to obtain the high-level feature information and low-level feature information of each slice.

For example, as shown in FIG. 8, taking a slice pair including a first slice and a second slice as an example, step 203 may specifically be as follows:

The electronic device uses the first residual network branch in the residual network, such as a ResNet-18, to perform feature extraction on the first slice, obtaining the high-level feature information and the low-level feature information at different scales corresponding to the first slice, and then uses an ASPP module to process the high-level feature information corresponding to the first slice, obtaining high-level feature information at multiple scales for the first slice.

Likewise, the electronic device uses the second residual network branch in the residual network, such as another ResNet-18, to perform feature extraction on the second slice, obtaining the high-level feature information and the low-level feature information at different scales corresponding to the second slice, and then uses another ASPP module to process the high-level feature information corresponding to the second slice, obtaining high-level feature information at multiple scales for the second slice.

It should be noted that the parameters of the first and second residual network branches may be shared, while the parameters of the ASPP modules attached to the two branches may not be shared; the specific parameters may be determined according to the needs of the actual application and are not described in detail here.
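One branch of this encoder could look as follows, assuming PyTorch and the torchvision ResNet-18; which stages are tapped for low-level versus high-level features, and the ASPP dilation rates and channel widths, are assumptions for illustration:

import torch.nn as nn
from torchvision.models import resnet18

class EncoderBranch(nn.Module):
    def __init__(self, aspp_rates=(1, 6, 12, 18), out_ch=256):
        super().__init__()
        net = resnet18(weights=None)
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.low = net.layer1                          # low-level features
        self.high = nn.Sequential(net.layer2, net.layer3, net.layer4)
        # ASPP: parallel dilated 3x3 convs give several effective
        # receptive fields over the same high-level map.
        self.aspp = nn.ModuleList([
            nn.Conv2d(512, out_ch, 3, padding=r, dilation=r)
            for r in aspp_rates])

    def forward(self, x):
        low = self.low(self.stem(x))    # small receptive field, fine detail
        high = self.high(low)           # large receptive field, semantics
        return low, [conv(high) for conv in self.aspp]  # multi-scale high-level

Consistent with the parameter-sharing note above, the two branches would share the ResNet trunk parameters while keeping separate ASPP modules.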
204. For each slice in the slice pair, the electronic device segments the target object in the slice through the segmentation network in the trained image segmentation model according to the slice's low-level and high-level feature information, obtaining the slice's initial segmentation result.

For example, still taking a slice pair including a first slice and a second slice as an example, as shown in FIG. 8, step 204 may specifically be as follows:

The electronic device applies, through the first segmentation network branch, convolutions with a 1×1 kernel to the low-level and high-level feature information of the first slice, upsamples the convolved high-level feature information to the same size as the convolved low-level feature information, and concatenates the two to obtain the concatenated feature information of the first slice. A convolution with a 3×3 kernel is then applied to the concatenated feature information, and the result is upsampled to the size of the first slice, yielding the initial segmentation result of the first slice.

Similarly, the other branch can perform analogous operations: the electronic device applies, through the second segmentation network branch, convolutions with a 1×1 kernel to the low-level and high-level feature information of the second slice, upsamples the convolved high-level feature information to the same size as the convolved low-level feature information, and concatenates the two to obtain the concatenated feature information of the second slice; a convolution with a 3×3 kernel is then applied to the concatenated feature information, and the result is upsampled to the size of the second slice, yielding the initial segmentation result of the second slice.
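One segmentation branch of this decoder could be sketched as follows; the channel widths and the placement of the final classifier are assumptions, and the multi-scale high-level maps from the encoder are assumed to have already been merged into a single tensor (for example by concatenation):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentationBranch(nn.Module):
    def __init__(self, low_ch, high_ch, mid_ch=256):
        super().__init__()
        self.low_proj = nn.Conv2d(low_ch, mid_ch, kernel_size=1)    # 1x1 conv
        self.high_proj = nn.Conv2d(high_ch, mid_ch, kernel_size=1)  # 1x1 conv
        self.fuse = nn.Conv2d(2 * mid_ch, mid_ch, kernel_size=3, padding=1)
        self.head = nn.Conv2d(mid_ch, 1, kernel_size=1)

    def forward(self, low, high, slice_hw):
        low = self.low_proj(low)
        high = self.high_proj(high)
        # Upsample the high-level map to the low-level size, then concatenate.
        high = F.interpolate(high, size=low.shape[-2:], mode='bilinear',
                             align_corners=False)
        x = self.fuse(torch.cat([low, high], dim=1))   # 3x3 conv on the concat
        # Upsample to the slice size and predict the initial segmentation.
        x = F.interpolate(x, size=slice_hw, mode='bilinear',
                          align_corners=False)
        return torch.sigmoid(self.head(x))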
205. The electronic device fuses the low-level and high-level feature information of the slices in the slice pair through the fusion network in the trained image segmentation model. For example, as shown in FIG. 8, this may specifically be as follows:

On the one hand, the low-level feature information of the first slice and that of the second slice are added element-wise to obtain fused low-level feature information; on the other hand, the high-level feature information of the first slice and that of the second slice are added element-wise to obtain fused high-level feature information. Then, the channel attention module in the fusion network of the trained segmentation model processes the fused low-level and fused high-level feature information to obtain processed feature information, and the processed feature information and the fused high-level feature information are added element-wise to obtain the fused feature information.
It should be noted that steps 204 and 205 may be performed in any order.

206. The electronic device determines the association information between the slices in the slice pair from the fused feature information through the fusion network in the trained image segmentation model.

For example, the electronic device may select the features belonging to the liver region from the fused feature information, determine from the selected features the foreground region (i.e., the region where the liver is located) of the first slice and that of the second slice, and take the remaining region, other than the union of the two slices' foreground regions, as the background region of the slice pair. Then, the pixels in the fused feature information that belong to the foreground region of only one of the two slices are taken as the difference pixels of the slice pair, and the pixels belonging to both slices' foreground regions are taken as the intersection pixels. Thereafter, pixel-type identification is performed on the background region, difference pixels, and intersection pixels of the slice pair — for example, by marking these regions with different pixel values or with different colors — yielding the association information between the first and second slices.

Optionally, the operation of determining the association information between the slices in the slice pair from the fused feature information can be implemented by various network structures. For example, as shown in FIG. 8, a convolutional layer with a 3×3 kernel may be applied to the fused feature information, and the convolved fused feature information may then be upsampled to the same size as the input slices (the first and second slices), yielding the association information between the first and second slices, such as the background region of the slice pair, the intersection pixels of the two slices, and the difference pixels of the two slices.
207. The electronic device generates the segmentation result of the slice pair based on the association information between the slices in the slice pair and their initial segmentation results.

For example, still taking a slice pair including a first slice and a second slice as an example, the segmentation result of the second slice may be predicted from the association information between the slices and the initial segmentation result of the first slice, yielding the predicted segmentation result of the second slice; and the segmentation result of the first slice may be predicted from the association information and the initial segmentation result of the second slice, yielding the predicted segmentation result of the first slice. Then, the predicted and initial segmentation results of the first slice are averaged to obtain the adjusted segmentation result of the first slice, and the predicted and initial segmentation results of the second slice are averaged to obtain the adjusted segmentation result of the second slice. Further, the adjusted segmentation results of the first and second slices are fused — for example, averaged — and the averaged result is binarized, yielding the segmentation result of the slice pair.
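How the cross-prediction works at the pixel level is not spelled out at this point, so the following sketch adopts one plausible rule: intersection pixels are foreground in both slices, and difference pixels are foreground in exactly one, so the other slice's mask flips there. That rule and the NumPy layout are assumptions; only the averaging, fusion, and binarization steps follow the text above:

import numpy as np

def pair_result(m_i, m_j, inter, diff, thr=0.5):
    # m_i, m_j: (H, W) initial segmentation probabilities of the two slices.
    # inter, diff: (H, W) predicted intersection / difference maps.
    pred_j = np.clip(inter + diff * (1.0 - m_i), 0.0, 1.0)  # cross-predict j
    pred_i = np.clip(inter + diff * (1.0 - m_j), 0.0, 1.0)  # cross-predict i
    adj_i = (pred_i + m_i) / 2.0   # average prediction with initial result
    adj_j = (pred_j + m_j) / 2.0
    fused = (adj_i + adj_j) / 2.0  # fuse the two adjusted results
    return (fused >= thr).astype(np.uint8)  # binarize -> pair's result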
Thereafter, the electronic device may return to step 202 to sample another two slices from the medical image to be segmented as the slice pair currently to be segmented, and process them in the manner of steps 203–207 to obtain the corresponding segmentation result. By analogy, after the segmentation results of all slice pairs in the medical image are obtained, these results are combined in slice order to obtain the segmentation result of the medical image to be segmented (i.e., the 3D segmentation result).

As can be seen from the above, in this embodiment, an image segmentation model can be trained in advance using slice pair samples and the association relationship between the slice samples in them (prior knowledge and other information). Then, after the medical image to be segmented is acquired, feature extraction can be performed, through the image segmentation model, on a slice pair of the medical image using different receptive fields to obtain the high-level and low-level feature information of each slice in the slice pair. Then, on the one hand, for each slice in the slice pair, the liver region in the slice is segmented according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result; on the other hand, the low-level and high-level feature information of the slices are fused, and the association information between the slices is determined from the fused feature information. The obtained association information is then used to adjust the initial segmentation results of the slices, yielding the finally required segmentation result. Since during training, in addition to the annotation information in individual slices, other prior knowledge such as the association relationship between slices is also used as learning data — supervising the accuracy of slice segmentation — the segmentation precision of the image segmentation model can be improved. In addition, the introduction of the fusion network can prevent model overfitting caused by shape variations of the segmented object when training samples are few.

Moreover, when the trained image segmentation model is used, considering that the slices of a 3D medical image are correlated, the model can segment two slices (a slice pair) at the same time and use the association relationship between the slices to further adjust the segmentation results, ensuring that the shape information of the target object (such as the liver) is captured more accurately, yielding higher segmentation accuracy.
In order to better implement the above method, an embodiment of this application further provides a medical image segmentation apparatus, which may be integrated in an electronic device, such as a server or a terminal; the terminal may include a tablet computer, a notebook computer, a personal computer, a medical image acquisition device, an electronic medical device, or the like.

For example, as shown in FIG. 13, the medical image segmentation apparatus may include an acquisition unit 301, an extraction unit 302, a segmentation unit 303, a fusion unit 304, a determining unit 305, and a generation unit 306, as follows:
(1) Acquisition unit 301;

The acquisition unit 301 is configured to acquire a slice pair, the slice pair including two slices sampled from a medical image to be segmented.

For example, the acquisition unit 301 may specifically be configured to acquire the medical image to be segmented and sample two slices from it to form a slice pair.

The medical image to be segmented may be provided to the acquisition unit 301 after various medical image acquisition devices, such as an MRI or CT scanner, capture images of biological tissue such as the heart or liver.
(2) Extraction unit 302;

The extraction unit 302 is configured to perform feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice.

Feature extraction with different receptive fields may be performed on a slice in multiple ways, for example, through a residual network, that is:

The extraction unit 302 may specifically be configured to perform feature extraction on each slice in the slice pair through the residual network in the trained segmentation model to obtain the high-level and low-level feature information of each slice.

For example, taking an example in which the slice pair includes a first slice and a second slice and the residual network includes a first residual network branch and a second residual network branch that are parallel and identical in structure, the extraction unit 302 may use the first residual network branch to perform feature extraction on the first slice, obtaining high-level feature information at different scales and low-level feature information at different scales corresponding to the first slice; and use the second residual network branch to perform feature extraction on the second slice, obtaining high-level feature information at different scales and low-level feature information at different scales corresponding to the second slice.

The network structures of the first and second residual network branches may be determined according to the needs of the actual application; for example, ResNet-18 may be used. In addition, the parameters of the first residual network branch and those of the second residual network branch may be shared, and the specific parameter settings may be determined according to the needs of the actual application.

Optionally, to obtain high-level feature information at more scales, spatial pyramid pooling such as ASPP may further be applied to the obtained high-level feature information; for details, refer to the foregoing method embodiments, which are not repeated here.
(3) Segmentation unit 303;

The segmentation unit 303 is configured to, for each slice in the slice pair, segment the target object in the slice according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result.

For example, the segmentation unit 303 may specifically be configured to, for each slice in the slice pair, segment the target object in the slice through the segmentation network in the trained segmentation model according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result; for example, it may specifically be configured as follows:

for each slice in the slice pair, apply convolutions, through the segmentation network in the trained segmentation model, to the slice's low-level feature information and high-level feature information respectively; upsample the convolved high-level feature information to the same size as the convolved low-level feature information and concatenate the two to obtain concatenated feature information; and select, from the concatenated feature information, the pixels belonging to the target object in the slice to obtain the slice's initial segmentation result. For details, refer to the foregoing method embodiments, which are not repeated here.
(4) Fusion unit 304;

The fusion unit 304 is configured to fuse the low-level and high-level feature information of the slices in the slice pair.

For example, the fusion unit 304 may specifically be configured to fuse the low-level and high-level feature information of the slices in the slice pair through the fusion network in the trained segmentation model.

There are multiple methods of fusing low-level feature information with high-level feature information; for example, fusion may be performed by element-wise addition (Sum) or by channel concatenation. Taking element-wise addition as an example, the fusion unit 304 may specifically be configured to:

add the low-level feature information of the slices in the slice pair element-wise to obtain fused low-level feature information; add the high-level feature information of the slices in the slice pair element-wise to obtain fused high-level feature information; and fuse the fused low-level feature information and the fused high-level feature information through the fusion network in the trained segmentation model to obtain the fused feature information.

Optionally, the fused low-level and fused high-level feature information may be fused in multiple ways, for example, as follows:

The fusion unit 304 may specifically be configured to add the fused low-level feature information and the fused high-level feature information element-wise through the fusion network in the trained segmentation model to obtain the fused feature information.

Optionally, since different features play different roles in a specific task, in order to make better use of the features and improve the accuracy of image segmentation, an attention mechanism may also be used to let the network automatically assign different weights to different feature information, so that the network can fuse the feature information selectively. That is:

The fusion unit 304 may specifically be configured to assign, through the channel attention module in the fusion network of the trained segmentation model, weights to the fused low-level feature information according to the fused low-level and fused high-level feature information to obtain weighted feature information; multiply the weighted feature information and the fused low-level feature information element-wise to obtain processed feature information; and add the processed feature information and the fused high-level feature information element-wise to obtain the fused feature information.

The structure of the channel attention module may be determined according to the needs of the actual application and is not described in detail here.
(5) Determining unit 305;

The determining unit 305 is configured to determine the association information between the slices in the slice pair from the fused feature information.

The target object refers to the object that needs to be recognized in a slice, for example, the "liver" in liver image segmentation, the "heart" in heart image segmentation, and so on.

For example, the determining unit 305 may include a selection subunit and a determining subunit, as follows:

The selection subunit may be configured to select the features belonging to the target object from the fused feature information.

The determining subunit may be configured to determine the association information between the slices from the selected features. For example, this may specifically be as follows:

The determining subunit may specifically be configured to determine the background region and foreground region of each slice in the slice pair from the selected features, compute the difference pixels and intersection pixels of the slices' foreground regions, and generate the association information between the slices in the slice pair from the background region, difference pixels, and intersection pixels.

For instance, the determining subunit may specifically be configured to take, in the fused feature information, the pixels belonging to the foreground region of only one slice of the slice pair as the difference pixels, and the pixels belonging to the foreground regions of both slices of the slice pair as the intersection pixels.

For another instance, the determining subunit may specifically be configured to perform pixel-type identification on the background region, difference pixels, and intersection pixels to obtain the association information between the slices. For details, refer to the foregoing method embodiments, which are not repeated here.
(6) Generation unit 306;

The generation unit 306 is configured to generate the segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair.

For example, taking a slice pair including a first slice and a second slice as an example, the generation unit 306 may specifically be configured to:

predict the segmentation result of the second slice from the association information and the initial segmentation result of the first slice, obtaining the predicted segmentation result of the second slice; predict the segmentation result of the first slice from the association information and the initial segmentation result of the second slice, obtaining the predicted segmentation result of the first slice; adjust the initial segmentation result of the first slice based on its predicted segmentation result, obtaining the adjusted segmentation result of the first slice; adjust the initial segmentation result of the second slice based on its predicted segmentation result, obtaining the adjusted segmentation result of the second slice; and fuse the adjusted segmentation results of the first and second slices to obtain the segmentation result of the slice pair.

For example, the generation unit 306 may specifically be configured to average the predicted and initial segmentation results of the first slice to obtain the adjusted segmentation result of the first slice, and average the predicted and initial segmentation results of the second slice to obtain the adjusted segmentation result of the second slice.

For another example, the generation unit 306 may specifically be configured to average the adjusted segmentation results of the first and second slices and binarize the averaged result to obtain the segmentation result of the slice pair.
Optionally, the trained image segmentation model may be trained from multiple slice pair samples annotated with ground-truth values. Specifically, it may be set in advance by operation and maintenance personnel, or trained by the image segmentation apparatus itself. That is, as shown in FIG. 14, the image segmentation apparatus may further include a collection unit 307 and a training unit 308;

The collection unit 307 may be configured to collect multiple slice pair samples annotated with ground-truth values.

The slice pair sample includes two slice samples sampled from a medical image sample; for details, refer to the foregoing embodiments, which are not repeated here.

The training unit 308 may be configured to perform feature extraction on each slice sample in the slice pair sample through the residual network in the preset segmentation model to obtain the high-level and low-level feature information of each slice sample; for each slice sample in the slice pair sample, segment the target object in the slice sample through the segmentation network in the preset segmentation model according to the slice sample's low-level and high-level feature information to obtain the slice sample's predicted segmentation value; fuse the low-level and high-level feature information of the slice samples in the slice pair sample through the fusion network in the preset segmentation model, and predict the association information between the slice samples from the fused feature information; and converge the preset segmentation model according to the ground-truth values, the predicted segmentation values of the slice samples, and the predicted association information to obtain the trained segmentation model.

For example, the training unit 308 may specifically be configured to converge the segmentation model through a Dice loss function according to the ground-truth values, the predicted segmentation values of the slice samples in the slice pair sample, and the predicted association information to obtain the trained segmentation model.

For the Dice loss function, refer to the foregoing method embodiments; details are not repeated here.

In specific implementation, each of the above units may be implemented as an independent entity, or combined arbitrarily and implemented as one or several entities; for the specific implementation of each unit, refer to the foregoing method embodiments, which are not repeated here.
As can be seen from the above, in this embodiment, after a slice pair is obtained, the extraction unit 302 can perform feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level and low-level feature information of each slice. Then, on the one hand, the segmentation unit 303 segments, for each slice in the slice pair, the target object in the slice according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result; on the other hand, the fusion unit 304 fuses the low-level and high-level feature information of the slices, and the determining unit 305 determines the association information between the slices from the fused feature information. The generation unit 306 then generates the segmentation result of the slice pair based on the association information between the slices and their initial segmentation results. Considering that the slices of a 3D medical image are correlated, the apparatus provided in the embodiments of this application segments two slices (a slice pair) at the same time and uses the association relationship between the slices to further adjust the segmentation results, ensuring that the shape information of the target object (such as the liver) is captured more accurately, yielding higher segmentation accuracy.
An embodiment of this application further provides an electronic device. FIG. 15 is a schematic structural diagram of the electronic device involved in this embodiment of this application. Specifically:

The electronic device may include components such as a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, and an input unit 404. A person skilled in the art will understand that the electronic device structure shown in FIG. 15 does not constitute a limitation on the electronic device; the device may include more or fewer components than shown, combine certain components, or have a different component arrangement. Here:

The processor 401 is the control center of the electronic device, connecting all parts of the entire electronic device through various interfaces and lines. By running or executing software programs and/or modules stored in the memory 402 and invoking data stored in the memory 402, it executes the various functions of the electronic device and processes data, thereby monitoring the electronic device as a whole. Optionally, the processor 401 may include one or more processing cores; preferably, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communication. It will be understood that the modem processor may alternatively not be integrated into the processor 401.

The memory 402 may be configured to store software programs and modules; the processor 401 runs the software programs and modules stored in the memory 402 to execute various functional applications and data processing. The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system and the application programs required by at least one function (such as a sound playback function and an image playback function), and the data storage area may store data created from the use of the electronic device. In addition, the memory 402 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory 402 may further include a memory controller to provide the processor 401 with access to the memory 402.

The electronic device further includes the power supply 403 that supplies power to the components. Preferably, the power supply 403 may be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system. The power supply 403 may further include any components such as one or more direct-current or alternating-current power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, and a power status indicator.

The electronic device may further include the input unit 404, which may be configured to receive input digit or character information and generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.

Although not shown, the electronic device may further include a display unit and the like, which are not described in detail here. Specifically, in this embodiment, the processor 401 in the electronic device loads executable files corresponding to the processes of one or more application programs into the memory 402 according to the following instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing various functions, as follows:
acquiring a slice pair, the slice pair including two slices sampled from a medical image to be segmented;

performing feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice;

for each slice in the slice pair, segmenting the target object in the slice according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result;

fusing the low-level and high-level feature information of the slices in the slice pair, and determining the association information between the slices in the slice pair from the fused feature information; and

generating the segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair.
For example, feature extraction may specifically be performed on each slice in the slice pair through the residual network in the trained segmentation model to obtain the high-level and low-level feature information of each slice; then, for each slice in the slice pair, the target object in the slice is segmented through the segmentation network in the trained segmentation model according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result; the low-level and high-level feature information of the slices in the slice pair are fused through the fusion network in the trained segmentation model, and the association information between the slices in the slice pair is determined from the fused feature information; and then the segmentation result of the slice pair is generated based on the association information and the initial segmentation results of the slices in the slice pair.

Optionally, the trained segmentation model may be trained from multiple slice pair samples annotated with ground-truth values; specifically, it may be set in advance by operation and maintenance personnel, or trained by the image segmentation apparatus itself. That is, the processor 401 may also run the application programs stored in the memory 402 to implement the following functions:

collecting multiple slice pair samples annotated with ground-truth values;

performing feature extraction on each slice sample in the slice pair sample through the residual network in the preset segmentation model to obtain the high-level and low-level feature information of each slice sample;

for each slice sample in the slice pair sample, segmenting the target object in the slice sample through the segmentation network in the preset segmentation model according to the slice sample's low-level and high-level feature information to obtain the slice sample's predicted segmentation value;

fusing the low-level and high-level feature information of the slice samples in the slice pair sample through the fusion network in the preset segmentation model, and predicting the association information between the slice samples from the fused feature information; and

converging the preset segmentation model according to the ground-truth values, the predicted segmentation values of the slice samples in the slice pair sample, and the predicted association information to obtain the trained segmentation model.

For each of the above operations, refer to the foregoing embodiments; details are not repeated here.
As can be seen from the above, after obtaining a slice pair, the electronic device of this embodiment can perform feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level and low-level feature information of each slice; then, on the one hand, for each slice in the slice pair, segment the target object in the slice according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result; on the other hand, fuse the low-level and high-level feature information of the slices and determine the association information between the slices from the fused feature information; and then use the obtained association information to adjust the initial segmentation results of the slices to obtain the finally required segmentation result. Considering that the slices of a 3D medical image are correlated, the method provided in the embodiments of this application segments two slices (a slice pair) at the same time and uses the association relationship between the slices to further adjust the segmentation results, ensuring that the shape information of the target object (such as the liver) is captured more accurately, yielding higher segmentation accuracy.
A person of ordinary skill in the art will understand that all or some of the steps of the various methods in the foregoing embodiments can be completed by instructions, or by instructions controlling related hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.

To this end, an embodiment of this application provides a storage medium storing multiple instructions that can be loaded by a processor to perform the steps of any medical image segmentation method provided in the embodiments of this application. For example, the instructions may perform the following steps:

acquiring a slice pair, the slice pair including two slices sampled from a medical image to be segmented;

performing feature extraction on each slice in the slice pair using different receptive fields to obtain the high-level feature information and low-level feature information of each slice;

for each slice in the slice pair, segmenting the target object in the slice according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result;

fusing the low-level and high-level feature information of the slices in the slice pair, and determining the association information between the slices in the slice pair from the fused feature information; and

generating the segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair.
For example, feature extraction may specifically be performed on each slice in the slice pair through the residual network in the trained segmentation model to obtain the high-level and low-level feature information of each slice; then, for each slice in the slice pair, the target object in the slice is segmented through the segmentation network in the trained segmentation model according to the slice's low-level and high-level feature information to obtain the slice's initial segmentation result; the low-level and high-level feature information of the slices in the slice pair are fused through the fusion network in the trained segmentation model, and the association information between the slices in the slice pair is determined from the fused feature information; and then the segmentation result of the slice pair is generated based on the association information and the initial segmentation results of the slices in the slice pair.

Optionally, the trained segmentation model may be trained from multiple slice pair samples annotated with ground-truth values; specifically, it may be set in advance by operation and maintenance personnel, or trained by the image segmentation apparatus itself. That is, the instructions may further perform the following steps:

collecting multiple slice pair samples annotated with ground-truth values;

performing feature extraction on each slice sample in the slice pair sample through the residual network in the preset segmentation model to obtain the high-level and low-level feature information of each slice sample;

for each slice sample in the slice pair sample, segmenting the target object in the slice sample through the segmentation network in the segmentation model according to the slice sample's low-level and high-level feature information to obtain the slice sample's predicted segmentation value;

fusing the low-level and high-level feature information of the slice samples in the slice pair sample through the fusion network in the segmentation model, and predicting the association information between the slice samples from the fused feature information; and

converging the preset segmentation model according to the ground-truth values, the predicted segmentation values of the slice samples in the slice pair sample, and the predicted association information to obtain the trained segmentation model.
For the specific implementation of each of the above operations, refer to the foregoing embodiments; details are not repeated here.

The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.

Since the instructions stored in the storage medium can perform the steps of any medical image segmentation method provided in the embodiments of this application, they can achieve the beneficial effects achievable by any such method; for details, refer to the foregoing embodiments, which are not repeated here.

The medical image segmentation method and apparatus, the electronic device, and the storage medium provided in the embodiments of this application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present disclosure, and the descriptions of the above embodiments are merely intended to help understand the method and core ideas of the present disclosure. Meanwhile, a person skilled in the art will make changes to the specific implementations and application scope according to the ideas of the present disclosure. In conclusion, the content of this specification should not be construed as a limitation on the present disclosure.

Claims (29)

  1. A medical image segmentation method, performed by an electronic device, the method comprising:
    acquiring a slice pair, the slice pair comprising two slices sampled from a medical image to be segmented;
    performing feature extraction on each slice in the slice pair using different receptive fields to obtain high-level feature information and low-level feature information of each slice in the slice pair;
    for each slice in the slice pair, segmenting a target object in the slice according to the low-level feature information and the high-level feature information of the slice to obtain an initial segmentation result of the slice;
    fusing the low-level feature information and the high-level feature information of the slices in the slice pair, and determining association information between the slices in the slice pair according to fused feature information; and
    generating a segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair.
  2. The method according to claim 1, wherein the determining association information between the slices in the slice pair according to fused feature information comprises:
    selecting features belonging to the target object from the fused feature information; and
    determining the association information between the slices in the slice pair according to the selected features.
  3. The method according to claim 2, wherein the determining the association information between the slices in the slice pair according to the selected features comprises:
    determining a background region and a foreground region of each slice in the slice pair according to the selected features;
    computing difference pixels and intersection pixels of the foreground regions of the slices in the slice pair; and
    generating the association information between the slices in the slice pair according to the background region, the difference pixels, and the intersection pixels.
  4. The method according to claim 3, wherein the computing difference pixels and intersection pixels of the foreground regions of the slices in the slice pair comprises:
    taking, in the fused feature information, pixels belonging to the foreground region of only one slice of the slice pair as the difference pixels; and
    taking, in the fused feature information, pixels belonging to the foreground regions of both slices of the slice pair as the intersection pixels.
  5. The method according to claim 3, wherein the generating the association information between the slices in the slice pair according to the background region, the difference pixels, and the intersection pixels comprises:
    performing pixel-type identification on the background region, the difference pixels, and the intersection pixels to obtain the association information between the slices in the slice pair.
  6. The method according to any one of claims 1 to 5, wherein the slice pair comprises a first slice and a second slice, and the generating a segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair comprises:
    predicting a segmentation result of the second slice according to the association information and the initial segmentation result of the first slice to obtain a predicted segmentation result of the second slice;
    predicting a segmentation result of the first slice according to the association information and the initial segmentation result of the second slice to obtain a predicted segmentation result of the first slice;
    adjusting the initial segmentation result of the first slice based on the predicted segmentation result of the first slice to obtain an adjusted segmentation result of the first slice;
    adjusting the initial segmentation result of the second slice based on the predicted segmentation result of the second slice to obtain an adjusted segmentation result of the second slice; and
    fusing the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice to obtain the segmentation result of the slice pair.
  7. The method according to claim 6, wherein the adjusting the initial segmentation result of the first slice based on the predicted segmentation result of the first slice to obtain an adjusted segmentation result of the first slice comprises:
    averaging the predicted segmentation result of the first slice and the initial segmentation result of the first slice to obtain the adjusted segmentation result of the first slice; and
    the adjusting the initial segmentation result of the second slice based on the predicted segmentation result of the second slice to obtain an adjusted segmentation result of the second slice comprises:
    averaging the predicted segmentation result of the second slice and the initial segmentation result of the second slice to obtain the adjusted segmentation result of the second slice.
  8. The method according to claim 6, wherein the fusing the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice to obtain the segmentation result of the slice pair comprises:
    averaging the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice, and binarizing the averaged result to obtain the segmentation result of the slice pair.
  9. The method according to any one of claims 1 to 5, wherein the performing feature extraction on each slice in the slice pair using different receptive fields to obtain high-level feature information and low-level feature information of each slice in the slice pair comprises:
    performing feature extraction on each slice in the slice pair through a residual network in a trained segmentation model to obtain the high-level feature information and the low-level feature information of each slice;
    the for each slice in the slice pair, segmenting a target object in the slice according to the low-level feature information and the high-level feature information of the slice to obtain an initial segmentation result of the slice comprises:
    for each slice in the slice pair, segmenting the target object in the slice through a segmentation network in the trained segmentation model according to the low-level feature information and the high-level feature information of the slice to obtain the initial segmentation result of the slice; and
    the fusing the low-level feature information and the high-level feature information of the slices in the slice pair, and determining association information between the slices in the slice pair according to fused feature information comprises:
    fusing, through a fusion network in the trained segmentation model, the low-level feature information and the high-level feature information of the slices in the slice pair, and determining the association information between the slices in the slice pair according to the fused feature information.
  10. The method according to claim 9, wherein the fusing, through a fusion network in the trained segmentation model, the low-level feature information and the high-level feature information of the slices in the slice pair comprises:
    adding the low-level feature information of the slices in the slice pair element-wise to obtain fused low-level feature information;
    adding the high-level feature information of the slices in the slice pair element-wise to obtain fused high-level feature information; and
    fusing the fused low-level feature information and the fused high-level feature information through the fusion network in the trained segmentation model to obtain the fused feature information.
  11. The method according to claim 10, wherein the fusing the fused low-level feature information and the fused high-level feature information through the fusion network in the trained segmentation model to obtain the fused feature information comprises:
    adding the fused low-level feature information and the fused high-level feature information element-wise through the fusion network in the trained segmentation model to obtain the fused feature information; or
    assigning, through a channel attention module in the fusion network of the trained segmentation model, weights to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information to obtain weighted feature information; multiplying the weighted feature information and the fused low-level feature information element-wise to obtain processed feature information; and adding the processed feature information and the fused high-level feature information element-wise to obtain the fused feature information.
  12. The method according to claim 9, wherein the for each slice in the slice pair, segmenting the target object in the slice through a segmentation network in the trained segmentation model according to the low-level feature information and the high-level feature information of the slice to obtain the initial segmentation result of the slice comprises:
    for each slice in the slice pair, performing convolution, through the segmentation network in the trained segmentation model, on the low-level feature information and the high-level feature information of the slice respectively; upsampling the convolved high-level feature information to the same size as the convolved low-level feature information, and concatenating the two to obtain concatenated feature information; and selecting, according to the concatenated feature information, pixels belonging to the target object in the slice to obtain the initial segmentation result of the slice.
  13. The method according to claim 9, wherein before the performing feature extraction on each slice in the slice pair through a residual network in a trained segmentation model to obtain the high-level feature information and the low-level feature information of each slice, the method further comprises:
    collecting multiple slice pair samples annotated with ground-truth values, the slice pair sample comprising two slice samples sampled from a medical image sample;
    performing feature extraction on each slice sample in the slice pair sample through a residual network in a preset segmentation model to obtain high-level feature information and low-level feature information of each slice sample;
    for each slice sample in the slice pair sample, segmenting a target object in the slice sample through a segmentation network in the preset segmentation model according to the low-level feature information and the high-level feature information of the slice sample to obtain a predicted segmentation value of the slice sample;
    fusing, through a fusion network in the preset segmentation model, the low-level feature information and the high-level feature information of the slice samples in the slice pair sample, and predicting association information between the slice samples in the slice pair sample according to fused feature information; and
    converging the preset segmentation model according to the ground-truth values, the predicted segmentation values of the slice samples in the slice pair sample, and the predicted association information to obtain the trained segmentation model.
  14. A medical image segmentation apparatus, comprising:
    an acquisition unit, configured to acquire a slice pair, the slice pair comprising two slices sampled from a medical image to be segmented;
    an extraction unit, configured to perform feature extraction on each slice in the slice pair using different receptive fields to obtain high-level feature information and low-level feature information of each slice in the slice pair;
    a segmentation unit, configured to, for each slice in the slice pair, segment a target object in the slice according to the low-level feature information and the high-level feature information of the slice to obtain an initial segmentation result of the slice;
    a fusion unit, configured to fuse the low-level feature information and the high-level feature information of the slices in the slice pair;
    a determining unit, configured to determine association information between the slices in the slice pair according to fused feature information; and
    a generation unit, configured to generate a segmentation result of the slice pair based on the association information and the initial segmentation results of the slices in the slice pair.
  15. The apparatus according to claim 14, wherein the determining unit comprises a selection subunit and a determining subunit;
    the selection subunit is configured to select features belonging to the target object from the fused feature information; and
    the determining subunit is configured to determine the association information between the slices in the slice pair according to the selected features.
  16. The apparatus according to claim 15, wherein the determining subunit is specifically configured to:
    determine a background region and a foreground region of each slice in the slice pair according to the selected features;
    compute difference pixels and intersection pixels of the foreground regions of the slices in the slice pair; and
    generate the association information between the slices in the slice pair according to the background region, the difference pixels, and the intersection pixels.
  17. The apparatus according to claim 16, wherein the determining subunit is specifically configured to:
    take, in the fused feature information, pixels belonging to the foreground region of only one slice of the slice pair as the difference pixels; and
    take, in the fused feature information, pixels belonging to the foreground regions of both slices of the slice pair as the intersection pixels.
  18. The apparatus according to claim 16, wherein the determining subunit is specifically configured to:
    perform pixel-type identification on the background region, the difference pixels, and the intersection pixels to obtain the association information between the slices in the slice pair.
  19. The apparatus according to any one of claims 14 to 18, wherein the slice pair comprises a first slice and a second slice, and the generation unit is specifically configured to:
    predict a segmentation result of the second slice according to the association information and the initial segmentation result of the first slice to obtain a predicted segmentation result of the second slice;
    predict a segmentation result of the first slice according to the association information and the initial segmentation result of the second slice to obtain a predicted segmentation result of the first slice;
    adjust the initial segmentation result of the first slice based on the predicted segmentation result of the first slice to obtain an adjusted segmentation result of the first slice;
    adjust the initial segmentation result of the second slice based on the predicted segmentation result of the second slice to obtain an adjusted segmentation result of the second slice; and
    fuse the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice to obtain the segmentation result of the slice pair.
  20. The apparatus according to claim 19, wherein the generation unit is specifically configured to:
    average the predicted segmentation result of the first slice and the initial segmentation result of the first slice to obtain the adjusted segmentation result of the first slice; and
    average the predicted segmentation result of the second slice and the initial segmentation result of the second slice to obtain the adjusted segmentation result of the second slice.
  21. The apparatus according to claim 19, wherein the generation unit is specifically configured to:
    average the adjusted segmentation result of the first slice and the adjusted segmentation result of the second slice, and binarize the averaged result to obtain the segmentation result of the slice pair.
  22. The apparatus according to any one of claims 14 to 18, wherein the extraction unit is specifically configured to:
    perform feature extraction on each slice in the slice pair through a residual network in a trained segmentation model to obtain the high-level feature information and the low-level feature information of each slice;
    the segmentation unit is specifically configured to:
    for each slice in the slice pair, segment the target object in the slice through a segmentation network in the trained segmentation model according to the low-level feature information and the high-level feature information of the slice to obtain the initial segmentation result of the slice; and
    the fusion unit is specifically configured to:
    fuse, through a fusion network in the trained segmentation model, the low-level feature information and the high-level feature information of the slices in the slice pair, and determine the association information between the slices in the slice pair according to the fused feature information.
  23. The apparatus according to claim 22, wherein the fusion unit is specifically configured to:
    add the low-level feature information of the slices in the slice pair element-wise to obtain fused low-level feature information;
    add the high-level feature information of the slices in the slice pair element-wise to obtain fused high-level feature information; and
    fuse the fused low-level feature information and the fused high-level feature information through the fusion network in the trained segmentation model to obtain the fused feature information.
  24. The apparatus according to claim 23, wherein the fusion unit is specifically configured to:
    add the fused low-level feature information and the fused high-level feature information element-wise through the fusion network in the trained segmentation model to obtain the fused feature information; or
    assign, through a channel attention module in the fusion network of the trained segmentation model, weights to the fused low-level feature information according to the fused low-level feature information and the fused high-level feature information to obtain weighted feature information; multiply the weighted feature information and the fused low-level feature information element-wise to obtain processed feature information; and add the processed feature information and the fused high-level feature information element-wise to obtain the fused feature information.
  25. The apparatus according to claim 22, wherein the segmentation unit is specifically configured to:
    for each slice in the slice pair, perform convolution, through the segmentation network in the trained segmentation model, on the low-level feature information and the high-level feature information of the slice respectively; upsample the convolved high-level feature information to the same size as the convolved low-level feature information, and concatenate the two to obtain concatenated feature information; and select, according to the concatenated feature information, pixels belonging to the target object in the slice to obtain the initial segmentation result of the slice.
  26. The apparatus according to claim 22, wherein the medical image segmentation apparatus further comprises a collection unit and a training unit;
    the collection unit is configured to collect multiple slice pair samples annotated with ground-truth values, the slice pair sample comprising two slice samples sampled from a medical image sample; and
    the training unit is configured to: perform feature extraction on each slice sample in the slice pair sample through a residual network in a preset segmentation model to obtain high-level feature information and low-level feature information of each slice sample; for each slice sample in the slice pair sample, segment a target object in the slice sample through a segmentation network in the preset segmentation model according to the low-level feature information and the high-level feature information of the slice sample to obtain a predicted segmentation value of the slice sample; fuse, through a fusion network in the preset segmentation model, the low-level feature information and the high-level feature information of the slice samples in the slice pair sample, and predict association information between the slice samples in the slice pair sample according to fused feature information; and converge the preset segmentation model according to the ground-truth values, the predicted segmentation values of the slice samples in the slice pair sample, and the predicted association information to obtain the trained segmentation model.
  27. An electronic device, comprising a memory and a processor; the memory storing an application program, and the processor being configured to run the application program in the memory to perform the operations in the medical image segmentation method according to any one of claims 1 to 13.
  28. A storage medium, storing multiple instructions, the instructions being suitable for loading by a processor to perform the steps in the medical image segmentation method according to any one of claims 1 to 13.
  29. A computer program product, comprising instructions that, when run on a computer, cause the computer to perform the steps of the medical image segmentation method according to any one of claims 1 to 13.
PCT/CN2020/081660 2019-04-22 2020-03-27 Medical image segmentation method and apparatus, electronic device, and storage medium WO2020215985A1 (zh)
