WO2020093435A1 - Abdominal image segmentation method, computer device and storage medium - Google Patents

Abdominal image segmentation method, computer device and storage medium (腹部图像分割方法、计算机设备及存储介质)

Info

Publication number
WO2020093435A1
Authority
WO
WIPO (PCT)
Prior art keywords: training, image, abdominal, neural network, convolutional neural
Application number
PCT/CN2018/115798
Other languages
English (en)
French (fr)
Inventor
贾伟平
盛斌
李华婷
潘思源
侯旭宏
吴量
Original Assignee
上海市第六人民医院
上海交通大学
Application filed by 上海市第六人民医院 (Shanghai Sixth People's Hospital) and 上海交通大学 (Shanghai Jiao Tong University)
Priority: US16/471,819 (granted as US11302014B2)
Publication of WO2020093435A1

Classifications

    • G06V10/771 Feature selection, e.g. selecting representative features from a multi-dimensional feature space
    • G06F18/211 Selection of the most significant subset of features
    • G06F18/2163 Partitioning the feature space
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2431 Multiple classes
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T3/02 Affine transformations
    • G06T7/00 Image analysis
    • G06T7/0012 Biomedical image inspection
    • G06T7/11 Region-based segmentation
    • G06T7/174 Segmentation; Edge detection involving the use of two or more images
    • G06V10/764 Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/765 Image or video recognition or understanding using pattern recognition or machine learning, using rules for classification or partitioning the feature space
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30004 Biomedical image processing
    • G06V2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • The present application relates to the field of medical technology, and in particular to an abdominal image segmentation method, a computer device, and a storage medium.
  • Human fat is divided into subcutaneous fat and intra-abdominal fat. The amounts of subcutaneous fat and intra-abdominal fat in the human body are important indicators of a person's health and also serve as reference indicators for detecting certain diseases (such as diabetes).
  • Abdominal fat content is commonly assessed from magnetic resonance imaging (MRI) images, and two segmentation approaches are currently used.
  • The first is manual segmentation of fat by personnel with relevant medical knowledge.
  • The second is segmentation of intra-abdominal fat by computer algorithms.
  • According to various embodiments provided in the present application, an abdominal image segmentation method, a computer device, and a storage medium are proposed.
  • An abdominal image segmentation method includes the following steps:
  • acquire an abdominal image to be tested; and
  • based on a trained fully convolutional neural network, classify each pixel in the abdominal image to be tested, and determine the segmented image corresponding to the abdominal image to be tested;
  • wherein the trained fully convolutional neural network is determined by training based on a first training set and a second training set;
  • the first training set includes each first sample abdominal image and the pixel classification label map corresponding to each first sample abdominal image; and
  • the second training set includes each second sample abdominal image and, for each second sample abdominal image, the number of pixels belonging to each category.
  • A computer device includes a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following steps:
  • acquire an abdominal image to be tested; and
  • based on the trained fully convolutional neural network, classify each pixel in the abdominal image to be tested, and determine the segmented image corresponding to the abdominal image to be tested;
  • wherein the trained fully convolutional neural network is determined by training based on a first training set and a second training set;
  • the first training set includes each first sample abdominal image and the pixel classification label map corresponding to each first sample abdominal image; and
  • the second training set includes each second sample abdominal image and, for each second sample abdominal image, the number of pixels belonging to each category.
  • One or more non-volatile storage media storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the following steps:
  • acquire an abdominal image to be tested; and
  • based on the trained fully convolutional neural network, classify each pixel in the abdominal image to be tested, and determine the segmented image corresponding to the abdominal image to be tested;
  • wherein the trained fully convolutional neural network is determined by training based on a first training set and a second training set;
  • the first training set includes each first sample abdominal image and the pixel classification label map corresponding to each first sample abdominal image; and
  • the second training set includes each second sample abdominal image and, for each second sample abdominal image, the number of pixels belonging to each category.
  • FIG. 1 is a schematic flowchart of an abdominal image segmentation method according to an embodiment
  • FIG. 2 is a schematic diagram of the affine transformation in the abdominal image segmentation method of another embodiment;
  • FIG. 3 is a schematic diagram of a fully convolutional neural network according to another embodiment
  • FIG. 4 is a structural block diagram of a computer device in an embodiment.
  • In an embodiment, an abdominal image segmentation method is provided.
  • This embodiment is mainly illustrated by applying the method to a computer device (that is, the method can be executed by the computer device).
  • The abdominal image segmentation method specifically includes the following steps:
  • First, acquire the abdominal image to be tested: since each pixel in the abdominal image needs to be classified, that is, the category of each pixel needs to be determined, the abdominal image to be tested must be obtained first.
  • The abdominal image to be tested is the abdominal MRI image of the user to be tested.
  • S120: Based on the trained fully convolutional neural network, classify each pixel in the abdominal image to be tested, and determine the segmented image corresponding to the abdominal image to be tested.
  • The trained fully convolutional neural network is determined by training based on the first training set and the second training set.
  • The first training set includes each first sample abdominal image and the pixel classification label map corresponding to each first sample abdominal image.
  • The second training set includes each second sample abdominal image and, for each second sample abdominal image, the number of pixels belonging to each category.
  • The trained fully convolutional neural network is determined in advance and can then be used to classify each pixel in the abdominal image to be tested, thereby achieving image segmentation and determining the segmented image corresponding to the abdominal image to be tested.
  • For example, if the abdominal image has M rows and N columns, it contains M * N pixels. These M * N pixels are classified separately to determine the category of each pixel, and each pixel can be assigned a value corresponding to its category; pixels of the same category then have the same value, thereby achieving image segmentation.
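  • As a concrete illustration of this per-pixel assignment (not code from the publication; all names and shapes are hypothetical), the following sketch turns a map of per-pixel class scores into a segmented image in which pixels of the same category share one value:

```python
import torch

# Hypothetical per-pixel class scores of shape (num_classes, M, N), e.g. the
# output of a fully convolutional network on one abdominal image.
num_classes, M, N = 3, 256, 256
scores = torch.randn(num_classes, M, N)

# Category of each pixel: argmax over the class dimension -> shape (M, N).
label_map = scores.argmax(dim=0)

# Assign each pixel a value corresponding to its category, so that pixels of
# the same category have the same value (here: evenly spaced gray levels).
values = torch.linspace(0, 255, num_classes).long()  # tensor([0, 127, 255])
segmented = values[label_map]                        # segmented image, (M, N)
```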
  • The first sample abdominal images and the second sample abdominal images are both abdominal MRI images.
  • The first training set includes the pixel classification label map of each first sample abdominal image, while the second training set does not include pixel classification label maps corresponding to the second sample abdominal images; instead, it includes, for each second sample abdominal image, the number of pixels belonging to each category.
  • For example, suppose the second training set includes the abdominal image of second sample user A and the abdominal image of second sample user B, and the categories are a first category, a second category, and a third category. The second training set then records the number of pixels in user A's abdominal image belonging to the first category (S1), the second category (S2), and the third category (S3), and likewise the number of pixels belonging to each category in user B's abdominal image.
  • The above categories include an intra-abdominal fat category, a subcutaneous fat category, and a background category.
  • The numbers of pixels belonging to each category for a second sample abdominal image are therefore the numbers of its pixels belonging to the intra-abdominal fat category, the subcutaneous fat category, and the background category. That is, in this embodiment, each pixel in the abdominal image is classified as intra-abdominal fat, subcutaneous fat, or background, and the abdominal image is thereby segmented into intra-abdominal fat, subcutaneous fat, and background regions.
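  • For illustration, the per-category pixel counts stored in the second training set can be computed from a label map as below; the encoding of the three categories as 0, 1, 2 is an assumption, not stated in the publication:

```python
import torch

# Hypothetical label map: 0 = background, 1 = subcutaneous fat, 2 = intra-abdominal fat.
label_map = torch.randint(0, 3, (256, 256))

# Number of pixels belonging to each category (S1, S2, S3 in the example above).
counts = torch.bincount(label_map.flatten(), minlength=3)
```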
  • The above abdominal image segmentation method uses the trained fully convolutional neural network to segment the abdominal image to be tested and obtain the segmented image, which enables effective image segmentation.
  • The trained fully convolutional neural network is determined by training on two different training sets, namely the first training set and the second training set.
  • The first training set includes each first sample abdominal image and the pixel classification label map corresponding to each first sample abdominal image (the label map has the same size as the abdominal image, and each value in it is the classification label of the corresponding pixel of the abdominal image).
  • The second training set includes each second sample abdominal image and, for each second sample abdominal image, the number of pixels belonging to each category. Training on these two different training sets to determine the trained fully convolutional neural network can improve the accuracy of the fully convolutional neural network and further improve the accuracy of abdominal image segmentation.
  • In an embodiment, the method for determining the trained fully convolutional neural network includes: obtaining the first training set and the second training set; initializing an initial fully convolutional neural network, the initial fully convolutional neural network including a convolutional layer and a classification layer; and training the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network.
  • An abdominal image in a training set is input to the convolutional layer for convolution processing, and the result is output to the classification layer.
  • The convolution result produced by the convolutional layer of the fully convolutional neural network is a feature map (that is, a convolution image), and the classification layer classifies it to obtain the training pixel classification label map corresponding to the abdominal image.
  • In the process of training the initial fully convolutional neural network based on the first training set, the initial fully convolutional neural network is updated according to a first training error; the first training error is determined from the training pixel classification label map output by the classification layer after a first sample abdominal image in the first training set is input to the initial fully convolutional neural network, and the pixel classification label map of that first sample abdominal image.
  • In the process of training based on the second training set, the initial fully convolutional neural network is updated according to a second training error; the second training error is determined from the output of a fully connected layer applied to the training convolution image output by the convolutional layer after a second sample abdominal image in the second training set is input to the initial fully convolutional neural network, and the number of pixels belonging to each category in that second sample abdominal image.
  • The convolutional neural network includes various parameters, such as weights and biases.
  • The training process continuously updates these parameters, thereby updating the network. It can be understood that updating the initial fully convolutional neural network means updating its parameters; after training is completed, the parameters are the latest, that is, the parameters in the trained fully convolutional neural network are the final updated values.
  • When training on the two training sets, the data on which the update of the convolutional neural network is based differs.
  • When training based on the first training set, the initial fully convolutional neural network is updated according to the first training error.
  • The first training set includes the pixel classification label map corresponding to each first sample abdominal image, that is, the classification label of each pixel. After convolution processing and classification of a first sample abdominal image, the initial fully convolutional neural network outputs a training pixel classification label map; however, this training pixel classification label map may differ from the pixel classification label map corresponding to the first sample abdominal image.
  • The first training error is therefore determined from the training pixel classification label map output by the classification layer after the first sample abdominal image in the first training set is input to the initial fully convolutional neural network and the pixel classification label map of the first sample abdominal image, and represents the difference between the two.
  • When training based on the second training set, the initial fully convolutional neural network is updated according to the second training error. Since the second training set includes each second sample abdominal image and the number of its pixels belonging to each category, the output of the fully connected layer applied to the convolution result of a second sample abdominal image may differ from the true per-category pixel counts of that image.
  • Specifically, after a second sample abdominal image in the second training set is input to the initial fully convolutional neural network, the convolutional layer outputs a training convolution image, which is used as the input of the fully connected layer; the per-category training pixel counts output by the fully connected layer may differ from the true numbers of pixels of the second sample abdominal image belonging to each category.
  • The second training error represents the difference between the training pixel counts for each category and the true pixel counts for each category.
  • The fully connected layer includes three nodes, corresponding to the number of categories, each node corresponding to one category.
  • The fully connected layer thus outputs three values, namely the number of pixels belonging to each category.
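  • A minimal sketch of such a three-node fully connected layer, taking a flattened training convolution image and producing one pixel-count estimate per category; the feature size and the mean-squared-error loss are assumptions, since the publication does not specify them:

```python
import torch
import torch.nn as nn

feat_dim = 64 * 32 * 32               # hypothetical size of the flattened convolution image
count_head = nn.Linear(feat_dim, 3)   # three nodes, one per category

conv_image = torch.randn(8, feat_dim)                   # hypothetical convolution features
pred_counts = count_head(conv_image)                    # (8, 3) training pixel counts
true_counts = torch.randint(0, 65536, (8, 3)).float()   # true per-category pixel counts

# Second training error: difference between predicted and true pixel counts.
second_training_error = nn.functional.mse_loss(pred_counts, true_counts)
```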
  • In an embodiment, the step of training the initial fully convolutional neural network to obtain the trained fully convolutional neural network includes: determining each first training subset based on the first training set and each second training subset based on the second training set; and selecting, in turn, an untrained standard training subset from the first training subsets and the second training subsets, and training the initial fully convolutional neural network based on each standard training subset to obtain the trained fully convolutional neural network.
  • The standard training subsets selected in two adjacent rounds come from different training sets. It can be understood that when the standard training subset selected in the earlier of two adjacent rounds comes from the first training set (that is, from the first training subsets), the standard training subset selected in the later round comes from the second training set (that is, from the second training subsets); and when the standard training subset selected in the earlier round comes from the second training set, the standard training subset selected in the later round comes from the first training set.
  • That is, the first training subsets and the second training subsets are used for training in alternation, rather than training exclusively on the first training subsets or exclusively on the second training subsets.
  • For example, one of the first training subsets can be used for training first, then one of the second training subsets, then an untrained subset from the first training subsets, then an untrained subset from the second training subsets, and so on, cycling in turn to train the initial fully convolutional neural network.
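  • The alternation described above might look like the following sketch, where train_step is a hypothetical function performing one round of training (forward pass, error computation, backpropagation) on one subset; the cycling and re-marking of exhausted subsets are sketched further below:

```python
def alternate_training(first_subsets, second_subsets, train_step):
    """Alternate rounds between the two training sets: first subset, second
    subset, next first subset, next second subset, and so on."""
    for s1, s2 in zip(first_subsets, second_subsets):
        train_step(s1)  # round trained on a subset of the first training set
        train_step(s2)  # adjacent round trained on a subset of the second set
```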
  • In an embodiment, the method for obtaining the trained fully convolutional neural network includes: selecting a training subset from the first training subsets as the standard training subset; training the initial fully convolutional neural network based on the standard training subset to update it; when the network training stop condition is not met, selecting an untrained training subset as the new standard training subset from whichever of the first training set and the second training set the current standard training subset does not belong to, and returning to the step of training the initial fully convolutional neural network based on the standard training subset and updating it; and, when the network training stop condition is met, taking the updated initial fully convolutional neural network as the trained fully convolutional neural network.
  • For example, the first training subsets of the first training set are J11, J12, J13, and J14,
  • and the second training subsets of the second training set are J21, J22, J23, and J24.
  • Any one of the training subsets J11, J12, J13, and J14 can be selected as the standard training subset;
  • for example, the training subset J11 can be selected as the standard training subset.
  • J11 includes at least part of the first sample abdominal images of the first sample users and the corresponding pixel classification label maps; J11 is then input to the initial fully convolutional neural network for training, that is, to update the initial fully convolutional neural network.
  • If the network training stop condition is not met, an untrained training subset must be selected as the new standard training subset from the second training set (the training set, among the first training set and the second training set, other than the one to which J11 belongs); that is, the standard training subset is updated. For example, any untrained subset among J21, J22, J23, and J24 can be selected, such as J21.
  • The updated standard training subset J21 is used to train the updated initial fully convolutional neural network, updating it again.
  • If the network training stop condition is still not met, an untrained training subset is selected as the standard training subset from the first training set (the training set other than the one to which J21 belongs); that is, the standard training subset is updated again, the updated initial fully convolutional neural network is trained, and the network is updated once more.
  • In this way, training subsets are cyclically selected for network training; training stops when the network training stop condition is satisfied, and the updated initial fully convolutional neural network obtained at that time is the trained fully convolutional neural network.
  • When the number of iterations exceeds the preset number, the network training stop condition is met. At the initial time (that is, before training starts), the number of iterations is zero, and each time the initial fully convolutional neural network completes training on a standard training subset, the number of iterations is increased by one.
  • In an embodiment, the method further includes: marking the standard training subset as trained after it has been used; when each first training subset is marked as trained, re-marking each first training subset as untrained; and when each second training subset is marked as trained, re-marking each second training subset as untrained.
  • That is, if all first training subsets (or all second training subsets) have been used for network training and are marked as trained, but the training stop condition has not yet been met, then the first training subsets (or, correspondingly, the second training subsets) are re-marked as untrained. In this way, an untrained training subset can always be selected before the stop condition is satisfied, ensuring normal training of the network.
  • For example, when J11, J12, J13, and J14 have all been used for network training and are all marked as trained, there is no untrained subset among the first training subsets, which would affect the next round of training; J11, J12, J13, and J14 are therefore re-marked as untrained, and any one of them can be chosen as the standard training subset for the next round of network training.
  • Likewise, when J21, J22, J23, and J24 have all been used for network training and are all marked as trained, there is no untrained subset among the second training subsets; J21, J22, J23, and J24 are re-marked as untrained, and any one of them can be selected as the standard training subset for the next round of network training, until the network training stop condition is met.
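  • The trained/untrained marking could be implemented as below (a sketch with hypothetical names, not the publication's code):

```python
import random

def pick_standard_subset(subsets, trained_flags):
    """Select an untrained subset; re-mark all as untrained once exhausted.

    trained_flags[i] is True once subsets[i] has been used in the current cycle.
    """
    untrained = [i for i, used in enumerate(trained_flags) if not used]
    if not untrained:                        # every subset is marked as trained:
        for i in range(len(trained_flags)):  # re-mark all subsets as untrained
            trained_flags[i] = False
        untrained = list(range(len(subsets)))
    i = random.choice(untrained)   # "any one of them" can be selected, as above
    trained_flags[i] = True        # mark the chosen standard subset as trained
    return subsets[i]
```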
  • In an embodiment, when the network training stop condition is not satisfied, before selecting an untrained training subset as the standard training subset from whichever of the first training set and the second training set the current standard training subset does not belong to, the method further includes: acquiring the training error of the initial fully convolutional neural network; and adjusting the learning rate of the initial fully convolutional neural network when the training error is greater than a preset error.
  • The training error may be the sum of the first training errors corresponding to the first sample abdominal images, that is, the training error is the sum of the first training errors.
  • Alternatively, the training error may be the sum of the second training errors corresponding to the second sample abdominal images, that is, the training error is the sum of the second training errors. In this embodiment, the learning rate of the network can thus also be adjusted according to the error during training, making the network training more accurate.
  • In an embodiment, the method of acquiring the first training set includes: acquiring each first sample original abdominal grayscale image and the pixel classification label map corresponding to each first sample original abdominal grayscale image; transforming each first sample original abdominal grayscale image to obtain each first grayscale transformed image, and applying the same transformation to the pixel classification label map corresponding to each first sample original abdominal grayscale image to obtain the pixel classification label transformation map corresponding to each first grayscale transformed image; and generating the first training set based on each first sample original abdominal grayscale image, the pixel classification label map corresponding to each first sample original abdominal grayscale image, each first grayscale transformed image, and the pixel classification label transformation map corresponding to each first grayscale transformed image.
  • That is, the first sample abdominal images in the first training set include the first sample original abdominal grayscale images and the first grayscale transformed images.
  • The above transformation may include flipping or rotation, so that the number of training sample images can be increased on the basis of the first sample original abdominal grayscale images.
  • Each first sample original abdominal grayscale image is an MRI image.
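  • A sketch of this expansion by flipping and rotation, applying the identical transformation to a grayscale image and its pixel classification label map so that the labels stay aligned (the tensor shapes are assumptions):

```python
import torch

def augment_pair(image, label_map):
    """Yield flipped and rotated copies of an image (C, H, W) and its label map (H, W)."""
    yield torch.flip(image, dims=[-1]), torch.flip(label_map, dims=[-1])  # horizontal flip
    yield torch.flip(image, dims=[-2]), torch.flip(label_map, dims=[-2])  # vertical flip
    for k in (1, 2, 3):  # 90, 180 and 270 degree rotations
        yield (torch.rot90(image, k, dims=(-2, -1)),
               torch.rot90(label_map, k, dims=(-2, -1)))
```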
  • In an embodiment, the method of generating the first training set includes: acquiring the first channel image of each first sample original abdominal grayscale image on each color channel and the second channel image of each first grayscale transformed image on each color channel; normalizing the first channel images and the second channel images respectively to determine the first normalized channel images and the second normalized channel images; and generating the first training set based on each first normalized channel image and each second normalized channel image.
  • The color channels may include an R color channel, a G color channel, and a B color channel.
  • Each first sample abdominal image in the generated first training set thus includes the first normalized images of each first sample original abdominal grayscale image on each color channel and the second normalized images of each first grayscale transformed image on each color channel.
  • The first channel images and the second channel images can be normalized separately according to a preset variance and mean to ensure that the pixel values in the resulting first normalized channel images and second normalized channel images meet the preset requirements. That is, in this embodiment, each first sample abdominal image in the first training set includes the first normalized channel images and the second normalized channel images.
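  • A minimal sketch of the per-channel normalization, assuming illustrative preset mean and variance values (the publication states that presets are used but does not give them):

```python
import torch

# Hypothetical preset mean and variance for the R, G and B channels.
mean = torch.tensor([0.5, 0.5, 0.5]).view(3, 1, 1)
var = torch.tensor([0.0625, 0.0625, 0.0625]).view(3, 1, 1)

def normalize_channels(image):
    """Normalize each color channel of a (3, H, W) image separately."""
    return (image - mean) / var.sqrt()
```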
  • In an embodiment, the method of obtaining the second training set includes: acquiring each second sample original abdominal grayscale image and, for each second sample original abdominal grayscale image, the number of pixels belonging to each category; transforming each second sample original abdominal grayscale image to obtain each second grayscale transformed image; and generating the second training set based on each second sample original abdominal grayscale image, each second grayscale transformed image, and the per-category pixel counts corresponding to each second sample original abdominal grayscale image.
  • That is, each second sample abdominal image in the generated second training set includes the second sample original abdominal grayscale images and the corresponding second grayscale transformed images.
  • In an embodiment, the method of generating the second training set includes: acquiring the third channel image of each second sample original abdominal grayscale image on each color channel and the fourth channel image of each second grayscale transformed image on each color channel; normalizing the third channel images and the fourth channel images respectively to determine the third normalized channel images and the fourth normalized channel images; and generating the second training set based on each third normalized channel image and each fourth normalized channel image.
  • That is, each second sample abdominal image in the second training set includes the third normalized channel images and the fourth normalized channel images.
  • In an embodiment, before the initial fully convolutional neural network is trained based on the first training set and the second training set to obtain the trained fully convolutional neural network, the method further includes: determining, based on each first sample abdominal image in the first training set, the squares corresponding to each first sample abdominal image; taking each intersection point of four adjacent squares corresponding to a first sample abdominal image as a movable point; moving each movable point and updating the squares to obtain quadrilaterals; performing an affine transformation on the region of the first sample abdominal image within each quadrilateral to obtain each affine sub-image; stitching the affine sub-images to obtain the updated first sample abdominal image; and updating the first training set based on each updated first sample abdominal image.
  • Dividing the first sample abdominal image determines the squares.
  • Specifically, the first sample abdominal image can be divided based on row cutting lines along the row direction of the image and column cutting lines along the column direction of the image to obtain the squares (that is, a square grid); the squares together have the same size as the first sample abdominal image.
  • A row cutting line may be a row of pixels in the first sample abdominal image, and a column cutting line a column of pixels in the first sample abdominal image.
  • Two adjacent squares share a coincident line, which belongs to a row cutting line or a column cutting line.
  • A movable point must be a point on the coincident lines between the squares in a group of four adjacent squares, specifically the intersection point of those coincident lines.
  • When a movable point is moved, the shape of each adjacent square changes; that is, in the process of moving the intersection points, the squares are updated into quadrilaterals.
  • The movement is random movement,
  • and the movement distance is within a preset distance range;
  • that is, the preset movement rule is a rule of moving within a preset distance range.
  • For example, the distance between two adjacent movable points is 100, and the movable range of each movable point is 30; that is, the preset distance range is a range within 30 of the movable point's position.
  • Each first sample abdominal image undergoes the above process, so that the squares of each first sample abdominal image are updated. An affine transformation is then performed on the regions of the first sample abdominal image within the quadrilaterals to obtain the affine sub-images, achieving expansion of the image data; the affine sub-images corresponding to each first sample abdominal image are stitched to obtain the updated first sample abdominal image, and the first training set is updated based on the updated first sample abdominal images. That is, the updated first training set includes each updated first sample abdominal image, and subsequent training uses the updated first training set.
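  • An illustrative reimplementation of this square-grid augmentation using scikit-image's piecewise affine transform (the function and parameter names here are assumptions; the publication describes the procedure but gives no code):

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def grid_affine_augment(image, spacing=100, jitter=30, rng=None):
    """Move grid intersections randomly and warp each grid cell affinely.

    Grid points are spaced `spacing` apart and each interior point is moved
    randomly within `jitter` of its position (100 and 30 in the example above).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    src = np.array([(x, y)
                    for y in range(0, h + 1, spacing)
                    for x in range(0, w + 1, spacing)], dtype=float)

    dst = src.copy()
    interior = ((src[:, 0] > 0) & (src[:, 0] < w) &
                (src[:, 1] > 0) & (src[:, 1] < h))  # border points stay fixed
    dst[interior] += rng.uniform(-jitter, jitter, size=dst[interior].shape)

    tform = PiecewiseAffineTransform()
    tform.estimate(src, dst)  # one affine transform per grid triangle
    # warp() transforms the pieces and stitches them back together; for a
    # first-sample image the same tform would also be applied to its label map.
    return warp(image, tform)
```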
  • In an embodiment, before the training to obtain the trained fully convolutional neural network, the method similarly includes: determining, based on each second sample abdominal image in the second training set, the division squares corresponding to each second sample abdominal image; taking each intersection point of four adjacent division squares corresponding to a second sample abdominal image as a movable intersection point; moving each movable intersection point and updating the division squares to obtain division quadrilaterals; performing an affine transformation on the region of the second sample abdominal image within each division quadrilateral to obtain each abdominal affine sub-image; and stitching the abdominal affine sub-images to obtain the updated second sample abdominal image.
  • The second training set is updated based on each updated second sample abdominal image.
  • Dividing the second sample abdominal image determines the division squares; the process is similar to the division process for the first sample abdominal image described above.
  • The updated second training set includes each updated second sample abdominal image, and subsequent training uses the updated second training set.
  • In a specific embodiment, the images can be divided into training images, verification images, and test images according to the ratio 7:2:1.
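  • For example (a sketch; the publication only gives the ratio), the split could be performed as:

```python
import random

def split_721(images, seed=0):
    """Split images into training, verification and test sets at a 7:2:1 ratio."""
    images = list(images)
    random.Random(seed).shuffle(images)
    n_train, n_val = int(0.7 * len(images)), int(0.2 * len(images))
    return (images[:n_train],                  # training images
            images[n_train:n_train + n_val],   # verification images
            images[n_train + n_val:])          # test images
```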
  • In this specific embodiment, before the trained fully convolutional neural network can be used, it is determined as follows.
  • FIG. 2 (a) is a pixel classification label image
  • The first training set includes each first sample abdominal image (for example, 300 images) and the pixel classification label map corresponding to each first sample abdominal image.
  • The second training set includes each second sample abdominal image (for example, 9000 images) and, for each second sample abdominal image, the number of pixels belonging to each category.
  • The initial fully convolutional neural network is then obtained by initialization.
  • The network configuration parameters are: the learning rate is 0.001 and follows a step-down decay strategy, with the configuration parameter gamma set to 0.1 and the stepsize set to 1500; the impulse (momentum) is set to 0.9; the maximum number of iterations (that is, the preset number of iterations) is 50000; and each training step inputs 32 images as one batch for learning.
  • The training code uses the PyTorch deep learning framework, with the multi-label input layer and the multi-label sigmoid cross-entropy function layer from the DRML public code added; it is compiled on the Ubuntu system, the network parameters of the algorithm are configured, and training is performed.
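  • The listed hyperparameters map directly onto a PyTorch optimizer and scheduler; a sketch assuming plain SGD with momentum (the publication gives the values but not the optimizer class):

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)  # placeholder for the real network

optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# Step-down decay: multiply the learning rate by gamma every stepsize iterations.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1500, gamma=0.1)

MAX_ITERATIONS = 50000  # preset maximum number of iterations
BATCH_SIZE = 32         # 32 images input per training step
```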
  • FIG. 3 is a schematic diagram of the fully convolutional neural network. On the left side of the network, after four rounds of convolution, pooling, and normalization operations, the image gradually shrinks; on the right side, after deconvolution, pooling, and normalization operations, the image gradually grows back. Before each deconvolution, the network concatenates, along the channel dimension, the image passed from the previous layer with the earlier convolution image of the same size; the purpose of this is to extract the detailed features needed for semantic segmentation while taking the overall information into account. The last convolutional layer is followed by a softmax layer (the classification layer), and a fully connected layer can also be attached, whose output has three neurons (nodes) corresponding to the pixel counts of subcutaneous fat, intra-abdominal fat, and background.
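  • A compact sketch of such a network; the channel widths, depth, and normalization choices are assumptions, and only the overall shape (four shrinking stages, four growing stages with channel-wise skip connections, a classification layer, and an optional three-node fully connected head) follows the description above:

```python
import torch
import torch.nn as nn

def stage(c_in, c_out):
    """Convolution + normalization + activation, one stage of FIG. 3."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

class AbdominalFCN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        chans = [16, 32, 64, 128]  # hypothetical channel widths
        self.downs = nn.ModuleList()
        c_prev = 3                 # three normalized color channels as input
        for c in chans:
            self.downs.append(stage(c_prev, c))
            c_prev = c
        self.pool = nn.MaxPool2d(2)
        self.ups, self.decs = nn.ModuleList(), nn.ModuleList()
        for c in reversed(chans):
            self.ups.append(nn.ConvTranspose2d(c_prev, c, 2, stride=2))
            self.decs.append(stage(2 * c, c))  # 2*c: skip concat doubles channels
            c_prev = c
        self.classifier = nn.Conv2d(c_prev, n_classes, 1)  # softmax applied in the loss
        self.count_head = nn.Linear(n_classes, 3)  # three neurons: pixel counts

    def forward(self, x):
        skips = []
        for down in self.downs:
            x = down(x)
            skips.append(x)
            x = self.pool(x)  # the image gradually shrinks
        for up, dec, skip in zip(self.ups, self.decs, reversed(skips)):
            x = up(x)  # deconvolution: the image gradually grows back
            x = dec(torch.cat([x, skip], dim=1))  # channel-wise skip connection
        seg = self.classifier(x)                        # per-pixel class scores
        counts = self.count_head(seg.mean(dim=(2, 3)))  # per-category count estimates
        return seg, counts
```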
  • Each first training subset corresponding to the first training set is determined, and each second training subset corresponding to the second training set is determined.
  • A preset number (for example, 32) of first sample abdominal images and their corresponding pixel classification label maps may be selected from the first training set as one first training subset; the union of the first training subsets is the first training set, and their pairwise intersections are empty.
  • Similarly, a preset number of second sample abdominal images and their per-category pixel counts can be selected from the second training set as one second training subset; the union of the second training subsets is the second training set, and their pairwise intersections are empty.
  • The training process of the initial fully convolutional neural network includes forward propagation, error calculation, and backpropagation; each forward propagation includes convolution, pooling, and normalization steps.
  • After training on a standard training subset, the initial fully convolutional neural network is updated and the number of iterations is increased by one. At this point, the verification images can be used to validate the updated initial fully convolutional neural network: if the validation result is better than that of the network updated after the previous round of training, the updated initial fully convolutional neural network is saved and can later be used for testing on the test images.
  • When the maximum number of iterations is reached, training ends and the test images can be tested. If it has not been reached, it is checked whether the training error of the fully convolutional neural network has been effectively reduced after this iteration (that is, after training on the standard training subset is completed), i.e., whether it is less than or equal to the preset error. If so, the error has been effectively reduced; if not, it has not, and the learning rate of the network can then be adjusted. Subsequently, the test images can be used to test the trained fully convolutional neural network to obtain a test result, and the segmentation accuracy of the trained fully convolutional neural network can be determined from the test result.
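  • The round-level bookkeeping described here could be sketched as follows (hypothetical names; the factor by which the learning rate is adjusted is an assumption):

```python
import torch

def after_round(model, optimizer, val_score, best_score, train_error, preset_error):
    """Save the network when validation improves; lower the learning rate
    when the training error was not effectively reduced."""
    if val_score > best_score:  # better result on the verification images
        torch.save(model.state_dict(), "best_fcn.pt")
        best_score = val_score
    if train_error > preset_error:  # error not effectively reduced
        for group in optimizer.param_groups:
            group["lr"] *= 0.1      # adjustment factor is an assumption
    return best_score
```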
  • Moreover, the image within each square is affine-transformed to achieve data expansion, which helps the fully convolutional neural network extract the texture features of the image. Existing solutions cannot achieve learning corresponding to each individual pixel: they need to input the entire neighborhood of each pixel separately, which ignores the influence of the overall image on that pixel.
  • In this embodiment, the segmentation task is applied to unet. Through end-to-end learning and modification of parameters by backpropagation, the network adaptively learns the features of the image.
  • In addition, this embodiment uses the architecture of the front feature-extraction part of the fully convolutional neural network (that is, the convolution and pooling architecture) and performs multi-label training on the back-end feature-combination part (corresponding to the classification layer and the fully connected layer), which allows the neural network to converge well. Moreover, for different populations (thin, normal, and obese), it is not necessary to train separate networks; the trained fully convolutional neural network matches the existing data well and obtains a higher accuracy rate.
  • When an abdominal image needs to be segmented, the trained fully convolutional neural network is used to segment it, which can improve the segmentation accuracy.
  • Table 1 shows the accuracy results of segmenting each abdominal image to be tested using the segmentation method of this embodiment.
  • In an embodiment, an abdominal image segmentation device includes an image-to-be-tested acquisition module and a segmented image determination module, wherein:
  • the image-to-be-tested acquisition module is used to acquire the abdominal image to be tested; and
  • the segmented image determination module is used to classify each pixel in the abdominal image to be tested based on the trained fully convolutional neural network and to determine the segmented image corresponding to the abdominal image to be tested; wherein the trained fully convolutional neural network is determined by training based on the first training set and the second training set.
  • The first training set includes each first sample abdominal image and the pixel classification label map corresponding to each first sample abdominal image.
  • The second training set includes each second sample abdominal image and, for each second sample abdominal image, the number of pixels belonging to each category.
  • the above device further includes:
  • a training set acquisition module for acquiring the first training set and the second training set
  • the initialization module is used to initialize an initial fully convolutional neural network, and the initial fully convolutional neural network includes a convolutional layer and a classification layer;
  • a training module configured to train the initial fully convolutional neural network to obtain the trained fully convolutional neural network based on the first training set and the second training set;
  • In the process of training the initial fully convolutional neural network based on the first training set, the initial fully convolutional neural network is updated according to the first training error;
  • the first training error is determined from the training pixel classification label map output by the classification layer after a first sample abdominal image in the first training set is input to the initial fully convolutional neural network, and the pixel classification label map of the first sample abdominal image;
  • in the process of training the initial fully convolutional neural network based on the second training set, the initial fully convolutional neural network is updated according to the second training error;
  • the second training error is determined from the per-category training pixel counts output by the fully connected layer applied to the training convolution image output by the convolutional layer after a second sample abdominal image in the second training set is input to the initial fully convolutional neural network, and the number of pixels of the second sample abdominal image belonging to each category.
  • the training module includes:
  • a subset determination module configured to determine each first training subset based on the first training set, and determine each second training subset based on the second training set;
  • the neural network training module is used to alternately select an untrained standard training subset from the first training subsets and the second training subsets, and to train the initial fully convolutional neural network based on each standard training subset to obtain the trained fully convolutional neural network; wherein the standard training subsets selected in two adjacent rounds come from different training sets.
  • the neural network training module includes:
  • a selection module for selecting a training subset from each of the first training subsets as a standard training subset
  • An update module configured to train the initial fully convolutional neural network based on the standard training subset and update the initial fully convolutional neural network
  • a trained fully convolutional neural network determination module, used, when the network training stop condition is not satisfied, to select an untrained training subset as the standard training subset from whichever of the first training set and the second training set the current standard training subset does not belong to, and to return to the update module to perform the step of training the initial fully convolutional neural network based on the standard training subset and updating it, until the network training stop condition is satisfied, whereupon the updated initial fully convolutional neural network is taken as the trained fully convolutional neural network.
  • the above device further includes:
  • a standard labeling module, used to mark the standard training subset as trained after the update module updates the initial fully convolutional neural network and before the network training stop condition is satisfied;
  • a first subset labeling module configured to mark each first training subset as untrained when each of the first training subsets is marked as trained
  • the second subset labeling module is configured to mark each second training subset as untrained when each of the second training subsets is marked as trained respectively.
  • the training set acquisition module includes:
  • a first image acquisition module used to acquire each first sample original abdominal grayscale image and the pixel classification label map corresponding to each first sample original abdominal grayscale image
  • a first image transformation module, configured to transform each first sample original abdominal grayscale image to obtain each first grayscale transformed image, and to apply the same transformation to the pixel classification label map corresponding to each first sample original abdominal grayscale image to obtain the pixel classification label transformation map corresponding to each first grayscale transformed image; and
  • a first training set generation module, configured to generate the first training set based on each first sample original abdominal grayscale image, the pixel classification label map corresponding to each first sample original abdominal grayscale image, each first grayscale transformed image, and the pixel classification label transformation map corresponding to each first grayscale transformed image.
  • the first training set generation module includes:
  • a first channel image acquisition module, configured to respectively acquire the first channel image of each first sample original abdominal grayscale image on each color channel and the second channel image of each first grayscale transformed image on each color channel;
  • the first normalization module is used to normalize each first channel image and each second channel image respectively to determine each first normalized channel image and each second normalized channel image;
  • the first training set determination module is configured to generate a first training set based on each first normalized channel image and each second normalized channel image.
  • the training set acquisition module includes:
  • a second image acquisition module, configured to acquire each second sample original abdominal grayscale image and, for each second sample original abdominal grayscale image, the number of pixels belonging to each category;
  • a second image transformation module, configured to transform each second sample original abdominal grayscale image to obtain each second grayscale transformed image; and
  • a second training set generation module, configured to generate the second training set based on each second sample original abdominal grayscale image, each second grayscale transformed image, and the per-category pixel counts corresponding to each second sample original abdominal grayscale image.
  • the above device further includes:
  • a grid determination module, used, before the training module trains the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network, to determine, based on each first sample abdominal image in the first training set, the squares corresponding to each first sample abdominal image;
  • a movable point determination module which is used to take the intersection points of the four squares corresponding to the first sample abdominal image as the movable points;
  • the quadrilateral determination module is used to move each movable point, update the squares, and obtain each quadrilateral;
  • the first affine transformation module is used to perform affine transformation on the regions of the first sample abdominal image in each quadrilateral to obtain each affine sub-picture;
  • the first stitching module is used to stitch each affine sub-picture to obtain the updated first sample abdominal image
  • the first updating module is configured to update the first training set based on the updated first sample abdominal images.
  • the above device further includes:
  • a division square determination module, used, before the training module trains the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network, to determine, based on each second sample abdominal image in the second training set, the division squares corresponding to each second sample abdominal image;
  • a movable intersection point determination module, used to take the intersection points of four adjacent division squares corresponding to the second sample abdominal image as the movable intersection points;
  • a division quadrilateral determination module, used to move each movable intersection point and update each division square to obtain each division quadrilateral;
  • a second affine transformation module, used to perform an affine transformation on the region of the second sample abdominal image within each division quadrilateral to obtain each abdominal affine sub-image;
  • a second stitching module, used to stitch each abdominal affine sub-image to obtain the updated second sample abdominal image; and
  • a second update module, used to update the second training set based on each updated second sample abdominal image.
  • Dividing the second sample abdominal image determines the division squares; the process is similar to the division process for the first sample abdominal image described above.
  • The updated second training set includes each updated second sample abdominal image, and subsequent training uses the updated second training set.
  • The abdominal image segmentation device provided by the present application may be implemented in the form of a computer program, which may run on a computer device as shown in FIG. 4; the non-volatile storage medium of the computer device may store the program modules constituting the abdominal image segmentation device, for example, the image-to-be-tested acquisition module and the segmented image determination module.
  • Each program module includes computer-readable instructions for causing the computer device to perform the steps in the abdominal image segmentation methods of various embodiments of the present application described in this specification.
  • For example, the computer device may acquire the abdominal image to be tested through the image-to-be-tested acquisition module, and then classify, through the segmented image determination module and based on the trained fully convolutional neural network, each pixel in the abdominal image to be tested to determine the segmented image corresponding to the abdominal image to be tested;
  • wherein the trained fully convolutional neural network is determined by training based on a first training set and a second training set, the first training set includes each first sample abdominal image and the pixel classification label map corresponding to each first sample abdominal image, and the second training set includes each second sample abdominal image and, for each second sample abdominal image, the number of pixels belonging to each category.
  • FIG. 4 shows an internal structure diagram of a computer device in an embodiment.
  • The computer device includes a processor, a memory, and a network interface connected by a system bus.
  • the memory includes a non-volatile storage medium and an internal memory.
  • The non-volatile storage medium of the computer device stores an operating system and may also store computer-readable instructions that, when executed by the processor, cause the processor to implement the abdominal image segmentation method described above.
  • The internal memory may also store computer-readable instructions that, when executed by the processor, cause the processor to perform the abdominal image segmentation method.
  • the computer device may further include an input device and a display screen.
  • the display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen.
  • the input device of the computer device may be a touch layer covering the display screen, buttons, a trackball or a touchpad provided on the computer device casing, or an external keyboard, touchpad or mouse.
  • FIG. 4 is only a block diagram of part of the structure related to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, combine some components, or have a different component arrangement.
  • a computer device is provided, which includes a memory and a processor.
  • the memory stores computer-readable instructions.
  • when the computer-readable instructions are executed by the processor, the processor is caused to perform the following steps: acquiring an abdominal image to be tested; classifying, based on the trained fully convolutional neural network, each pixel in the abdominal image to be tested, to determine the segmented image corresponding to the abdominal image to be tested; wherein the trained fully convolutional neural network is determined by training based on the first training set and the second training set, the first training set includes the first sample abdominal images and the pixel classification label map corresponding to each first sample abdominal image, and the second training set includes the second sample abdominal images and, for each second sample abdominal image, the number of pixels belonging to each category.
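As a rough illustration of the network shape these steps imply, the sketch below pairs a shared convolutional body with two heads: a per-pixel classification layer trained against the label maps of the first training set, and a fully connected layer predicting the three per-category pixel counts used with the second training set. The class count of three (intra-abdominal fat, subcutaneous fat, background) comes from the description; the layer sizes, depths, and pooling choice are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DualHeadFCN(nn.Module):
    """Hypothetical sketch: shared convolutional body, a pixel head for
    per-pixel classification, and a count head for per-category pixel counts."""
    def __init__(self, n_classes=3):  # intra-abdominal fat, subcutaneous fat, background
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
        )
        self.pixel_head = nn.Conv2d(64, n_classes, 1)  # feeds the classification layer
        self.count_head = nn.Linear(64, n_classes)     # fully connected layer, 3 outputs

    def forward(self, x):
        feat = self.body(x)                  # the "training convolution image"
        pixel_logits = self.pixel_head(feat) # per-pixel class scores
        pooled = feat.mean(dim=(2, 3))       # global pooling before the FC layer (assumption)
        counts = self.count_head(pooled)     # predicted pixels per category
        return pixel_logits, counts
```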
  • the method for determining the trained fully convolutional neural network includes: obtaining the first training set and the second training set; initializing to obtain the initial fully convolutional neural network, the initial fully convolutional neural network including a convolutional layer and a classification layer; and training the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network.
  • in the process of training based on the first training set, the initial fully convolutional neural network is updated according to a first training error; the first training error is determined from the training pixel classification label map output by the classification layer when a first sample abdominal image of the first training set is input into the initial fully convolutional neural network, compared with the pixel classification label map of that first sample abdominal image.
  • in the process of training based on the second training set, the initial fully convolutional neural network is updated according to a second training error; the second training error is determined from the output, after a fully connected layer, of the training convolution image produced by the convolutional layer when a second sample abdominal image of the second training set is input into the initial fully convolutional neural network, compared with the number of pixels belonging to each category in that second sample abdominal image.
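In PyTorch terms, the two training errors above might be computed as follows. The cross-entropy form of the first error and the squared-error form of the second are assumptions; the text only specifies which quantities are compared.

```python
import torch.nn.functional as F

def first_training_error(pixel_logits, label_map):
    """pixel_logits: (B, C, H, W) from the classification layer;
    label_map: (B, H, W) integer category per pixel."""
    return F.cross_entropy(pixel_logits, label_map)

def second_training_error(pred_counts, true_counts):
    """pred_counts, true_counts: (B, C) pixel counts per category,
    the predictions coming from the fully connected layer."""
    return F.mse_loss(pred_counts, true_counts)
```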
  • the step of training the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network includes: determining first training subsets based on the first training set, and determining second training subsets based on the second training set; and alternately selecting an untrained standard training subset from the first training subsets and the second training subsets, and training the initial fully convolutional neural network based on each standard training subset, to obtain the trained fully convolutional neural network.
  • the standard training subsets selected in two adjacent rounds come from different training sets.
  • obtaining the trained fully convolutional neural network includes: selecting a training subset from the first training subsets as the standard training subset; training the initial fully convolutional neural network based on the standard training subset to update the initial fully convolutional neural network; when the network training stop condition is not met, selecting an untrained training subset, from whichever of the first training set and the second training set the current standard training subset does not belong to, as the new standard training subset, and returning to the step of training the initial fully convolutional neural network based on the standard training subset and updating the initial fully convolutional neural network, until the network training stop condition is satisfied; the updated initial fully convolutional neural network is then used as the trained fully convolutional neural network.
  • after the computer-readable instructions, when executed by the processor, cause the processor to update the initial fully convolutional neural network, and before the network training stop condition is satisfied, the processor is further caused to execute: marking the standard training subset as trained; when all of the first training subsets are marked as trained, marking each of the first training subsets as untrained; and when all of the second training subsets are marked as trained, marking each of the second training subsets as untrained.
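The alternating selection and the trained/untrained bookkeeping can be sketched as a generator; the subset contents, the stop condition, and starting from the first training set are placeholders here.

```python
import random

def alternate_subsets(first_subsets, second_subsets, stop):
    """Yield one untrained standard training subset per round, alternating
    between the two training sets and resetting a pool once exhausted."""
    sets = (first_subsets, second_subsets)
    untrained = {0: set(range(len(first_subsets))), 1: set(range(len(second_subsets)))}
    turn = 0  # adjacent rounds use different training sets
    while not stop():
        if not untrained[turn]:                       # all marked trained
            untrained[turn] = set(range(len(sets[turn])))  # re-mark as untrained
        idx = random.choice(sorted(untrained[turn]))
        untrained[turn].discard(idx)                  # mark as trained
        yield sets[turn][idx]                         # this round's standard subset
        turn = 1 - turn
```

A training loop would then consume it as `for subset in alternate_subsets(firsts, seconds, stop): train_one_round(subset)`.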
  • the method for acquiring the first training set includes: acquiring the first sample original abdominal grayscale images and the pixel classification label map corresponding to each first sample original abdominal grayscale image; transforming each first sample original abdominal grayscale image to obtain the first grayscale-transformed images, and performing the same transformation on the pixel classification label map corresponding to each first sample original abdominal grayscale image to obtain the pixel classification label transformation map corresponding to each first grayscale-transformed image; and generating the first training set based on the first sample original abdominal grayscale images, the pixel classification label maps corresponding to them, the first grayscale-transformed images, and the pixel classification label transformation maps corresponding to the first grayscale-transformed images.
  • the method of generating the first training set includes: acquiring the first channel image of each first sample original abdominal grayscale image on each color channel and the second channel image of each first grayscale-transformed image on each color channel; normalizing the first channel images and the second channel images respectively, to determine the first normalized channel images and the second normalized channel images; and generating the first training set based on the first normalized channel images and the second normalized channel images.
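A minimal sketch of this channel step, assuming the single-channel grayscale image is replicated across three color channels and each channel is normalized with a preset mean and variance; the numeric values below are placeholders, not values given by the text.

```python
import numpy as np

def to_normalized_channels(gray, mean=(0.5, 0.5, 0.5), std=(0.25, 0.25, 0.25)):
    """gray: (H, W) grayscale image in [0, 1].
    Returns a (3, H, W) stack of normalized channel images."""
    channels = np.repeat(gray[np.newaxis, :, :], 3, axis=0)  # replicate to 3 channels
    mean = np.asarray(mean).reshape(3, 1, 1)
    std = np.asarray(std).reshape(3, 1, 1)
    return (channels - mean) / std
```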
  • the method for acquiring the second training set includes: acquiring the second sample original abdominal grayscale images and, for each second sample original abdominal grayscale image, the number of pixels belonging to each category; transforming each second sample original abdominal grayscale image to obtain the second grayscale-transformed images; and generating the second training set based on the second sample original abdominal grayscale images, the second grayscale-transformed images, and the per-category pixel counts corresponding to the second sample original abdominal grayscale images.
  • before the computer-readable instructions, when executed by the processor, cause the processor to train the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network, the processor is further caused to execute: determining, based on the first sample abdominal images in the first training set, the squares corresponding to each first sample abdominal image; taking the intersection points of each group of four squares of a first sample abdominal image as the movable points; moving each movable point and updating the squares to obtain the quadrilaterals; performing an affine transformation on the region of the first sample abdominal image inside each quadrilateral to obtain the affine sub-images; stitching the affine sub-images to obtain the updated first sample abdominal image; and updating the first training set based on the updated first sample abdominal images.
  • before the computer-readable instructions, when executed by the processor, cause the processor to train the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network, the processor is further caused to execute: determining, based on the second sample abdominal images in the second training set, the segmentation squares corresponding to each second sample abdominal image; taking the intersection points of each group of four segmentation squares of a second sample abdominal image as the movable intersection points; moving each movable intersection point and updating the segmentation squares to obtain the segmentation quadrilaterals; performing an affine transformation on the region of the second sample abdominal image inside each segmentation quadrilateral to obtain the abdominal affine sub-images; stitching the abdominal affine sub-images to obtain the updated second sample abdominal image; and updating the second training set based on the updated second sample abdominal images.
  • Segmenting the second sample abdominal image determines the segmentation squares; the process is similar to the foregoing segmentation of the first sample abdominal image.
  • the updated second training set includes the updated second sample abdominal images, and subsequent training is performed using the updated second training set.
  • a storage medium storing computer-readable instructions is provided; when the computer-readable instructions are executed by one or more processors, the one or more processors are caused to perform the following steps: acquiring an abdominal image to be tested;
  • classifying, based on the trained fully convolutional neural network, each pixel in the abdominal image to be tested, and determining the segmented image corresponding to the abdominal image to be tested; wherein the trained fully convolutional neural network is determined by training based on a first training set and a second training set.
  • the first training set includes the first sample abdominal images and the pixel classification label map corresponding to each first sample abdominal image.
  • the second training set includes the second sample abdominal images and, for each second sample abdominal image, the number of pixels belonging to each category.
  • the method of determining the trained fully convolutional neural network includes: obtaining the first training set and the second training set; initializing to obtain the initial fully convolutional neural network, the initial fully convolutional neural network including a convolutional layer and a classification layer; and training the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network.
  • in the process of training based on the first training set, the initial fully convolutional neural network is updated according to a first training error; the first training error is determined from the training pixel classification label map output by the classification layer when a first sample abdominal image of the first training set is input into the initial fully convolutional neural network, compared with the pixel classification label map of that first sample abdominal image.
  • in the process of training based on the second training set, the initial fully convolutional neural network is updated according to a second training error; the second training error is determined from the output, after a fully connected layer, of the training convolution image produced by the convolutional layer when a second sample abdominal image of the second training set is input into the initial fully convolutional neural network, compared with the number of pixels belonging to each category in that second sample abdominal image.
  • the step of training the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network includes: determining first training subsets based on the first training set, and determining second training subsets based on the second training set; and alternately selecting an untrained standard training subset from the first training subsets and the second training subsets, and training the initial fully convolutional neural network based on each standard training subset, to obtain the trained fully convolutional neural network.
  • the standard training subsets selected in two adjacent rounds come from different training sets.
  • obtaining the trained fully convolutional neural network includes: selecting a training subset from the first training subsets as the standard training subset; training the initial fully convolutional neural network based on the standard training subset to update the initial fully convolutional neural network; when the network training stop condition is not met, selecting an untrained training subset, from whichever of the first training set and the second training set the current standard training subset does not belong to, as the new standard training subset, and returning to the step of training the initial fully convolutional neural network based on the standard training subset and updating the initial fully convolutional neural network, until the network training stop condition is satisfied; the updated initial fully convolutional neural network is then used as the trained fully convolutional neural network.
  • after the computer-readable instructions, when executed by the processor, cause the processor to update the initial fully convolutional neural network, and before the network training stop condition is satisfied, the processor is further caused to execute: marking the standard training subset as trained; when all of the first training subsets are marked as trained, marking each of the first training subsets as untrained; and when all of the second training subsets are marked as trained, marking each of the second training subsets as untrained.
  • the method for acquiring the first training set includes: acquiring the first sample original abdominal grayscale images and the pixel classification label map corresponding to each first sample original abdominal grayscale image; transforming each first sample original abdominal grayscale image to obtain the first grayscale-transformed images, and performing the same transformation on the pixel classification label map corresponding to each first sample original abdominal grayscale image to obtain the pixel classification label transformation map corresponding to each first grayscale-transformed image; and generating the first training set based on the first sample original abdominal grayscale images, the pixel classification label maps corresponding to them, the first grayscale-transformed images, and the pixel classification label transformation maps corresponding to the first grayscale-transformed images.
  • the method of generating the first training set includes: acquiring the first channel image of each first sample original abdominal grayscale image on each color channel and the second channel image of each first grayscale-transformed image on each color channel; normalizing the first channel images and the second channel images respectively, to determine the first normalized channel images and the second normalized channel images; and generating the first training set based on the first normalized channel images and the second normalized channel images.
  • the method for acquiring the second training set includes: acquiring the second sample original abdominal grayscale images and, for each second sample original abdominal grayscale image, the number of pixels belonging to each category; transforming each second sample original abdominal grayscale image to obtain the second grayscale-transformed images; and generating the second training set based on the second sample original abdominal grayscale images, the second grayscale-transformed images, and the per-category pixel counts corresponding to the second sample original abdominal grayscale images.
  • before the computer-readable instructions, when executed by the processor, cause the processor to train the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network, the processor is further caused to execute: determining, based on the first sample abdominal images in the first training set, the squares corresponding to each first sample abdominal image; taking the intersection points of each group of four squares of a first sample abdominal image as the movable points; moving each movable point and updating the squares to obtain the quadrilaterals; performing an affine transformation on the region of the first sample abdominal image inside each quadrilateral to obtain the affine sub-images; stitching the affine sub-images to obtain the updated first sample abdominal image; and updating the first training set based on the updated first sample abdominal images.
  • before the computer-readable instructions, when executed by the processor, cause the processor to train the initial fully convolutional neural network based on the first training set and the second training set to obtain the trained fully convolutional neural network, the processor is further caused to execute: determining, based on the second sample abdominal images in the second training set, the segmentation squares corresponding to each second sample abdominal image; taking the intersection points of each group of four segmentation squares of a second sample abdominal image as the movable intersection points; moving each movable intersection point and updating the segmentation squares to obtain the segmentation quadrilaterals; performing an affine transformation on the region of the second sample abdominal image inside each segmentation quadrilateral to obtain the abdominal affine sub-images; stitching the abdominal affine sub-images to obtain the updated second sample abdominal image; and updating the second training set based on the updated second sample abdominal images.
  • Segmenting the second sample abdominal image determines the segmentation squares; the process is similar to the foregoing segmentation of the first sample abdominal image.
  • the updated second training set includes the updated second sample abdominal images, and subsequent training is performed using the updated second training set.
  • the steps in the embodiments of the present application are not necessarily executed in the order indicated by the step numbers. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and the steps may be executed in other orders. Moreover, at least some of the steps in each embodiment may include multiple sub-steps or stages; these sub-steps or stages are not necessarily completed at the same moment but may be executed at different times, and their execution order is not necessarily sequential; they may be executed in turn or alternately with at least part of the other steps, or of the sub-steps or stages of other steps.
  • Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

A method of segmenting an abdominal image, a computer device and a storage medium. The method includes: acquiring an abdominal image to be tested; classifying, based on a trained fully convolutional neural network, each pixel in the abdominal image to be tested, and determining the segmented image corresponding to the abdominal image to be tested; wherein the trained fully convolutional neural network is determined by training based on a first training set and a second training set, the first training set includes first sample abdominal images and the pixel classification label map corresponding to each first sample abdominal image, and the second training set includes second sample abdominal images and, for each second sample abdominal image, the number of pixels belonging to each category. The method can improve segmentation accuracy.

Description

腹部图像分割方法、计算机设备及存储介质
本申请要求于2018年11月08日提交中国专利局,申请号为2018113249010,申请名称为“腹部图像分割方法、计算机设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及医疗技术领域,特别涉及一种腹部图像分割方法、计算机设备及存储介质。
背景技术
人的脂肪分成皮下脂肪和腹内脂肪,确定皮下脂肪和腹内脂肪在人体中的含量是衡量人们健康水平的重要指标,同时也是检测一些疾病(比如糖尿病等)的参考指标。目前,对mri(磁共振成像)腹部图像进行脂肪分割主要有两种方案,第一种是具有相关医学知识的人员手动对脂肪进行分割。第二种是通过计算机算法对腹内脂肪进行分割。
然而,由于腹内脂肪往往会跟一些非脂肪区域灰度相近,通过第一种方案不易区分,易导致分割准确性不高。上述第二种方案的缺陷在于利用算法分割图像的好坏取决于图像的质量,对灰度信息的过分依赖,不能很好地对图像进行分割,导致分割结果准确性不高。
发明内容
根据本申请提供的各种实施例,提出一种腹部图像分割方法、计算机设备及存储介质。
一种腹部图像分割方法,包括以下步骤:
获取待测腹部图像;
基于已训练的全卷积神经网络,对所述待测腹部图像中各像素进行分类,确定所述待测腹部图像对应的分割图像;其中,所述已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,所述第一训练集包括各第一样本腹部图像以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
一种计算机设备,包括存储器以及处理器,所述存储器存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得处理器执行如下步骤:
获取待测腹部图像;
基于已训练的全卷积神经网络,对所述待测腹部图像中各像素进行分类,确定所述待测腹部图像对应的分割图像;其中,所述已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,所述第一训练集包括各第一样本腹部图像以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
一个或多个存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行如下步骤:
获取待测腹部图像;
基于已训练的全卷积神经网络,对所述待测腹部图像中各像素进行分类,确定所述待测腹部图像对应的分割图像;其中,所述已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,所述第一训练集包括各第一样本腹部图像以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征、目的和优点将从说明书、附图以及权利要求书变得明显。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为一个实施例的腹部图像分割方法的流程示意图;
图2为另一个实施例的腹部图像分割方法中仿射变化原理图;
图3为另一个实施例的全卷积神经网络的原理图;
图4为一个实施例中计算机设备的结构框图。
具体实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
如图1所示,在一个实施例中,提供了一种腹部图像分割方法。本实施例主要以该方法应用于计算机设备来举例说明(即该方法可由计算机设备执行)。该腹部图像分割方法具体包括如下步骤:
S110:获取待测腹部图像。
对待测腹部图像进行分割,需对腹部图像中各像素均进行分类,即确定各像素的类别,则首先需要获取待测腹部图像。在一个示例中,待测腹部图像为待测用户的腹部mri图像。
S120:基于已训练的全卷积神经网络,对待测腹部图像中各像素进行分类,确定待测腹部图像对应的分割图像。
其中,已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,第一训练集包括各第一样本腹部图像以及各第一样本腹部图像对应的像素分类标签图,第二训练集包括各第二样本腹部图像以及各第二样本腹部图像分别对应属于各类别的像素数量。
预先确定已训练的全卷积神经网络,在进行图像分割时,可利用已训练的全卷积神经网络,对待测腹部图像中各像素进行分类,从而实现图像分割,确定待测腹部图像对应的分割图像。例如,腹部图像为一个具有M行N列的图像,则腹部图像中有M*N个像素,在分割过程中,需要对M*N个像素分别进行分类,确定每个像素类别,可为每个像素赋予其所在类别对应的值,则相同类别的像素对应的值是相同的,从而实现图像的分割。在一个实施例中,第一样本腹部图像和第二用本用户的腹部图像均为腹部mri图像。
在本实施例中,在进行网络训练过程中,利用了两种不同训练集,即第一训练集和第二训练集。其中,第一训练集中包括各第一样本腹部图像分别像素分类标签图,而第二训练集中不包括各第二样本腹部图像对应的像素分类标签图,而是包括各第二样本腹部图像分别对应属于各类别的像素数量。例如,第二训练集包括第二样本用户A的腹部图像和第二样本用户B的腹部图像,则还包括第二样本用户A的腹部图像中属于各类别的像素数量(比如,各类别包括第一类别、第二类别和第三类别,第二样本用户A的腹部图像中各像素中属于第一类别的像素数量为S1,属于第二类别的像素数量为S2,属于第三类别的像素数量为S3)以及第二样本用户B的腹部图像中属于各类别的像素数量。
在一个实施例中,上述各类别包括腹内脂肪类别、皮下脂肪类别以及背景类别。则第二样本腹部图像分别对应属于各类别的像素数量,为该第二样本腹部图像对应属于腹内脂肪类别、皮下脂肪类别以及背景类别的像素数量。即在本实施例中,是将腹部图像中各像素分类为腹内脂肪、皮下脂肪和背景,实现将腹部图像分成腹内脂肪、皮下脂肪和背景。
上述腹部图像分割方法,通过已训练的全卷积神经网络对待测腹部图像进行分割得到分割图像,能有效对图像进行分割。且已训练的全卷积神经网络通过两种不同的训练集训练确定,即基于第一训练集以及第二训练集训练确定,其中,第一训练集包括各第一样本腹部图像以及各第一样本腹部图像 对应的像素分类标签图(大小与腹部图像大小相同,像素分类标签图中每个值对应腹部图像对应的像素的分类标签),第二训练集包括各第二样本腹部图像以及各第二样本腹部图像分别对应属于各类别的像素数量,通过不同的训练集进行训练确定已训练的全卷积神经网络,可提高全卷积神经网络的准确性,进而提高对腹部图像进行分割的准确性。
在一个实施例中,确定已训练的全卷积神经网络的方式包括:获取第一训练集以及第二训练集;初始化得到初始全卷积神经网络,初始全卷积神经网络包括卷积层和分类层;基于第一训练集以及第二训练集,对初始全卷积神经网络进行训练得到已训练的全卷积神经网络。
即训练集中的腹部图像是输入到卷积层进行卷积处理,卷积层进行卷积处理后将得到的结果输出至分类层,全卷积神经网络的卷积层对腹部图像进行卷积后得到的卷积结果为特征图(即卷积图像),分类层进行分类可得到腹部图像对应的训练像素分类标签图。
其中,在基于第一训练集训练初始全卷积神经网络过程中,根据第一训练误差对初始全卷积神经网络进行更新,第一训练误差,根据第一训练集中第一样本腹部图像输入初始全卷积神经网络,从分类层输出的训练像素分类标签图,与第一样本腹部图像的像素分类标签图确定。
在基于第二训练集训练初始全卷积神经网络过程中,根据第二训练误差对初始全卷积神经网络进行更新,第二训练误差,根据第二训练集中第二样本腹部图像输入初始全卷积神经网络,从卷积层输出的训练卷积图像通过全连接层后的输出,与第二样本腹部图像中属于各类别的像素数量确定。
卷积神经网络中包括各参数,例如,权重以及偏置等,训练过程即是对这些参数的不断更新,从而更新卷积网络。可以理解,对初始全卷积神经网络进行更新,即是对初始全卷积神经网络中的参数进行更新。训练完成后,得到的参数是最新的,即已训练的卷积神经网络中的参数是最新的。
由于第一训练集和第二训练集中内容不同,则在训练过程中,卷积神经网络更新所依据的数据不同。例如,在利用第一训练集训练初始全卷积神经网络过程中,根据第一训练误差对初始全卷积神经网络进行更新。由于第一训练集中包括第一样本腹部图像对应的像素分类标签图,即每个像素的分类标签,初始全卷积神经网络对第一样本腹部图像进行卷积处理并分类后可得到训练像素分类标签图,然而,训练像素分类标签图与第一样本腹部图像对应的像素分类标签图可能存在差异,第一训练误差根据第一训练集中第一样本腹部图像输入初始全卷积神经网络,从分类层输出的训练像素分类标签图,与第一样本腹部图像的像素分类标签图确定,第一训练误差即表示训练像素分类标签图和像素分类标签图之间存在的差异。
而在利用第二训练集训练初始全卷积神经网络过程中,根据第二训练误差对初始全卷积神经网络进行更新。由于第二训练集中包括第二样本腹部图像以及第二样本腹部图像中像素属于各类别的像素数量,初始全卷积神经网络对第二样本腹部图像进行卷积处理后的卷积结果通过全连接层,全连接层对卷积结果处理后的输出,与第二样本腹部图像中像素属于各类别的数量可能存在差异,第二训练误差根据第二训练集中第二样本腹部图像输入初始全卷积神经网络,从卷积层输出训练卷积图像,训练卷积图像作为全连接层的输入,通过全连接层输出的第二样本腹部图像属于各类别的训练像素数量与第二样本腹部图像中属于各类别的像素数量可能存在差异,第二训练误差即表示属于各类别的训练像素数量和属于各类别的像素数量之间存在的差异。
在一个示例中,全连接层包括三个节点,与类别的数量对应,每个节点分别与对应的类别对应,全连接层可输出三个数值,即对应为属于各类别的像素数量。
在一个实施例中,基于第一训练集以及第二训练集,对初始全卷积神经网络进行训练得到已训练的全卷积神经网络的步骤,包括:基于第一训练集确定各第一训练子集,基于第二训练集确定各第二训练子集;轮流从各第一训练子集以及各第二训练子集中选择一个未训练过的标准训练子集,基于各标准训练子集对初始全卷积神经网络进行训练,得到已训练的全卷积神经网络。
其中,相邻两轮选择的标准训练子集分别来自不同训练集,可以理解,相邻两轮中前一轮选择的标准训练子集来自第一训练集时(即来自各第一训练子集),后一轮选择的标准子集来自第二训练集(即来自各第二训练子集),相邻两轮中前一轮选择的标准训练子集来自第二训练集(即来自各第二训练子集),后一轮选择的标准子集来自各第一训练集(即来自各第一训练子集)。
即在训练过程中,轮流使用第一训练子集和第二训练子集进行训练,而不是一直不间断地采用各第一训练子集进行训练或不间断地采用各第二训练子集进行训练。例如,可首先采用各第一训练子集中的一个训练子集进行训练,然后采用各第二训练子集中的一个训练子集进行训练,再利用各第一训练子集中的一个未训练过的训练子集进行训练,再利用各第二训练子集中的一个未训练过的训练子集进行训练,如此轮流循环选择,实现对初始全卷积神经网络的训练。
在一个实施例中,得到已训练的全卷积神经网络的方式,包括:从各第一训练子集中选择一个训练子集作为标准训练子集;基于标准训练子集对初始全卷积神经网络进行训练,更新初始全卷积神经网络;在未满足网络训练停止条件时,从第一训练集和第二训练集中,标准训练子集所属训练集外的训练集中选择一个未训练过的训练子集作为标准训练子集,并返回基于标准训练子集对初始全卷积神经网络进行训练,更新初始全卷积神经网络的步骤,直到满足网络训练停止条件,将更新的初始全卷积神经网络作为已训练的全卷积神经网络。
例如,第一训练集中各第一训练子集包括J11、J12、J13和J14,第二训练集中各第二训练子集包括J21、J22、J23和J24。首先,可从训练子集J11、J12、J13和J14中任意选择一个训练子集作为标准训练子集,比如,可选择训练子集J11作为标准训练子集,J11中包括各第一样本用户中至少部分第一样本腹部图像以及对应的像素分类标签图,然后将J11输入初始全卷积神经网络进行训练,即对初始全卷积神经网络进行更新。此时,未满足网络训练停止条件,则需要从第一训练集和第二训练集中,J11所属的第一训练集之外的第二训练集中选择一个未训练过的训练子集作为标准训练子集,即更新标准训练子集。比如,可从训练子集J21、J22、J23和J24中任意选择一个未训练过的训练子集作为标准训练子集,比如,可选择J21作为新的标准训练子集,利用更新后的标准训练子集J21再对已更新的初始全卷积神经网络进行训练,再次更新初始全卷积神经网络。此时,满足网络训练停止条件,则需要从第一训练集和第二训练集中,J21所属的第二训练集之外的第一训练集中选择一个未训练过的训练子集作为标准训练子集,即更新标准训练子集。已利用J11训练过,则可从J12、J13和J14任意选择一个训练子集作为标准训练子集,比如,可选择J12作为新的标准训练子集,利用更新后的标准训练子集J12再对已更新的初始全卷积神经网络进行训练,再次更新初始全卷积神经网络。如此,循环选择训练子集进行网络训练,在满足网络训练停止条件时,停止训练,此时得到的已更新的初始全卷积神经网络即为上述已训练的全卷积神经网络。
在一个示例中,迭代次数超过预设次数时,表示满足网络训练停止条件,其中,初始时(即还未开始训练),迭代次数为零,在一个标准训练子集对初始全卷积神经网络训练完成后,迭代次数增一。
在一个实施例中,更新初始全卷积神经网络之后,在未满足网络训练停止条件之前,还包括:对标准训练子集标记为已训练;在各第一训练子集分别标记为已训练时,将各第一训练子集分别标记为未训练;在各第二训练子集分别标记为已训练时,将各第二训练子集分别标记为未训练。
由于在满足训练停止条件之前,各第一训练子集或各第二训练子集均已对网络进行了训练,即各第一训练子集均已被标记为已训练或各第二训练子集均已被标记为已训练,但此时还未满足训练停止条件,即不停止训练,则需要在各第一训练子集分别标记为已训练时,将各第一训练子集分别标记为未训练,在各第二训练子集分别标记为已训练时,将各第二训练子集分别标记为未训练。如此,可在满足训练停止条件之前,确保能有训练子集可选,从而确保对网络的正常训练。
例如,在各第一训练子集中J11、J12、J13和J14均已对网络进行了训练,即均标记为已训练,此时,在各第一训练子集中没有未训练的训练子集可选,影响下一步的网络正常训练,则可将J11、J12、J13和J14重新标记为未训练,则可从中任意选一个作为标准训练子集进行下一步的网络训练。在各第二训练子集中J21、J22、J23和J24均已对网络进行了训练,即均标记为已训练,此时,在各第二训练子集中没有未训练的训练子集可选,影响下一步的网络正常训练,则可将J21、J22、J23和J24重新标记为未训练,则可从中任意选一个作为标准训练子集进行下一步的网络训练。直到满足网络训练停止条件。
在一个实施例中,在未满足网络训练停止条件时,且在从所述第一训练集和所述第二训练集中,所述标准训练子集所属训练集外的训练集中选择一个未训练过的训练子集作为标准训练子集之前,还 包括:获取初始全卷积神经网络的训练误差;在训练误差大于预设误差时,调整所述初始全卷积神经网络的学习率。其中,在标准训练子集为第一训练子集时,训练误差可以为上述各第一样本腹部图像对应的第一训练误差之和,即该训练误差为各第一训练误差之和。在标准训练子集为第二训练子集时,训练误差可以为上述各第二样本腹部图像对应的第二训练误差之和,即该训练误差为各第二训练误差之和。即在本实施例中,在网络训练过程中,还可根据误差调整网络的学习率,使网络训练得更加准确。
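A small sketch of the learning-rate adjustment this paragraph describes: when the round's training error (the sum of the first or second training errors over the current standard training subset) stays above the preset error, the learning rate is adjusted. The multiplicative factor below is an assumption; the text only states that the rate is adjusted.

```python
def maybe_adjust_lr(optimizer, training_error, preset_error, factor=0.1):
    """Reduce the learning rate of a PyTorch optimizer when the training
    error of the round exceeds the preset error."""
    if training_error > preset_error:
        for group in optimizer.param_groups:
            group["lr"] *= factor
```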
在一个实施例中,获取第一训练集的方式,包括:获取各第一样本原腹部灰度图像以及各第一样本原腹部灰度图像对应的像素分类标签图;对各所述第一样本原腹部灰度图像进行变换,获得各第一灰度变换图像,对各所述第一样本原腹部图像对应的像素分类标签图进行相同变换,获得各所述第一灰度变换图像对应的像素分类标签变换图;基于各所述第一样原本腹部灰度图像、各所述第一样本原腹部灰度图像对应的像素分类标签图、各所述第一灰度变换图像以及各所述第一灰度变换图像对应的像素分类标签变换图,生成所述第一训练集。
即对第一样本原腹部灰度图像对应的像素分类标签图进行与对第一样本原腹部灰度图像进行相同变换,确保像素分类标签变换图与第一灰度变换图像对应。可以理解,在本实施例中,上述第一训练集中各第一样本腹部图像包括各所述第一样本原腹部灰度图像和各第一灰度变换图像。上述变换可以包括翻转或旋转,如此,可在第一样本原腹部灰度图像的基础上,增加训练样本图像的数量。在本实施例中,第一样本原腹部灰度图像为mri图像。
在一个实施例中,生成所述第一训练集的方式,包括:分别获取各所述第一样本原腹部灰度图像在各颜色通道上的第一通道图像以及各所述第一灰度变换图像在各颜色通道上的第二通道图像;对各第一通道图像以及各第二图通道像分别进行归一化,确定各第一归一化通道图像以及各第二归一化通道图像;基于各第一归一化通道图像以及各第二归一化通道图像,生成第一训练集。
在本实施例中,各颜色通道可以包括R颜色通道、G颜色通道和B颜色通道。可以理解,在本实施例中,生成的第一训练集中各第一样本腹部图像包括各第一样本腹部灰度图像分别在各颜色通道上的第一归一化图像以及各第一灰度变换图像分别在各颜色通道上的第二归一化图像。在一个示例中,可根据预设的方差以及均值,对各第一通道图像以及各第二图通道像分别进行归一化,以确保得到的各第一归一化通道图像以及各第二归一化通道图像中的像素值满足预设要求。即在本实施例中,第一训练集中各第一样本腹部图像包括各各第一归一化通道图像以及各第二归一化通道图像。
在一个实施例中,获取第二训练集的方式,包括:获取各第二样本原腹部灰度图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量;对各所述第二样原腹部灰度图像进行变换,获得各第二灰度变换图像;基于各所述第二样本原腹部灰度图像、各所述第二灰度变换图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量,生成所述第二训练集。
即生成的第二训练集中各第二样本腹部图像包括各第二样本原腹部灰度图像以及对应的各第二灰度变换图像。
在一个实施例中,生成所述第二训练集的方式,包括:分别获取各所述第二样本原腹部灰度图像在各颜色通道上的第三通道图像以及各所述第二灰度变换图像在各颜色通道上的第四通道图像;对各第二通道图像以及各第二图通道像分别进行归一化,确定各第三归一化通道图像以及各第四归一化通道图像;基于各第三归一化通道图像以及各第四归一化通道图像,生成第二训练集。
即在本实施例中,第二训练集中各第二样本腹部图像包括各第三归一化通道图像以及各第四归一化通道图像。
在一个实施例中,基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络之前,还包括:基于第一训练集中的各第一样本腹部图像,确定各第一样本腹部图像对应的各方格;将第一样本腹部图像对应的各四个方格的交点作为各可移动点;对各可移动点进行移动,更新各方格,得到各四边形;分别对第一样本腹部图像在各四边形内的区域进行仿射变换,得到各仿射子图;对各仿射子图进行拼接,得到更新后的第一样本腹部图像;基于各更新后的第一样本腹部图像更新第一训练集。
对第一样本腹部图像进行切换可确定各方格。在本实施例中,可基于沿图像的行方向的各行切分线以及沿图像的列方向的各列切分线,对第一样本腹部图像进行切分,得到各方格(即正方形格)。即各方格组成的大小与第一样本腹部图像大小相同。可以理解,行切分线可以为第一样本腹部图像中一行,列切分线为第一样本腹部图像中一列。
相邻两个方格之间存在重合线,属于行切分线或列切分线。如此,在四个方格存在交点时,该交点一定属于四个方格中两两之间重合线中的一点,具体为四个方格中两两之间重合线中的交点。将第一样本腹部图像中的各交点作为各可移动点,然后基于预设移动规则对各可移动点其进行移动,各交点移动,则各方格对应的形状也随之改变,即移动交点过程中,可实现对各方格形状的更新,得到各四边形。在一个示例中,移动为随机移动,且移动的距离在预设距离范围内,即预设移动规则为在预设距离范围内移动的规则。例如,相邻两个可移动点之间的距离为100,每个可移动点可以移动的范围为30,即预设距离范围即表示与可移动点的距离为30的范围。
每一个第一样本腹部图像均进行上述过程,则实现对每一个第一样本腹部图像的各方格的更新。然后,分别对第一样本腹部图像在各四边形内的区域进行仿射变换,得到各仿射子图,实现图像数据的增广,并将各第一样本腹部图像对应的各仿射子图进行拼接,得到各更新后的第一样本腹部图像,基于各更新后的第一样本腹部图像更新第一训练集,即更新后的第一训练集中包括各更新后的第一样本腹部图像,后续利用更新后的第一训练集进行训练。
在一个实施例中,基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络之前,还包括基于第二训练集中的各第二样本腹部图像,确定各第二样本腹部图像对应的各切分方格;将第二样本腹部图像对应的各四个切分方格的交点作为各可移动交点;
对各可移动交点进行移动,更新各切分方格,得到各切分四边形;
分别对第二样本腹部图像在各切分四边形内的区域进行仿射变换,得到各腹部仿射子图;
对各腹部仿射子图进行拼接,得到更新后的第二样本腹部图像;
基于各更新后的第二样本腹部图像更新第二训练集。
对第二样本腹部图像进行切换可确定各切分方格,过程与上述对第一样本腹部图像进行切分过程类似。更新后的第二训练集包括各更新后的第二样本腹部图像更新,后续利用更新后的第二训练集进行训练。
下面以一具体实施例对上述腹部图像分割方法加以具体说明。具体过程如下:
可将图像按照7:2:1的比例分为训练图像、验证图像以及测试图像。
在对待测腹部图像进行分割过程中,需要利用已训练的全卷积神经网络,确定已训练的全卷积神经网络的过程如下:
首先,对图像进行预处理,即对原图像进行随机翻转或旋转,并将图像切成8*8的方格,通过移动方格中49个交点(每四个方格的交点),各方格变成凸四边形,再对图像对应在各凸四边形内的像素进行仿射变换。目的在于针对于腹内脂肪的图像,纹理信息的提取是很重要的,但是由于数据量的缺失,仅仅对图像做线性的变换并不利于提取纹理信息,适当的做一些变换可以起到数据增广的作用。如图2所示,图2(a)为一个像素分类标签图,图2(b)为对2(a)中像素分类标签图在各凸四边形内的区域进行仿射变换后的图像。另外,还可对上述变换后的图像由单通道变成三通道(颜色通道),给出方差和均值,将每一个通道的图像的像素值都归一化。最后生成第一训练集和第二训练集,所述第一训练集包括各第一样本腹部图像(例如,300张)以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像(例如,9000张)以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
然后,可初始得到初始全卷积神经网络,在初始全卷积神经网络中,网络配置参数为:学习率为0.001,学习率采用逐步下降策略,配置参数gamma为0.1,stepsize(步长)为1500,冲量momentum设置为0.9,最大迭代次数(即预设迭代次数)为50000次,每次训练输入32张图像作为一个序列进行学习。本实施例中,训练代码中采用pytorch深度学习框架,加入DRML公开代码中的多标签输入 层和多标签sigmoid交叉熵函数层,在Ubuntu系统中编译,配置好算法网络参数,进行训练。
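Restated as a PyTorch sketch, the configuration given here (learning rate 0.001 with step decay, gamma 0.1, step size 1500, momentum 0.9, at most 50000 iterations, 32 images per training input) might look as follows; the choice of SGD and the names of the model, loader, and loss function are assumptions for illustration.

```python
import torch

def train(model, loader, compute_loss, max_iter=50000):
    # lr, momentum, gamma, and step size follow the configuration in this embodiment.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1500, gamma=0.1)
    for iteration, batch in enumerate(loader):  # loader yields 32-image sequences
        if iteration >= max_iter:               # maximum number of iterations
            break
        optimizer.zero_grad()
        loss = compute_loss(model, batch)       # one of the two training errors
        loss.backward()
        optimizer.step()
        scheduler.step()
    return model
```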
如图3所示,为全卷积神经网络的原理图,在全卷积神经网络网络的左边经过四次卷积、池化和归一化操作之后,图像逐渐减小。在后边,经过解卷积,池化和归一化操作,图片逐渐增大。网络在每一次解卷积之前,会将上一层传过来的图片和以前同样大小的卷积图片做一个在通道上的连接。这样做的目的是为了对语义分割在提取细节特征的同时还能兼顾到总体信息。在最后一层卷积层之后,连接softmax层(分类层),另外还可连接一个全连接层,输出有三个神经元(节点),分别对应着皮下脂肪、腹内脂肪以及背景的像素数量。
具体地,在全卷积神经网络的左边经过四次卷积、池化和归一化操作之后,图片逐渐减小。在右边,经过解卷积、池化和归一化操作,图片逐渐增大。全卷积神经网络在每一次解卷积之前,会将上一层传过来的图像和以前同样大小的卷积图片做一个在通道上的连接。这样做的目的是为了对语义分割在提取细节特征的同时还能兼顾到总体信息。
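The channel-wise connection described here (concatenating, before each deconvolution step, the upsampled feature map with the same-sized feature map from the contracting side, so that detail features and overall information are both available) is the familiar U-Net pattern. A hypothetical decoder block might look like this, with all channel counts illustrative:

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One decoder step: deconvolve, then concatenate the same-sized
    encoder feature map along the channel dimension before convolving."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(),
        )

    def forward(self, x, skip):
        x = self.up(x)                    # feature map grows back in size
        x = torch.cat([x, skip], dim=1)   # channel-wise connection with the encoder map
        return self.conv(x)
```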
其次,确定第一训练集对应的各第一训练子集,确定第二训练集对应的各第二训练子集。具体地,可从第一训练集中选取预设数量(例如,32)个第一样本腹部图像以及对应的像素分类标签图作为一个第一训练子集,各第一训练子集的并集为第一训练集,且交集为空。可从第二训练集中选取预设数量个第二样本腹部图像以及其像素属于各类别的像素数量作为一个第二训练子集,各第二训练子集的并集为第二训练集,且交集为空。轮流从第一训练集和第二训练集中选取训练子集作为标准训练子集,并将其输入到已经架构好的初始全卷积神经网络(unet)中进行训练,直到满足网络训练停止条件。初始全卷积神经网络训练过程中,包括前向传播、计算误差以及反向传播过程,每一次前向传播包括卷积、池化和归一化步骤。
在一个训练子集对网络训练完成后,即更新初始全卷积神经网络,且迭代次数增一。此时,可利用验证图像对更新后的初始全卷积神经网络进行验证,如果验证结果高于之前训练后更新的初始全卷积神经网络对验证图像的训练结果,则将该更新后的初始全卷积神经网络保存下来,可用于以后的测试图像的测试。
如果迭代次数达到预设迭代次数,表示满足网络训练停止条件,则训练结束,可对测试图像进行测试。如果未达到,则需检测本次迭代之后(即针对本次标准训练子集训练完毕之后),全卷积神经网络的训练误差是否有效减小,即是否小于或等于预设误差,若是,则表示有效减小,若否,则标识误差没有有效减小,此时,可对网络的学习率经营调整。后续可利用测试图像对已训练的全卷积神经网络进行测试得到测试结果,并根据测试结果确定已训练的全卷积神经网络的分割准确率。
在上述腹部图像分割方法中,通过将图片切成各方格,通过移动方格中的每四个方格的交点,把每一个方格中的图像进行仿射变换,实现数据增广,有利于全卷积神经网络提取图像的纹理特征。现有方案不能做到每一个像素对应的学习,他们需要把每一个像素周围的一整同时输入,但这样就会忽略整体图像对这个像素带来的影响。在本实施例中,将分割任务运用到unet中。通过端到端的学习,以及反向传播修改参数的方式使网络自适应的学习到图片的特征。
而且,本实施例结合原有的神经网络中对两种同样风格却不同标签的数据进行联合协同学习。对于像素级别的图像标注,需要大量的人力和时间,如此,本实施例采用全卷积神经网络前面特征提取部分的架构(即卷积和池化的架构),对后端特征组合部分进行多标签训练(对应分类层以及全连接层),可以使神经网络收敛良好。另外,对于不同人群(偏瘦,正常,肥胖)不用单独训练网络,训练的全卷积神经网络可以良好匹配现有数据,并且得到较高的准确率。
在对待测腹部图像进行分割过程中,利用已训练的全卷积神经网络对其进行分割,可提高分割准确性。如下表1所示,为采用本实施例的分割方法对各待测腹部图像进行分割的准确率结果。
表1
  皮下脂肪 腹内脂肪
有像素级标签(具有像素分类标签图) 94.3% 90.8%
只有像素数量标签(具有属于各类别的像素数量) 93.6% 88.7%
在一个实施例中,提供了一种腹部图像分割装置,该装置包括:待测图像获取模块和分割图像确 定模块,其中:
待测图像获取模块,用于获取待测腹部图像;
分割图像确定模块,用于基于已训练的全卷积神经网络,对所述待测腹部图像中各像素进行分类,确定所述待测腹部图像对应的分割图像;其中,所述已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,所述第一训练集包括各第一样本腹部图像以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
在一个实施例中,上述装置还包括:
训练集获取模块,用于获取所述第一训练集以及所述第二训练集;
初始化模块,用于初始化得到初始全卷积神经网络,所述初始全卷积神经网络包括卷积层和分类层;
训练模块,用于基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络;
其中,在基于所述第一训练集训练所述初始全卷积神经网络过程中,根据第一训练误差对所述初始全卷积神经网络进行更新,所述第一训练误差,根据所述第一训练集中第一样本腹部图像输入所述初始全卷积神经网络,从所述分类层输出的训练像素分类标签图,与所述第一样本腹部图像的像素分类标签图确定;
在基于第二训练集训练所述初始全卷积神经网络过程中,根据第二训练误差对所述初始全卷积神经网络进行更新,所述第二训练误差,根据所述第二训练集中第二样本腹部图像输入所述初始全卷积神经网络,从所述卷积层输出的训练卷积图像通过全连接层后的输出,与所述第二样本腹部图像中属于各所述类别的像素数量确定。
在一个实施例中,所述训练模块,包括:
子集确定模块,用于基于所述第一训练集确定各第一训练子集,基于所述第二训练集确定各第二训练子集;
神经网络训练模块,用于轮流从各所述第一训练子集以及各所述第二训练子集中选择一个未训练过的标准训练子集,基于各所述标准训练子集对所述初始全卷积神经网络进行训练,得到所述已训练的全卷积神经网络;其中,相邻两轮选择的标准训练子集分别来自不同训练集。
在一个实施例中,所述神经网络训练模块,包括:
选择模块,用于从各所述第一训练子集中选择一个训练子集作为标准训练子集;
更新模块,用于基于所述标准训练子集对所述初始全卷积神经网络进行训练,更新所述初始全卷积神经网络;
已训练的全卷积神经网络确定模块,还用于在未满足网络训练停止条件时,从所述第一训练集和所述第二训练集中,所述标准训练子集所属训练集外的训练集中选择一个未训练过的训练子集作为标准训练子集,并返回所述更新模块执行基于所述标准训练子集对所述初始全卷积神经网络进行训练,更新所述初始全卷积神经网络的步骤,直到满足所述网络训练停止条件,将更新的所述初始全卷积神经网络作为所述已训练的全卷积神经网络。
在一个实施例中,上述装置还包括:
标准标记模块,用于更新模块更新所述初始全卷积神经网络之后,在未满足网络训练停止条件之前,对所述标准训练子集标记为已训练;
第一子集标记模块,用于在各所述第一训练子集分别标记为已训练时,将各所述第一训练子集分别标记为未训练;
第二子集标记模块,用于在各所述第二训练子集分别标记为已训练时,将各所述第二训练子集分别标记为未训练。
在一个实施例中,所述训练集获取模块,包括:
第一图像获取模块,用于获取各第一样本原腹部灰度图像以及各第一样本原腹部灰度图像对应的 像素分类标签图;
第一图像变换模块,用于对各所述第一样本原腹部灰度图像进行变换,获得各第一灰度变换图像,对各所述第一样本原腹部图像对应的像素分类标签图进行相同变换,获得各所述第一灰度变换图像对应的像素分类标签变换图;
第一训练集生成模块,用于基于各所述第一样本原腹部灰度图像、各所述第一样本原腹部灰度图像对应的像素分类标签图、各所述第一灰度变换图像以及各所述第一灰度变换图像对应的像素分类标签变换图,生成所述第一训练集。
在一个实施例中,第一训练集生成模块,包括:
第一通道图像获取模块,用于分别获取各所述第一样本原腹部灰度图像在各颜色通道上的第一通道图像以及各所述第一灰度变换图像在各颜色通道上的第二通道图像;
第一归一化模块,用于对各第一通道图像以及各第二图通道像分别进行归一化,确定各第一归一化通道图像以及各第二归一化通道图像;
第一训练集确定模块,用于基于各第一归一化通道图像以及各第二归一化通道图像,生成第一训练集。
在一个实施例中,所述训练集获取模块,包括:
第二图像获取模块,用于获取各第二样本原腹部灰度图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量;
第二图像变换模块,用于对各所述第二样原腹部灰度图像进行变换,获得各第二灰度变换图像;
第二训练集生成模块,用于基于各所述第二样本原腹部灰度图像、各所述第二灰度变换图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量,生成所述第二训练集。
在一个实施例中,上述装置还包括:
方格确定模块,用于训练模块执行基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络之前,基于第一训练集中的各第一样本腹部图像,确定各第一样本腹部图像对应的各方格;
可移动点确定模块,用于将第一样本腹部图像对应的各四个方格的交点作为各可移动点;
四边形确定模块,用于对各可移动点进行移动,更新各方格,得到各四边形;
第一仿射变换模块,用于分别对第一样本腹部图像在各四边形内的区域进行仿射变换,得到各仿射子图;
第一拼接模块,用于对各仿射子图进行拼接,得到更新后的第一样本腹部图像;
第一更新模块,用于基于各更新后的第一样本腹部图像更新第一训练集。
在一个实施例中,上述装置还包括:
切分方格确定模块,用于训练模块执行基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络之前,基于第二训练集中的各第二样本腹部图像,确定各第二样本腹部图像对应的各切分方格;
可移动交点确定模块,用于将第二样本腹部图像对应的各四个切分方格的交点作为各可移动交点;
切分四边形确定模块,用于对各可移动交点进行移动,更新各切分方格,得到各切分四边形;
第二仿射变换模块,用于分别对第二样本腹部图像在各切分四边形内的区域进行仿射变换,得到各腹部仿射子图;
第二拼接模块,用于对各腹部仿射子图进行拼接,得到更新后的第二样本腹部图像;
第二更新模块,用于基于各更新后的第二样本腹部图像更新第二训练集。
对第二样本腹部图像进行切换可确定各切分方格,过程与上述对第一样本腹部图像进行切分过程类似。更新后的第二训练集包括各更新后的第二样本腹部图像更新,后续利用更新后的第二训练集进行训练。
在一个实施例中,本申请提供的腹部图像分割装置可以实现为一种计算机程序的形式,该计算机程序可在如图4所示的计算机设备上运行,所述计算机设备的非易失性存储介质可存储组成该腹部图 像分割装置的各个程序模块,比如,待测图像获取模块和分割图像确定模块。各个程序模块中包括计算机可读指令,所述计算机可读指令用于使所述计算机设备执行本说明书中描述的本申请各个实施例的腹部图像分割方法中的步骤,例如,计算机设备可以通过待测图像获取模块获取待测腹部图像,再通过分割图像确定模块基于已训练的全卷积神经网络,对所述待测腹部图像中各像素进行分类,确定所述待测腹部图像对应的分割图像;其中,所述已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,所述第一训练集包括各第一样本腹部图像以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
图4示出了一个实施例中计算机设备的内部结构图。如图4所示,该计算机设备包括通过系统总线连接的处理器、存储器和网络接口。其中,存储器包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质存储有操作系统,还可存储有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器实现上述腹部图像分割方法。该内存储器中也可储存有计算机可读指令,该计算机可读指令被处理器执行时,可使得处理器执行腹部图像分割方法。在一个示例中,计算机设备还可以包括输入装置和显示屏,计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏,计算机设备的输入装置可以是显示屏上覆盖的触摸层,也可以是计算机设备外壳上设置的按键、轨迹球或触控板,还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图4中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机可读指令,计算机可读指令被处理器执行时,使得处理器执行如下步骤:获取待测腹部图像;基于已训练的全卷积神经网络,对所述待测腹部图像中各像素进行分类,确定所述待测腹部图像对应的分割图像;其中,所述已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,所述第一训练集包括各第一样本腹部图像以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
在一个实施例中,确定已训练的全卷积神经网络的方式包括:获取第一训练集以及第二训练集;初始化得到初始全卷积神经网络,初始全卷积神经网络包括卷积层和分类层;基于第一训练集以及第二训练集,对初始全卷积神经网络进行训练得到已训练的全卷积神经网络。
其中,在基于所述第一训练集训练所述初始全卷积神经网络过程中,根据第一训练误差对所述初始全卷积神经网络进行更新,所述第一训练误差,根据所述第一训练集中第一样本腹部图像输入所述初始全卷积神经网络,从所述分类层输出的训练像素分类标签图,与所述第一样本腹部图像的像素分类标签图确定。
在基于第二训练集训练所述初始全卷积神经网络过程中,根据第二训练误差对所述初始全卷积神经网络进行更新,所述第二训练误差,根据所述第二训练集中第二样本腹部图像输入所述初始全卷积神经网络,从所述卷积层输出的训练卷积图像通过全连接层后的输出,与所述第二样本腹部图像中属于各所述类别的像素数量确定。
在一个实施例中,所述基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络的步骤,包括:基于所述第一训练集确定各第一训练子集,基于所述第二训练集确定各第二训练子集;轮流从各所述第一训练子集以及各所述第二训练子集中选择一个未训练过的标准训练子集,基于各所述标准训练子集对所述初始全卷积神经网络进行训练,得到所述已训练的全卷积神经网络。其中,相邻两轮选择的标准训练子集分别来自不同训练集。
在一个实施例中,所述得到所述已训练的全卷积神经网络的方式,包括:从各所述第一训练子集中选择一个训练子集作为标准训练子集;基于所述标准训练子集对所述初始全卷积神经网络进行训练,更新所述初始全卷积神经网络;在未满足网络训练停止条件时,从所述第一训练集和所述第二训练集中,所述标准训练子集所属训练集外的训练集中选择一个未训练过的训练子集作为标准训练子集,并 返回基于所述标准训练子集对所述初始全卷积神经网络进行训练,更新所述初始全卷积神经网络的步骤,直到满足所述网络训练停止条件,将更新的所述初始全卷积神经网络作为所述已训练的全卷积神经网络。
在一个实施例中,计算机可读指令被处理器执行时使得处理器执行更新所述初始全卷积神经网络之后,在未满足网络训练停止条件之前,还使得处理器执行:对所述标准训练子集标记为已训练;在各所述第一训练子集分别标记为已训练时,将各所述第一训练子集分别标记为未训练;在各所述第二训练子集分别标记为已训练时,将各所述第二训练子集分别标记为未训练。
在一个实施例中,所述获取所述第一训练集的方式,包括:获取各第一样本原腹部灰度图像以及各第一样本原腹部灰度图像对应的像素分类标签图;对各所述第一样本原腹部灰度图像进行变换,获得各第一灰度变换图像,对各所述第一样本原腹部图像对应的像素分类标签图进行相同变换,获得各所述第一灰度变换图像对应的像素分类标签变换图;基于各所述第一样本原腹部灰度图像、各所述第一样本原腹部灰度图像对应的像素分类标签图、各所述第一灰度变换图像以及各所述第一灰度变换图像对应的像素分类标签变换图,生成所述第一训练集。
在一个实施例中,生成所述第一训练集的方式,包括:分别获取各所述第一样本原腹部灰度图像在各颜色通道上的第一通道图像以及各所述第一灰度变换图像在各颜色通道上的第二通道图像;对各第一通道图像以及各第二图通道像分别进行归一化,确定各第一归一化通道图像以及各第二归一化通道图像;基于各第一归一化通道图像以及各第二归一化通道图像,生成第一训练集。
在一个实施例中,所述获取所述第二训练集的方式,包括:获取各第二样本原腹部灰度图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量;对各所述第二样原腹部灰度图像进行变换,获得各第二灰度变换图像;基于各所述第二样本原腹部灰度图像、各所述第二灰度变换图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量,生成所述第二训练集。
在一个实施例中,计算机可读指令被处理器执行时使得处理器执行基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络之前,还使得处理器执行:基于第一训练集中的各第一样本腹部图像,确定各第一样本腹部图像对应的各方格;将第一样本腹部图像对应的各四个方格的交点作为各可移动点;对各可移动点进行移动,更新各方格,得到各四边形;分别对第一样本腹部图像在各四边形内的区域进行仿射变换,得到各仿射子图;对各仿射子图进行拼接,得到更新后的第一样本腹部图像;基于各更新后的第一样本腹部图像更新第一训练集。
在一个实施例中,计算机可读指令被处理器执行时使得处理器执行基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络之前,还使得处理器执行:基于第二训练集中的各第二样本腹部图像,确定各第二样本腹部图像对应的各切分方格;将第二样本腹部图像对应的各四个切分方格的交点作为各可移动交点;对各可移动交点进行移动,更新各切分方格,得到各切分四边形;分别对第二样本腹部图像在各切分四边形内的区域进行仿射变换,得到各腹部仿射子图;对各腹部仿射子图进行拼接,得到更新后的第二样本腹部图像;基于各更新后的第二样本腹部图像更新第二训练集。
对第二样本腹部图像进行切换可确定各切分方格,过程与上述对第一样本腹部图像进行切分过程类似。更新后的第二训练集包括各更新后的第二样本腹部图像更新,后续利用更新后的第二训练集进行训练。
一种存储有计算机可读指令的存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行如下步骤:获取待测腹部图像;
基于已训练的全卷积神经网络,对所述待测腹部图像中各像素进行分类,确定所述待测腹部图像对应的分割图像;其中,所述已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,所述第一训练集包括各第一样本腹部图像以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
在一个实施例中,在一个实施例中,确定已训练的全卷积神经网络的方式包括:获取第一训练集 以及第二训练集;初始化得到初始全卷积神经网络,初始全卷积神经网络包括卷积层和分类层;基于第一训练集以及第二训练集,对初始全卷积神经网络进行训练得到已训练的全卷积神经网络。
其中,在基于所述第一训练集训练所述初始全卷积神经网络过程中,根据第一训练误差对所述初始全卷积神经网络进行更新,所述第一训练误差,根据所述第一训练集中第一样本腹部图像输入所述初始全卷积神经网络,从所述分类层输出的训练像素分类标签图,与所述第一样本腹部图像的像素分类标签图确定。
在基于第二训练集训练所述初始全卷积神经网络过程中,根据第二训练误差对所述初始全卷积神经网络进行更新,所述第二训练误差,根据所述第二训练集中第二样本腹部图像输入所述初始全卷积神经网络,从所述卷积层输出的训练卷积图像通过全连接层后的输出,与所述第二样本腹部图像中属于各所述类别的像素数量确定。
在一个实施例中,所述基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络的步骤,包括:基于所述第一训练集确定各第一训练子集,基于所述第二训练集确定各第二训练子集;轮流从各所述第一训练子集以及各所述第二训练子集中选择一个未训练过的标准训练子集,基于各所述标准训练子集对所述初始全卷积神经网络进行训练,得到所述已训练的全卷积神经网络。其中,相邻两轮选择的标准训练子集分别来自不同训练集。
在一个实施例中,所述得到所述已训练的全卷积神经网络的方式,包括:从各所述第一训练子集中选择一个训练子集作为标准训练子集;基于所述标准训练子集对所述初始全卷积神经网络进行训练,更新所述初始全卷积神经网络;在未满足网络训练停止条件时,从所述第一训练集和所述第二训练集中,所述标准训练子集所属训练集外的训练集中选择一个未训练过的训练子集作为标准训练子集,并返回基于所述标准训练子集对所述初始全卷积神经网络进行训练,更新所述初始全卷积神经网络的步骤,直到满足所述网络训练停止条件,将更新的所述初始全卷积神经网络作为所述已训练的全卷积神经网络。
在一个实施例中,计算机可读指令被处理器执行时使得处理器执行更新所述初始全卷积神经网络之后,在未满足网络训练停止条件之前,还使得处理器执行:对所述标准训练子集标记为已训练;在各所述第一训练子集分别标记为已训练时,将各所述第一训练子集分别标记为未训练;在各所述第二训练子集分别标记为已训练时,将各所述第二训练子集分别标记为未训练。
在一个实施例中,所述获取所述第一训练集的方式,包括:获取各第一样本原腹部灰度图像以及各第一样本原腹部灰度图像对应的像素分类标签图;对各所述第一样本原腹部灰度图像进行变换,获得各第一灰度变换图像,对各所述第一样本原腹部图像对应的像素分类标签图进行相同变换,获得各所述第一灰度变换图像对应的像素分类标签变换图;基于各所述第一样本原腹部灰度图像、各所述第一样本原腹部灰度图像对应的像素分类标签图、各所述第一灰度变换图像以及各所述第一灰度变换图像对应的像素分类标签变换图,生成所述第一训练集。
在一个实施例中,生成所述第一训练集的方式,包括:分别获取各所述第一样本原腹部灰度图像在各颜色通道上的第一通道图像以及各所述第一灰度变换图像在各颜色通道上的第二通道图像;对各第一通道图像以及各第二图通道像分别进行归一化,确定各第一归一化通道图像以及各第二归一化通道图像;基于各第一归一化通道图像以及各第二归一化通道图像,生成第一训练集。
在一个实施例中,所述获取所述第二训练集的方式,包括:获取各第二样本原腹部灰度图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量;对各所述第二样原腹部灰度图像进行变换,获得各第二灰度变换图像;基于各所述第二样本原腹部灰度图像、各所述第二灰度变换图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量,生成所述第二训练集。
在一个实施例中,计算机可读指令被处理器执行时使得处理器执行基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络之前,还使得处理器执行:基于第一训练集中的各第一样本腹部图像,确定各第一样本腹部图像对应的各方格;将第一样本腹部图像对应的各四个方格的交点作为各可移动点;对各可移动点进行移动,更新各方格,得到各四边形;分别对第一样本腹部图像在各四边形内的区域进行仿射变换,得到各仿射子图;对各仿 射子图进行拼接,得到更新后的第一样本腹部图像;基于各更新后的第一样本腹部图像更新第一训练集。
在一个实施例中,计算机可读指令被处理器执行时使得处理器执行基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络之前,还使得处理器执行:基于第二训练集中的各第二样本腹部图像,确定各第二样本腹部图像对应的各切分方格;将第二样本腹部图像对应的各四个切分方格的交点作为各可移动交点;对各可移动交点进行移动,更新各切分方格,得到各切分四边形;分别对第二样本腹部图像在各切分四边形内的区域进行仿射变换,得到各腹部仿射子图;对各腹部仿射子图进行拼接,得到更新后的第二样本腹部图像;基于各更新后的第二样本腹部图像更新第二训练集。
对第二样本腹部图像进行切换可确定各切分方格,过程与上述对第一样本腹部图像进行切分过程类似。更新后的第二训练集包括各更新后的第二样本腹部图像更新,后续利用更新后的第二训练集进行训练。
应该理解的是,虽然本申请各实施例中的各个步骤并不是必然按照步骤标号指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,各实施例中至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机可读指令来指令相关的硬件来完成,该计算机可读指令可存储于一非易失性计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments have been described; nevertheless, as long as a combination of these technical features involves no contradiction, it shall be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent application. It should be noted that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (20)

  1. 一种腹部图像分割方法,其特征在于,该方法由计算机设备执行,包括以下步骤:
    获取待测腹部图像;
    基于已训练的全卷积神经网络,对所述待测腹部图像中各像素进行分类,确定所述待测腹部图像对应的分割图像;其中,所述已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,所述第一训练集包括各第一样本腹部图像以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
  2. 根据权利要求1所述的腹部图像分割方法,其特征在于,所述确定所述已训练的全卷积神经网络的方式包括:
    获取所述第一训练集以及所述第二训练集;
    初始化得到初始全卷积神经网络,所述初始全卷积神经网络包括卷积层和分类层;
    基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络;
    其中,在基于所述第一训练集训练所述初始全卷积神经网络过程中,根据第一训练误差对所述初始全卷积神经网络进行更新,所述第一训练误差,根据所述第一训练集中第一样本腹部图像输入所述初始全卷积神经网络,从所述分类层输出的训练像素分类标签图,与所述第一样本腹部图像的像素分类标签图确定;
    在基于第二训练集训练所述初始全卷积神经网络过程中,根据第二训练误差对所述初始全卷积神经网络进行更新,所述第二训练误差,根据所述第二训练集中第二样本腹部图像输入所述初始全卷积神经网络,从所述卷积层输出的训练卷积图像通过全连接层后的输出,与所述第二样本腹部图像中属于各所述类别的像素数量确定。
  3. 根据权利要求2所述的腹部图像分割方法,其特征在于,所述基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络的步骤,包括:
    基于所述第一训练集确定各第一训练子集,基于所述第二训练集确定各第二训练子集;
    轮流从各所述第一训练子集以及各所述第二训练子集中选择一个未训练过的标准训练子集,基于各所述标准训练子集对所述初始全卷积神经网络进行训练,得到所述已训练的全卷积神经网络;
    其中,相邻两轮选择的标准训练子集分别来自不同训练集。
  4. 根据权利要求3所述的腹部图像分割方法,其特征在于,所述得到所述已训练的全卷积神经网络的方式,包括:
    从各所述第一训练子集中选择一个训练子集作为标准训练子集;
    基于所述标准训练子集对所述初始全卷积神经网络进行训练,更新所述初始全卷积神经网络;
    在未满足网络训练停止条件时,从所述第一训练集和所述第二训练集中,所述标准训练子集所属训练集外的训练集中选择一个未训练过的训练子集作为标准训练子集,并返回基于所述标准训练子集对所述初始全卷积神经网络进行训练,更新所述初始全卷积神经网络的步骤,直到满足所述网络训练停止条件,将更新的所述初始全卷积神经网络作为所述已训练的全卷积神经网络。
  5. 根据权利要求4所述的腹部图像分割方法,其特征在于,更新所述初始全卷积神经网络之后,在未满足网络训练停止条件之前,还包括:
    对所述标准训练子集标记为已训练;
    在各所述第一训练子集分别标记为已训练时,将各所述第一训练子集分别标记为未训练;
    在各所述第二训练子集分别标记为已训练时,将各所述第二训练子集分别标记为未训练。
  6. 根据权利要求2所述的腹部图像分割方法,其特征在于,所述获取所述第一训练集的方式,包括:
    获取各第一样本原腹部灰度图像以及各第一样本原腹部灰度图像对应的像素分类标签图;
    对各所述第一样本原腹部灰度图像进行变换,获得各第一灰度变换图像,对各所述第一样本原腹部图像对应的像素分类标签图进行相同变换,获得各所述第一灰度变换图像对应的像素分类标签变换 图;
    基于各所述第一样本原腹部灰度图像、各所述第一样本原腹部灰度图像对应的像素分类标签图、各所述第一灰度变换图像以及各所述第一灰度变换图像对应的像素分类标签变换图,生成所述第一训练集。
  7. 根据权利要求6所述的腹部图像分割方法,其特征在于,生成所述第一训练集的方式,包括:
    分别获取各所述第一样本原腹部灰度图像在各颜色通道上的第一通道图像以及各所述第一灰度变换图像在各颜色通道上的第二通道图像;
    对各第一通道图像以及各第二图通道像分别进行归一化,确定各第一归一化通道图像以及各第二归一化通道图像;
    基于各第一归一化通道图像以及各第二归一化通道图像,生成第一训练集。
  8. 根据权利要求6所述的腹部图像分割方法,其特征在于,所述获取所述第二训练集的方式,包括:
    获取各第二样本原腹部灰度图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量;
    对各所述第二样原腹部灰度图像进行变换,获得各第二灰度变换图像;
    基于各所述第二样本原腹部灰度图像、各所述第二灰度变换图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量,生成所述第二训练集。
  9. 根据权利要求2所述的腹部图像分割方法,其特征在于,基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络之前,还包括
    基于第一训练集中的各第一样本腹部图像,确定各第一样本腹部图像对应的各方格;
    将第一样本腹部图像对应的各四个方格的交点作为各可移动点;
    对各可移动点进行移动,更新各方格,得到各四边形;
    分别对第一样本腹部图像在各四边形内的区域进行仿射变换,得到各仿射子图;
    对各仿射子图进行拼接,得到更新后的第一样本腹部图像;
    基于各更新后的第一样本腹部图像更新第一训练集。
  10. 一种计算机设备,包括存储器和处理器,所述存储器中存储有计算机可读指令,所述计算机可读指令被所述处理器执行时,使得所述处理器执行如下步骤:
    获取待测腹部图像;
    基于已训练的全卷积神经网络,对所述待测腹部图像中各像素进行分类,确定所述待测腹部图像对应的分割图像;其中,所述已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,所述第一训练集包括各第一样本腹部图像以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
  11. 根据权利要求10所述的计算机设备,其特征在于,所述确定所述已训练的全卷积神经网络的方式包括:
    获取所述第一训练集以及所述第二训练集;
    初始化得到初始全卷积神经网络,所述初始全卷积神经网络包括卷积层和分类层;
    基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络;
    其中,在基于所述第一训练集训练所述初始全卷积神经网络过程中,根据第一训练误差对所述初始全卷积神经网络进行更新,所述第一训练误差,根据所述第一训练集中第一样本腹部图像输入所述初始全卷积神经网络,从所述分类层输出的训练像素分类标签图,与所述第一样本腹部图像的像素分类标签图确定;
    在基于第二训练集训练所述初始全卷积神经网络过程中,根据第二训练误差对所述初始全卷积神经网络进行更新,所述第二训练误差,根据所述第二训练集中第二样本腹部图像输入所述初始全卷积神经网络,从所述卷积层输出的训练卷积图像通过全连接层后的输出,与所述第二样本腹部图像中属 于各所述类别的像素数量确定。
  12. 根据权利要求11所述的计算机设备,其特征在于,所述基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络的步骤,包括:
    基于所述第一训练集确定各第一训练子集,基于所述第二训练集确定各第二训练子集;
    轮流从各所述第一训练子集以及各所述第二训练子集中选择一个未训练过的标准训练子集,基于各所述标准训练子集对所述初始全卷积神经网络进行训练,得到所述已训练的全卷积神经网络。
    其中,相邻两轮选择的标准训练子集分别来自不同训练集。
  13. 根据权利要求12所述的计算机设备,其特征在于,所述得到所述已训练的全卷积神经网络的方式,包括:
    从各所述第一训练子集中选择一个训练子集作为标准训练子集;
    基于所述标准训练子集对所述初始全卷积神经网络进行训练,更新所述初始全卷积神经网络;
    在未满足网络训练停止条件时,从所述第一训练集和所述第二训练集中,所述标准训练子集所属训练集外的训练集中选择一个未训练过的训练子集作为标准训练子集,并返回基于所述标准训练子集对所述初始全卷积神经网络进行训练,更新所述初始全卷积神经网络的步骤,直到满足所述网络训练停止条件,将更新的所述初始全卷积神经网络作为所述已训练的全卷积神经网络。
  14. 根据权利要求13所述的计算机设备,其特征在于,更新所述初始全卷积神经网络之后,在未满足网络训练停止条件之前,还包括:
    对所述标准训练子集标记为已训练;
    在各所述第一训练子集分别标记为已训练时,将各所述第一训练子集分别标记为未训练;
    在各所述第二训练子集分别标记为已训练时,将各所述第二训练子集分别标记为未训练。
  15. 根据权利要求11所述的计算机设备,其特征在于,所述获取所述第一训练集的方式,包括:
    获取各第一样本原腹部灰度图像以及各第一样本原腹部灰度图像对应的像素分类标签图;
    对各所述第一样本原腹部灰度图像进行变换,获得各第一灰度变换图像,对各所述第一样本原腹部图像对应的像素分类标签图进行相同变换,获得各所述第一灰度变换图像对应的像素分类标签变换图;
    基于各所述第一样本原腹部灰度图像、各所述第一样本原腹部灰度图像对应的像素分类标签图、各所述第一灰度变换图像以及各所述第一灰度变换图像对应的像素分类标签变换图,生成所述第一训练集。
  16. 根据权利要求15所述的计算机设备,其特征在于,生成所述第一训练集的方式,包括:
    分别获取各所述第一样本原腹部灰度图像在各颜色通道上的第一通道图像以及各所述第一灰度变换图像在各颜色通道上的第二通道图像;
    对各第一通道图像以及各第二图通道像分别进行归一化,确定各第一归一化通道图像以及各第二归一化通道图像;
    基于各第一归一化通道图像以及各第二归一化通道图像,生成第一训练集。
  17. 根据权利要求15所述的计算机设备,其特征在于,所述获取所述第二训练集的方式,包括:
    获取各第二样本原腹部灰度图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量;
    对各所述第二样原腹部灰度图像进行变换,获得各第二灰度变换图像;
    基于各所述第二样本原腹部灰度图像、各所述第二灰度变换图像以及各所述第二样本原腹部灰度图像分别对应属于各类别的像素数量,生成所述第二训练集。
  18. 根据权利要求11所述的计算机设备,其特征在于,基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络之前,还包括
    基于第一训练集中的各第一样本腹部图像,确定各第一样本腹部图像对应的各方格;
    将第一样本腹部图像对应的各四个方格的交点作为各可移动点;
    对各可移动点进行移动,更新各方格,得到各四边形;
    分别对第一样本腹部图像在各四边形内的区域进行仿射变换,得到各仿射子图;
    对各仿射子图进行拼接,得到更新后的第一样本腹部图像;
    基于各更新后的第一样本腹部图像更新第一训练集。
  19. 一个或多个存储有计算机可读指令的非易失性存储介质,所述计算机可读指令被一个或多个处理器执行时,使得一个或多个处理器执行如下步骤:
    获取待测腹部图像;
    基于已训练的全卷积神经网络,对所述待测腹部图像中各像素进行分类,确定所述待测腹部图像对应的分割图像;其中,所述已训练的全卷积神经网络基于第一训练集以及第二训练集训练确定,所述第一训练集包括各第一样本腹部图像以及各所述第一样本腹部图像对应的像素分类标签图,所述第二训练集包括各第二样本腹部图像以及各所述第二样本腹部图像分别对应属于各类别的像素数量。
  20. 根据权利要求19所述的计算机存储介质,其特征在于,所述确定所述已训练的全卷积神经网络的方式包括:
    获取所述第一训练集以及所述第二训练集;
    初始化得到初始全卷积神经网络,所述初始全卷积神经网络包括卷积层和分类层;
    基于所述第一训练集以及所述第二训练集,对所述初始全卷积神经网络进行训练得到所述已训练的全卷积神经网络;
    其中,在基于所述第一训练集训练所述初始全卷积神经网络过程中,根据第一训练误差对所述初始全卷积神经网络进行更新,所述第一训练误差,根据所述第一训练集中第一样本腹部图像输入所述初始全卷积神经网络,从所述分类层输出的训练像素分类标签图,与所述第一样本腹部图像的像素分类标签图确定;
    在基于第二训练集训练所述初始全卷积神经网络过程中,根据第二训练误差对所述初始全卷积神经网络进行更新,所述第二训练误差,根据所述第二训练集中第二样本腹部图像输入所述初始全卷积神经网络,从所述卷积层输出的训练卷积图像通过全连接层后的输出,与所述第二样本腹部图像中属于各所述类别的像素数量确定。
PCT/CN2018/115798 2018-11-08 2018-11-16 腹部图像分割方法、计算机设备及存储介质 WO2020093435A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/471,819 US11302014B2 (en) 2018-11-08 2018-11-16 Methods of segmenting an abdominal image, computer apparatuses, and storage mediums

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811324901.0A CN111161274B (zh) 2018-11-08 2018-11-08 腹部图像分割方法、计算机设备
CN201811324901.0 2018-11-08

Publications (1)

Publication Number Publication Date
WO2020093435A1 true WO2020093435A1 (zh) 2020-05-14

Family

ID=70554818

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/115798 WO2020093435A1 (zh) 2018-11-08 2018-11-16 腹部图像分割方法、计算机设备及存储介质

Country Status (3)

Country Link
US (1) US11302014B2 (zh)
CN (1) CN111161274B (zh)
WO (1) WO2020093435A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116594627A (zh) * 2023-05-18 2023-08-15 湖北大学 一种基于多标签学习的群体软件开发中服务匹配方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020148810A1 (ja) * 2019-01-15 2020-07-23 株式会社ソニー・インタラクティブエンタテインメント 情報処理装置
CN110472531B (zh) * 2019-07-29 2023-09-01 腾讯科技(深圳)有限公司 视频处理方法、装置、电子设备及存储介质
US11494608B2 (en) * 2019-08-14 2022-11-08 Intel Corporation Methods and apparatus to tile walk a tensor for convolution operations
WO2021178320A1 (en) * 2020-03-05 2021-09-10 Stryker Corporation Systems and methods for automatic detection of surgical specialty type and procedure type

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894517A (zh) * 2016-04-22 2016-08-24 北京理工大学 基于特征学习的ct图像肝脏分割方法及系统
CN107316307A (zh) * 2017-06-27 2017-11-03 北京工业大学 一种基于深度卷积神经网络的中医舌图像自动分割方法
CN108305260A (zh) * 2018-03-02 2018-07-20 苏州大学 一种图像中角点的检测方法、装置及设备
CN108335303A (zh) * 2018-01-28 2018-07-27 浙江大学 一种应用于手掌x光片的多尺度手掌骨骼分割方法
CN108765423A (zh) * 2018-06-20 2018-11-06 北京七鑫易维信息技术有限公司 一种卷积神经网络训练方法及装置

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9700219B2 (en) * 2013-10-17 2017-07-11 Siemens Healthcare Gmbh Method and system for machine learning based assessment of fractional flow reserve
US9462968B2 (en) * 2014-10-17 2016-10-11 General Electric Company System and method for assessing bowel health
CN109690554B (zh) * 2016-07-21 2023-12-05 西门子保健有限责任公司 用于基于人工智能的医学图像分割的方法和系统
JP6657132B2 (ja) * 2017-02-27 2020-03-04 富士フイルム株式会社 画像分類装置、方法およびプログラム
CN108572183B (zh) * 2017-03-08 2021-11-30 清华大学 检查设备和分割车辆图像的方法
US10849587B2 (en) * 2017-03-17 2020-12-01 Siemens Healthcare Gmbh Source of abdominal pain identification in medical imaging
US10140421B1 (en) * 2017-05-25 2018-11-27 Enlitic, Inc. Medical scan annotator system
CN108491776B (zh) * 2018-03-12 2020-05-19 青岛理工大学 基于像素分类的装配体零件识别方法、装置及监测系统
US10140544B1 (en) * 2018-04-02 2018-11-27 12 Sigma Technologies Enhanced convolutional neural network for image segmentation
US10706545B2 (en) * 2018-05-07 2020-07-07 Zebra Medical Vision Ltd. Systems and methods for analysis of anatomical images
US10891731B2 (en) * 2018-05-07 2021-01-12 Zebra Medical Vision Ltd. Systems and methods for pre-processing anatomical images for feeding into a classification neural network
CN108765412B (zh) 2018-06-08 2021-07-20 湖北工业大学 一种带钢表面缺陷分类方法
US10803591B2 (en) * 2018-08-28 2020-10-13 International Business Machines Corporation 3D segmentation with exponential logarithmic loss for highly unbalanced object sizes
CN109784424B (zh) * 2019-03-26 2021-02-09 腾讯科技(深圳)有限公司 一种图像分类模型训练的方法、图像处理的方法及装置

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894517A (zh) * 2016-04-22 2016-08-24 北京理工大学 基于特征学习的ct图像肝脏分割方法及系统
CN107316307A (zh) * 2017-06-27 2017-11-03 北京工业大学 一种基于深度卷积神经网络的中医舌图像自动分割方法
CN108335303A (zh) * 2018-01-28 2018-07-27 浙江大学 一种应用于手掌x光片的多尺度手掌骨骼分割方法
CN108305260A (zh) * 2018-03-02 2018-07-20 苏州大学 一种图像中角点的检测方法、装置及设备
CN108765423A (zh) * 2018-06-20 2018-11-06 北京七鑫易维信息技术有限公司 一种卷积神经网络训练方法及装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116594627A (zh) * 2023-05-18 2023-08-15 湖北大学 一种基于多标签学习的群体软件开发中服务匹配方法
CN116594627B (zh) * 2023-05-18 2023-12-12 湖北大学 一种基于多标签学习的群体软件开发中服务匹配方法

Also Published As

Publication number Publication date
CN111161274A (zh) 2020-05-15
CN111161274B (zh) 2023-07-07
US20210366125A1 (en) 2021-11-25
US11302014B2 (en) 2022-04-12

Similar Documents

Publication Publication Date Title
WO2020093435A1 (zh) 腹部图像分割方法、计算机设备及存储介质
WO2020253629A1 (zh) 检测模型训练方法、装置、计算机设备和存储介质
CN107591200B (zh) 基于深度学习及影像组学的骨龄标记识别评估方法及系统
CN110163260B (zh) 基于残差网络的图像识别方法、装置、设备及存储介质
CN110097003A (zh) 基于神经网络的课堂考勤方法、设备、存储介质及装置
CN109191476A (zh) 基于U-net网络结构的生物医学图像自动分割新方法
CN106250829A (zh) 基于唇部纹理结构的数字识别方法
CN104484886B (zh) 一种mr图像的分割方法及装置
CN109886944B (zh) 一种基于多图谱的脑白质高信号检测和定位方法
CN108573499A (zh) 一种基于尺度自适应和遮挡检测的视觉目标跟踪方法
CN110543906B (zh) 基于Mask R-CNN模型的肤质自动识别方法
CN108447057A (zh) 基于显著性和深度卷积网络的sar图像变化检测方法
KR20230125169A (ko) 조직의 이미지 처리 방법 및 조직의 이미지 처리 시스템
CN108664994A (zh) 一种遥感图像处理模型构建系统和方法
CN110930378A (zh) 基于低数据需求的肺气肿影像处理方法及系统
US20210326641A1 (en) Device and method for selecting a deep learning network for processing images
CN106446806B (zh) 基于模糊隶属度稀疏重构的半监督人脸识别方法及系统
CN115410059A (zh) 基于对比损失的遥感图像部分监督变化检测方法及设备
Zhang et al. Learning from multiple annotators for medical image segmentation
CN110084810A (zh) 一种肺结节图像检测方法、模型训练方法、装置及存储介质
CN112802072B (zh) 基于对抗学习的医学图像配准方法及系统
An et al. Patch loss: A generic multi-scale perceptual loss for single image super-resolution
CN113052236A (zh) 一种基于NASNet的肺炎图像分类方法
CN116309465B (zh) 一种基于改进的YOLOv5的自然环境下舌像检测定位方法
CN116091596A (zh) 一种自下而上的多人2d人体姿态估计方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18939754

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18939754

Country of ref document: EP

Kind code of ref document: A1