CN111951278A - Method for segmenting medical images and computer-readable storage medium - Google Patents

Method for segmenting medical images and computer-readable storage medium

Info

Publication number
CN111951278A
Authority
CN
China
Prior art keywords
segmentation
organ
image
result
abnormal point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010756772.3A
Other languages
Chinese (zh)
Inventor
张阳
廖术
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai United Imaging Intelligent Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai United Imaging Intelligent Healthcare Co Ltd filed Critical Shanghai United Imaging Intelligent Healthcare Co Ltd
Priority to CN202010756772.3A
Publication of CN111951278A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/60 - Rotation of whole images or parts thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0012 - Biomedical image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10081 - Computed x-ray tomography [CT]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10072 - Tomographic images
    • G06T2207/10104 - Positron emission tomography [PET]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present application relates to a method for segmenting medical images and a computer-readable storage medium. The method comprises: acquiring each organ image in an image to be segmented; respectively inputting each organ image into a corresponding organ abnormal point segmentation model to obtain an abnormal point segmentation result corresponding to each organ image, wherein the organ abnormal point segmentation model comprises a cascade structure of a plurality of segmentation networks and the input data of a latter segmentation network comprises at least one output result of the former segmentation network; and obtaining a whole-body abnormal point segmentation result corresponding to the image to be segmented according to the abnormal point segmentation results corresponding to the organ images. The method can greatly improve the accuracy of the obtained whole-body abnormal point segmentation result.

Description

Method for segmenting medical images and computer-readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a segmentation method for medical images and a computer-readable storage medium.
Background
As a nuclear medicine imaging technology, PET-CT improves the accuracy of anatomical localization in functional imaging and therefore plays a crucial role in the detection, staging, treatment and follow-up of early lesions or abnormal points. When abnormal points are detected in a PET-CT image, an image segmentation method is usually adopted to segment the abnormal point regions in the PET-CT image, so as to assist the doctor in the subsequent diagnosis process.
In the past, doctors generally paid attention only to local abnormal points of a patient, so only local abnormal point regions in the PET-CT image needed to be segmented. At present, in order to give doctors a more comprehensive view of the patient's health condition, there is a requirement to segment the whole-body abnormal point regions in the PET-CT image. In the conventional technology, a whole-body abnormal point region segmentation model is used to segment the abnormal point regions in a whole-body PET-CT image of the examined object.
However, when the whole-body abnormal point region segmentation model of the conventional technology is used to segment the abnormal point regions, the accuracy of the segmentation result is low.
Disclosure of Invention
In view of the above, it is necessary to provide a segmentation method for medical images and a computer-readable storage medium that address the problem of low segmentation accuracy when whole-body abnormal point regions are segmented with the conventional technology.
A method of segmentation of a medical image, the method comprising:
acquiring each organ image in an image to be segmented;
respectively inputting each organ image into a corresponding organ abnormal point segmentation model to obtain an abnormal point segmentation result corresponding to each organ image; the organ anomaly point segmentation model comprises a cascade structure of a plurality of segmentation networks, and input data of a next segmentation network comprises at least one output result of a previous segmentation network;
and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the abnormal point segmentation result corresponding to each organ image.
In one embodiment, the organ anomaly segmentation model comprises a first segmentation network and a second segmentation network, each segmentation network comprising a plurality of decoders;
respectively inputting each organ image into a corresponding organ abnormal point segmentation model to obtain an abnormal point segmentation result corresponding to each organ image, wherein the abnormal point segmentation result comprises the following steps:
for each organ image, inputting the organ image into a first segmentation network of a corresponding organ outlier segmentation model, and outputting a plurality of reference segmentation results through a plurality of decoders of the first segmentation network; wherein a decoder outputs a reference segmentation result;
inputting the organ image and a target segmentation result in the plurality of reference segmentation results into a second segmentation network of the organ outlier segmentation model, and outputting an outlier segmentation result corresponding to the organ image through a last decoder in a plurality of decoders of the second segmentation network; the target segmentation result is a reference segmentation result output by the last N decoders in the plurality of decoders of the first segmentation network, and N is larger than or equal to 1.
In one embodiment, the split network further comprises the same number of encoders as the number of decoders; inputting the organ image and a target segmentation result of the plurality of reference segmentation results into a second segmentation network of the organ outlier segmentation model, comprising:
inputting the organ image and the last target segmentation result into a first encoder of a second segmentation network, and inputting the nth last target segmentation result into an nth encoder of the second segmentation network; wherein N is less than or equal to N.
In one embodiment, the step of inputting each organ image into the corresponding organ outlier segmentation model to obtain an outlier segmentation result corresponding to each organ image includes:
performing first conversion operation on the organ images aiming at each organ image to obtain a conversion image corresponding to the organ image;
inputting the organ image into an organ abnormal point segmentation model to obtain a first segmentation result corresponding to the organ image; inputting the converted image into an organ abnormal point segmentation model to obtain a second segmentation result corresponding to the converted image;
performing second conversion operation on the second segmentation result to obtain a third segmentation result, and taking the first segmentation result and the third segmentation result as abnormal point segmentation results corresponding to the organ image; wherein the second conversion operation is an inverse operation of the first conversion operation.
In one embodiment, obtaining a segmentation result of a whole body outlier corresponding to an image to be segmented according to an outlier segmentation result corresponding to each organ image includes:
merging the first segmentation results corresponding to each organ image to obtain a first merged result;
merging the third segmentation results corresponding to each organ image to obtain a second merged result;
and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the first combination result and the second combination result.
In one embodiment, obtaining a segmentation result of a whole body outlier corresponding to an image to be segmented according to the first and second combination results includes:
and connecting the first combination result and the second combination result by channels, and inputting the first combination result and the second combination result into the whole body abnormal point segmentation model to obtain a whole body abnormal point segmentation result.
In one embodiment, acquiring images of each organ in an image to be segmented comprises:
respectively inputting the images to be segmented into the organ segmentation models of the organs to obtain organ segmentation results of the organs;
and aiming at each organ, acquiring an organ image corresponding to the organ from the image to be segmented according to the organ segmentation result of the organ.
In one embodiment, the method further includes:
determining the maximum standard uptake value, the major axis, the minor axis and the volume of each abnormal point region according to the whole body abnormal point segmentation result;
and displaying at least one of the segmentation result of the abnormal points of the whole body, the maximum standard uptake value of the abnormal point region, the major axis, the minor axis and the volume to the user terminal.
In one embodiment, the method further includes:
receiving an abnormal point deleting instruction and/or an abnormal point adding instruction input by a user through a user terminal;
and updating the segmentation result of the abnormal points of the whole body according to the abnormal point deleting instruction and/or the abnormal point adding instruction.
In one embodiment, the method further includes:
acquiring a plurality of whole body abnormal point segmentation results corresponding to the same detected object within a preset time period;
and comparing the segmentation results of the plurality of whole body abnormal points, and determining the regional change degree of the abnormal points of the detected object in the time period.
In one embodiment, the step of inputting each organ image into the corresponding organ outlier segmentation model to obtain an outlier segmentation result corresponding to each organ image includes:
if the organ type corresponding to each organ image belongs to the organ category set, respectively inputting each organ image into the corresponding organ abnormal point segmentation model to obtain the abnormal point segmentation result corresponding to each organ image; wherein the set of organ categories includes other organ types than brain, kidney, bladder and heart.
An apparatus for segmentation of medical images, the apparatus comprising:
the acquisition module is used for acquiring each organ image in the image to be segmented;
the segmentation module is used for respectively inputting each organ image into the corresponding organ abnormal point segmentation model to obtain an abnormal point segmentation result corresponding to each organ image; the organ anomaly point segmentation model comprises a cascade structure of a plurality of segmentation networks, and input data of a next segmentation network comprises at least one output result of a previous segmentation network;
and the determining module is used for obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the abnormal point segmentation result corresponding to each organ image.
A computer device comprising a memory and a processor, the memory storing a computer program that when executed by the processor performs the steps of:
acquiring each organ image in an image to be segmented;
respectively inputting each organ image into a corresponding organ abnormal point segmentation model to obtain an abnormal point segmentation result corresponding to each organ image; the organ anomaly point segmentation model comprises a cascade structure of a plurality of segmentation networks, and input data of a next segmentation network comprises at least one output result of a previous segmentation network;
and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the abnormal point segmentation result corresponding to each organ image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring each organ image in an image to be segmented;
respectively inputting each organ image into a corresponding organ abnormal point segmentation model to obtain an abnormal point segmentation result corresponding to each organ image; the organ anomaly point segmentation model comprises a cascade structure of a plurality of segmentation networks, and input data of a next segmentation network comprises at least one output result of a previous segmentation network;
and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the abnormal point segmentation result corresponding to each organ image.
According to the above medical image segmentation method, segmentation apparatus, computer device and storage medium, the acquired organ images can be respectively input into the corresponding organ abnormal point segmentation models to obtain the abnormal point segmentation results corresponding to the organ images, and the whole-body abnormal point segmentation result corresponding to the image to be segmented is then obtained from those results; the organ abnormal point segmentation model comprises a cascade structure of a plurality of segmentation networks, and the input data of a latter segmentation network comprises at least one output result of the former segmentation network. By decomposing whole-body abnormal point segmentation into per-organ abnormal point segmentation, the segmentation of the whole-body region is converted into the segmentation of smaller regions, which reduces the amount of calculation of the organ abnormal point segmentation models, improves the accuracy of the abnormal point segmentation results, realizes automatic whole-body abnormal point segmentation, and improves the efficiency of clinical imaging statistics. Moreover, because the organ abnormal point segmentation model comprises a cascade structure of a plurality of segmentation networks, the former segmentation network provides multi-scale segmentation semantic information for the latter segmentation network, which further improves the accuracy of the organ abnormal point segmentation results and therefore the accuracy of the obtained whole-body abnormal point segmentation result.
Drawings
FIG. 1 is a diagram illustrating an internal structure of a computer device according to an embodiment;
FIG. 2 is a flow diagram illustrating a method for segmentation of medical images in one embodiment;
FIG. 3 is a flow chart illustrating a method for segmenting a medical image according to another embodiment;
FIG. 3a is a diagram illustrating a structure of a segmentation model of an organ outlier in an embodiment;
FIG. 4 is a schematic flow chart illustrating a training method of the organ outlier segmentation model according to an embodiment;
FIG. 5 is a flow chart illustrating a method for segmenting a medical image according to yet another embodiment;
FIG. 6 is a flow chart illustrating a method for segmenting a medical image according to yet another embodiment;
FIG. 7 is a flowchart illustrating a method for segmenting a medical image according to yet another embodiment;
fig. 8 is a block diagram of a segmentation apparatus for medical images in an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The medical image segmentation method provided by the embodiment of the application can be applied to computer equipment shown in fig. 1. The computer device comprises a processor and a memory connected by a system bus, wherein a computer program is stored in the memory, and the steps of the method embodiments described below can be executed when the processor executes the computer program. Optionally, the computer device may further comprise a communication interface, a display screen and an input means. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a nonvolatile storage medium storing an operating system and a computer program, and an internal memory. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for connecting and communicating with an external terminal through a network. Optionally, the computer device may be a Personal Computer (PC), a personal digital assistant, other terminal devices such as a tablet computer (PAD), a mobile phone, and the like, and may also be a cloud or a remote server, where a specific form of the computer device is not limited in this embodiment of the application.
In one embodiment, as shown in fig. 2, a method for segmenting a medical image is provided, which is described by taking the method as an example applied to the computer device in fig. 1, and the embodiment relates to a specific process of performing whole-body outlier segmentation on an image to be segmented by the computer device. The method comprises the following steps:
s101, acquiring each organ image in the image to be segmented.
The image to be segmented may be a PET image and a CT image obtained by whole-body scanning of the examined object with PET-CT equipment. The image to be segmented contains each organ region; the computer device can distinguish the organ regions according to the pixel values of the image to be segmented and cut out each organ region to obtain each organ image. Optionally, the computer device may also receive organ regions marked on the image to be segmented by a user (e.g. a physician) to obtain the organ images. In addition, each organ image may be labeled with its organ type, such as heart, kidney or pancreas.
S102, respectively inputting each organ image into a corresponding organ abnormal point segmentation model to obtain an abnormal point segmentation result corresponding to each organ image; the organ anomaly point segmentation model comprises a cascade structure of a plurality of segmentation networks, and input data of a next segmentation network comprises at least one output result of a previous segmentation network.
Specifically, the computer device may input the above organ images into corresponding organ anomaly point segmentation models respectively, for example, inputting a pancreas image into a pancreas anomaly point segmentation model, inputting a lung image into a lung anomaly point segmentation model, and the like, to obtain an anomaly point segmentation result corresponding to each organ image; the abnormal point may be a lesion point (suspected lesion point) on an organ or an abnormal hypermetabolic region, and the abnormal point segmentation result may be an abnormal point mask or an abnormal point probability map.
The above organ anomaly point segmentation model may include a cascade structure of a plurality of segmentation networks, which may be VNet networks, UNet networks, etc., and the input data of the latter segmentation network includes at least one output result of the former segmentation network, that is, the output result of the former segmentation network is used as the input of the latter segmentation network. Illustratively, the organ anomaly point segmentation model is obtained by cascading two or more UNet networks, the former segmentation network can obtain multi-scale feature semantic information of an anomaly point region through processing of each network layer, and the multi-scale feature semantic information is input into the latter segmentation network to guide the latter segmentation network to use for reference analysis, namely, the former segmentation network is supervised by the high-level multi-scale semantic information and guides the latter segmentation network to carry out semantic segmentation, so that the segmentation accuracy of the organ anomaly point segmentation model is improved.
Alternatively, the computer device may first resample the organ image to the same resolution space as the sample images used in the training stage of the organ abnormal point segmentation model, standardize the resampled organ image, and input the standardized organ image into the organ abnormal point segmentation model. Optionally, the standardization may be performed according to the relation (I − μ) / σ, where I is the resampled organ image, μ is the mean value of the sample images in the training stage of the organ abnormal point segmentation model, and σ is the standard deviation of the sample images in the training stage of the organ abnormal point segmentation model. In addition, for the PET image and the CT image of the same organ, the computer device can connect the standardized PET and CT images along the channel dimension before inputting them into the organ abnormal point segmentation model.
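The preprocessing described above (resampling to the training resolution space, standardizing with the training-set mean and standard deviation, and connecting PET and CT along the channel dimension) can be sketched as follows; the numeric statistics, target spacing and use of NumPy/SciPy are illustrative assumptions, not values fixed by this embodiment:

```python
import numpy as np
from scipy.ndimage import zoom

# Assumed training-set statistics and target spacing (hypothetical values).
TRAIN_MEAN, TRAIN_STD = 2.5, 4.0
TARGET_SPACING = (3.0, 3.0, 3.0)  # mm, order z/y/x

def resample_to_spacing(volume, spacing, target_spacing=TARGET_SPACING):
    """Resample a 3-D volume to the resolution space used during training."""
    factors = [s / t for s, t in zip(spacing, target_spacing)]
    return zoom(volume, factors, order=1)

def normalize(volume, mean=TRAIN_MEAN, std=TRAIN_STD):
    """Standardize with the training-set statistics: (I - mu) / sigma."""
    return (volume - mean) / std

def build_model_input(pet_organ, ct_organ, pet_spacing, ct_spacing):
    """Resample, standardize and connect PET and CT organ images along the channel dimension.
    Using the same mean/std for PET and CT is a simplification of this sketch."""
    pet = normalize(resample_to_spacing(pet_organ, pet_spacing))
    ct = normalize(resample_to_spacing(ct_organ, ct_spacing))
    return np.stack([pet, ct], axis=0)  # shape: [2, D, H, W]
```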
Optionally, since organs such as the brain, kidney, bladder and heart are physiologically high-uptake regions in the PET image and abnormal points within them are of low significance, the computer device may remove the organ images corresponding to such organs in order to avoid their influence on the segmentation of high-uptake abnormal points. If the organ type corresponding to an organ image belongs to the organ category set, the organ image is input into the corresponding organ abnormal point segmentation model; if the organ type does not belong to the organ category set, the organ image is not segmented; the organ category set includes organ types other than the brain, kidney, bladder and heart.
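A minimal sketch of this organ-category filtering, assuming organ images are keyed by hypothetical organ-type strings:

```python
# Hypothetical organ labels; the excluded set follows the embodiment above.
EXCLUDED_ORGANS = {"brain", "kidney", "bladder", "heart"}

def select_organs_for_outlier_segmentation(organ_images):
    """Keep only organ images whose type belongs to the organ category set,
    i.e. skip the physiologically high-uptake organs listed above."""
    return {organ: img for organ, img in organ_images.items()
            if organ not in EXCLUDED_ORGANS}
```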
And S103, obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the abnormal point segmentation result corresponding to each organ image.
Specifically, the computer device may merge (or stitch together) the abnormal point segmentation results corresponding to the organ images to obtain the whole-body abnormal point segmentation result corresponding to the image to be segmented. Since each organ image is taken from the image to be segmented, the abnormal point segmentation results can be combined into a whole-body abnormal point segmentation result.
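A minimal sketch of this merging step, assuming each per-organ binary mask is pasted back at the bounding-box coordinates from which the organ image was cut; the box bookkeeping is an assumption of this illustration:

```python
import numpy as np

def merge_organ_results(outlier_masks, organ_boxes, whole_body_shape):
    """Combine per-organ binary outlier masks into one whole-body outlier mask.

    outlier_masks: dict organ -> binary mask cropped to the organ region
    organ_boxes:   dict organ -> (z0, z1, y0, y1, x0, x1), the location of that crop
                   in the original image to be segmented (assumed known from S101)
    """
    whole_body = np.zeros(whole_body_shape, dtype=np.uint8)
    for organ, mask in outlier_masks.items():
        z0, z1, y0, y1, x0, x1 = organ_boxes[organ]
        whole_body[z0:z1, y0:y1, x0:x1] |= mask.astype(np.uint8)
    return whole_body
```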
Optionally, after the segmentation result of the abnormal point of the whole body is obtained, the abnormal point region of the whole body can be displayed in a highlighted form on the original image to be segmented, so as to assist a doctor in checking and diagnosing each abnormal point region.
In the segmentation method for medical images provided by this embodiment, the computer device inputs each acquired organ image into the corresponding organ abnormal point segmentation model to obtain the abnormal point segmentation result corresponding to each organ image, and then obtains the whole-body abnormal point segmentation result corresponding to the image to be segmented from the abnormal point segmentation results of the organ images; the organ abnormal point segmentation model comprises a cascade structure of a plurality of segmentation networks, and the input data of a latter segmentation network comprises at least one output result of the former segmentation network. The method decomposes whole-body abnormal point segmentation into per-organ abnormal point segmentation, converting the segmentation of the whole-body region into the segmentation of smaller regions; this reduces the amount of calculation of each organ abnormal point segmentation model, improves the accuracy of the abnormal point segmentation results, realizes automatic whole-body abnormal point segmentation, and improves the efficiency of clinical imaging statistics. Moreover, because the organ abnormal point segmentation model comprises a cascade structure of a plurality of segmentation networks, the former segmentation network provides multi-scale segmentation semantic information for the latter segmentation network, which further improves the accuracy of the organ abnormal point segmentation results and therefore the accuracy of the obtained whole-body abnormal point segmentation result.
In one embodiment, the organ anomaly segmentation model comprises a first segmentation network and a second segmentation network, each segmentation network comprising a plurality of decoders; here, the first divided network and the second divided network are used only for explaining different divided networks, and the number of the divided networks is not limited. Alternatively, as shown in fig. 3, the S102 may include:
s201, aiming at each organ image, inputting the organ image into a first segmentation network of a corresponding organ outlier segmentation model, and outputting a plurality of reference segmentation results through a plurality of decoders of the first segmentation network; wherein a decoder outputs a reference segmentation result.
Specifically, for a plurality of decoders, the latter decoder is used for performing an upsampling operation on the output result of the former decoder, and each decoder outputs an upsampling result as a reference segmentation result. After the computer device inputs the organ image into the first segmentation network of the organ outlier segmentation model, a plurality of reference segmentation results, which have different resolution scales, may be output through a plurality of decoders of the first segmentation network.
S202, inputting the organ image and a target segmentation result in the plurality of reference segmentation results into a second segmentation network of the organ outlier segmentation model, and outputting an outlier segmentation result corresponding to the organ image through a last decoder in a plurality of decoders of the second segmentation network; the target segmentation result is a reference segmentation result output by the last N decoders in the plurality of decoders of the first segmentation network, and N is larger than or equal to 1.
Specifically, the computer device may select the target segmentation result from the plurality of reference segmentation results, the target segmentation result being the reference segmentation results output by the last N decoders of the first segmentation network, i.e. the last N reference segmentation results. The original organ image and the target segmentation result are then input together into the second segmentation network; that is, the second segmentation network refers to the output of the first segmentation network, and the final abnormal point segmentation result is output by the last decoder among the plurality of decoders of the second segmentation network. The first segmentation network thus provides high-level semantic information that guides the second segmentation network in feature extraction and semantic analysis, which improves the accuracy of the abnormal point segmentation result output by the second segmentation network.
Optionally, the segmentation network further includes the same number of encoders as decoders, where a latter encoder performs a downsampling operation on the output of the former encoder. The computer device may input the organ image and the target segmentation result together into the first encoder of the second segmentation network, input the downsampling result of the first encoder into the second encoder, and so on. Optionally, the computer device can also input the organ image and the target segmentation results into the encoders at the corresponding positions. Referring to fig. 3a, which is a schematic structural diagram of an organ abnormal point segmentation model, the computer device inputs the organ image and the last target segmentation result into the first encoder of the second segmentation network, and inputs the nth-last target segmentation result (n ≦ N) into the nth encoder of the second segmentation network; that is, each target segmentation result is input into the encoder adapted to its scale, so as to improve the accuracy of each encoder's output. Alternatively, the first n target segmentation results may be input into the nth encoder together, so that the nth encoder performs segmentation processing by synthesizing target segmentation results of different sizes.
Optionally, as shown in fig. 3a, hole convolution blocks (dilated blocks) with different dilation coefficients are further arranged between the encoder and the decoder at the corresponding position of each segmentation network, so that the receptive field of the deep network is well preserved while multi-scale information is extracted, and the loss of fine information caused by the downsampling operations is avoided to a certain extent, thereby improving the accuracy of the abnormal point segmentation result.
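The cascade described above can be illustrated with a deliberately small sketch. The two-level PyTorch network, the 16/32 channel widths, instance normalization and sigmoid heads are all assumptions made purely for illustration; only the elements named in this embodiment (deep-supervision heads on every decoder of the first network, injection of its last reference results into scale-matched encoders of the second network, and a dilated block between encoder and decoder) follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch, dilation=1):
    # Two 3-D convolutions; a dilated variant serves as the "dilated block" bridge.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=dilation, dilation=dilation),
        nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=dilation, dilation=dilation),
        nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True))

class SegNet(nn.Module):
    """One stage of the cascade: a 2-level encoder/decoder with a segmentation head
    on every decoder, so each decoder emits one reference segmentation result."""
    def __init__(self, in_ch, extra_in=(0, 0)):
        super().__init__()
        self.enc1 = conv_block(in_ch + extra_in[0], 16)
        self.enc2 = conv_block(16 + extra_in[1], 32)
        self.bridge = conv_block(32, 32, dilation=2)   # dilated block between encoder and decoder
        self.dec2 = conv_block(32 + 32, 32)
        self.dec1 = conv_block(32 + 16, 16)
        self.head2 = nn.Conv3d(32, 1, 1)
        self.head1 = nn.Conv3d(16, 1, 1)

    def forward(self, x, refs=(None, None)):
        # refs[k] is the reference result injected into encoder k (None in the first stage).
        e1_in = x if refs[0] is None else torch.cat([x, refs[0]], dim=1)
        e1 = self.enc1(e1_in)
        e2_in = F.max_pool3d(e1, 2)
        if refs[1] is not None:
            e2_in = torch.cat([e2_in, refs[1]], dim=1)
        e2 = self.enc2(e2_in)
        b = self.bridge(e2)
        d2 = self.dec2(torch.cat([b, e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2, mode="trilinear",
                                                align_corners=False), e1], dim=1))
        # One reference segmentation result per decoder (half and full resolution).
        return [torch.sigmoid(self.head2(d2)), torch.sigmoid(self.head1(d1))]

class CascadedOutlierModel(nn.Module):
    """Two cascaded segmentation networks: the last N=2 reference results of the
    first network are fed to the scale-matched encoders of the second network."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.net1 = SegNet(in_ch)
        self.net2 = SegNet(in_ch, extra_in=(1, 1))

    def forward(self, x):
        refs = self.net1(x)                      # [half-resolution, full-resolution]
        # The full-resolution result enters encoder 1 with the image; the coarser
        # result enters encoder 2, matching its spatial scale.
        outs = self.net2(x, refs=(refs[1], refs[0]))
        return refs, outs                        # the last element of outs is the final result
```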
In the medical image segmentation method provided by the embodiment, for each organ image, the computer device inputs the organ image into a first segmentation network of a corresponding organ outlier segmentation model, and outputs a plurality of reference segmentation results through a plurality of decoders of the first segmentation network, wherein one decoder outputs one reference segmentation result; then inputting the organ image and a target segmentation result in the multiple reference segmentation results into a second segmentation network of the organ outlier segmentation model, and outputting an outlier segmentation result corresponding to the organ image through a last decoder in multiple decoders of the second segmentation network; therefore, the first segmentation network provides supervision and guidance information for the second segmentation network, the accuracy of the abnormal point segmentation result output by the second segmentation network can be improved, and the accuracy of the subsequent whole body abnormal point segmentation result is improved.
In one embodiment, the training process of the model is also involved on the basis of the model structure shown in fig. 3 a. As shown in fig. 4, the training method of the organ outlier segmentation model includes:
s301, acquiring a sample image and an abnormal point segmentation gold standard corresponding to the sample image.
Specifically, a plurality of PET images and CT images containing abnormal points can be acquired by scanning with a PET-CT apparatus, a plurality of organ images can be obtained from the PET images and CT images as sample images, and a senior physician labels the abnormal point segmentation gold standard (mask) of each sample image. Optionally, the computer device may further resample all the sample images to the same resolution space, and set an appropriate mean and standard deviation to standardize the resampled sample images; the standardization process may refer to the description of the above embodiment and is not repeated here.
S302, inputting a sample image into a first segmentation network of the initial organ anomaly point segmentation network, and outputting a plurality of first prediction segmentation results through a plurality of decoders of the first segmentation network; the sample image and a first target segmentation result of the plurality of first predicted segmentation results are input into a second segmentation network of the initial organ outlier segmentation network, and a plurality of second predicted segmentation results are output by a plurality of decoders of the second segmentation network.
In this step, the processing procedures of the first segmentation network and the second segmentation network in the initial organ anomaly point segmentation network can be referred to the description of the above embodiments, and the implementation principles thereof are similar; it should be noted that the initial organ anomaly point segmentation network in this step is a network in the training process.
S303, calculating a first loss between a first target segmentation result of the plurality of first predicted segmentation results and the outlier segmentation gold standard, and a second loss between a second target segmentation result of the plurality of second predicted segmentation results and the outlier segmentation gold standard; and training the initial organ anomaly point segmentation network according to the first loss and the second loss to obtain an organ anomaly point segmentation model.
The first target segmentation result is the prediction segmentation result output by the last N decoders of the first segmentation network, and the second target segmentation result is the prediction segmentation result output by the last N decoders of the second segmentation network. The computer device may calculate a first loss between the first target segmentation result and the abnormal point segmentation gold standard, and a second loss between the second target segmentation result and the abnormal point segmentation gold standard, respectively. For example, Loss1, Loss2 and Loss3 may be calculated between the last three prediction segmentation results of the first segmentation network (i.e. the first target segmentation result) and the abnormal point segmentation gold standard, and Loss4, Loss5 and Loss6 between the last three prediction segmentation results of the second segmentation network (i.e. the second target segmentation result) and the abnormal point segmentation gold standard, respectively. Optionally, each loss may consist of two parts, a BCE loss and a Dice loss, where
L_BCE = −[Y·log(Ŷ) + (1 − Y)·log(1 − Ŷ)], in which Y is the abnormal point segmentation gold standard and Ŷ is the prediction segmentation result;
L_Dice = 1 − 2|X ∩ Y| / (|X| + |Y|), in which X is the prediction segmentation result and Y is the abnormal point segmentation gold standard.
The initial organ abnormal point segmentation network is then trained according to the first loss and the second loss to obtain the organ abnormal point segmentation model; optionally, the initial organ abnormal point segmentation network may be trained on a direct sum or a weighted sum of the first loss and the second loss.
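A sketch of the BCE and Dice losses and of their summation over the target segmentation results (Loss1 to Loss6); averaging BCE over voxels and weighting all terms equally are assumptions of this illustration, not choices fixed by the embodiment:

```python
import torch
import torch.nn.functional as F

def bce_loss(pred, gold, eps=1e-7):
    """Binary cross-entropy between a predicted probability map and the gold-standard mask."""
    pred = pred.clamp(eps, 1 - eps)
    return -(gold * torch.log(pred) + (1 - gold) * torch.log(1 - pred)).mean()

def dice_loss(pred, gold, eps=1e-7):
    """Soft Dice loss: 1 - 2|X ∩ Y| / (|X| + |Y|)."""
    inter = (pred * gold).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + gold.sum() + eps)

def deep_supervision_loss(target_results, gold):
    """Sum BCE + Dice over the target segmentation results of both networks."""
    total = 0.0
    for pred in target_results:
        if pred.shape != gold.shape:
            # Down-sample the gold standard to the scale of a coarser decoder output.
            g = F.interpolate(gold, size=pred.shape[2:], mode="nearest")
        else:
            g = gold
        total = total + bce_loss(pred, g) + dice_loss(pred, g)
    return total
```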
Alternatively, for a plurality of sample images, the computer device may randomly select an image patch from the sample images; the size of the patch may be [64, 128, 128], and since the PET image and the CT image are connected along the channel dimension before being input into the network, the size of the input data may be [2, 64, 128, 128]. In addition, the optimizer used in the training of the initial organ abnormal point segmentation network may be Adam.
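A sketch of the patch sampling and one Adam training step, reusing the model and loss sketches above; the learning rate and the single-sample batch are illustrative assumptions:

```python
import numpy as np
import torch

PATCH_SIZE = (64, 128, 128)  # as in the embodiment above

def random_patch(volume, gold, patch_size=PATCH_SIZE):
    """Randomly crop a training patch; volume: [2, D, H, W] (PET + CT), gold: [1, D, H, W]."""
    _, d, h, w = volume.shape
    z = np.random.randint(0, d - patch_size[0] + 1)
    y = np.random.randint(0, h - patch_size[1] + 1)
    x = np.random.randint(0, w - patch_size[2] + 1)
    sl = (slice(None), slice(z, z + patch_size[0]),
          slice(y, y + patch_size[1]), slice(x, x + patch_size[2]))
    return volume[sl], gold[sl]

def train_step(model, optimizer, volume, gold):
    """One optimization step on a randomly sampled patch."""
    patch, gold_patch = random_patch(volume, gold)
    x = torch.from_numpy(patch[None]).float()       # [1, 2, 64, 128, 128]
    y = torch.from_numpy(gold_patch[None]).float()  # [1, 1, 64, 128, 128]
    refs, outs = model(x)
    loss = deep_supervision_loss(refs + outs, y)    # supervise the decoders of both networks
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# model = CascadedOutlierModel(in_ch=2)
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate is an assumption
```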
In the medical image segmentation method provided by the embodiment, the computer device trains the initial organ anomaly point segmentation network according to the sample image and the anomaly point segmentation gold standard corresponding to the sample image so as to continuously improve the accuracy of the initial organ anomaly point segmentation network and finally obtain an organ anomaly point segmentation model with higher accuracy; and then the organ abnormal point segmentation model is used for segmenting abnormal points, so that the accuracy of the abnormal point segmentation result is greatly improved.
In one embodiment, the computer device may further perform data enhancement on the organ image input into the organ outlier segmentation model, so that more data information is considered when the organ outlier segmentation model is segmented, and the segmentation accuracy is improved. Alternatively, as shown in fig. 5, the S102 may include:
s401, aiming at each organ image, performing a first conversion operation on the organ image to obtain a conversion image corresponding to the organ image.
The first conversion operation may be a flipping operation on the organ image; for example, the organ image I may be flipped along the Z direction, the Y direction and the X direction respectively to obtain the conversion images I_Zflip, I_Yflip and I_Xflip corresponding to the organ image I. The first conversion operation may also be a scaling operation on the organ image, such as reducing the organ image toward the image center or enlarging it away from the image center, so as to enhance the organ image.
S402, inputting the organ image into an organ abnormal point segmentation model to obtain a first segmentation result corresponding to the organ image; and inputting the converted image into the organ abnormal point segmentation model to obtain a second segmentation result corresponding to the converted image.
Specifically, the computer device inputs the organ image I into the organ abnormal point segmentation model to obtain the first segmentation result I_mask corresponding to the organ image I, and inputs the conversion images (e.g. I_Zflip, I_Yflip, I_Xflip) into the organ abnormal point segmentation model to obtain the second segmentation results (I_mask-Zflip, I_mask-Yflip, I_mask-Xflip) corresponding to the conversion images.
S403, performing second conversion operation on the second segmentation result to obtain a third segmentation result, and taking the first segmentation result and the third segmentation result as abnormal point segmentation results corresponding to the organ image; wherein the second conversion operation is an inverse operation of the first conversion operation.
In particular, after obtaining the second segmentation result, the computer device may perform a second conversion operation on it, the second conversion operation being the inverse of the first conversion operation. For example, if the first conversion operation flipped the organ image along the Z direction, the second conversion operation flips I_mask-Zflip back along the Z direction; if the first conversion operation flipped the organ image along the Y direction, the second conversion operation flips I_mask-Yflip back along the Y direction; if the first conversion operation flipped the organ image along the X direction, the second conversion operation flips I_mask-Xflip back along the X direction. This yields the third segmentation results (I_A-mask-Zflip, I_A-mask-Yflip, I_A-mask-Xflip), which are thereby converted into the same image orientation as the first segmentation result, and the first segmentation result and the third segmentation results are taken together as the abnormal point segmentation result corresponding to the organ image. Alternatively, if the first conversion operation is a reduction of the organ image, the second conversion operation is an enlargement of the second segmentation result; if the first conversion operation is an enlargement of the organ image, the second conversion operation is a reduction of the second segmentation result. By converting the input organ image, the organ abnormal point segmentation model can integrate more data information during segmentation, which improves the segmentation accuracy. It should be noted that the data enhancement by converting the organ image in this embodiment can also be applied to the training process of the organ abnormal point segmentation model, so as to enhance the sample images used in training and thereby improve the accuracy of the organ abnormal point segmentation model.
After obtaining the first segmentation result and the third segmentation result, the computer device may obtain a final segmentation result of the whole body outliers according to the first segmentation result and the third segmentation result. Optionally, the computer device may merge the first segmentation results corresponding to each organ image to obtain a first merged result; merging the third segmentation results corresponding to each organ image to obtain a second merged result; that is, the first segmentation results of the organ images which are not turned over are combined, and the third segmentation results corresponding to the turned-over images are combined to obtain two types of combination results; then, a whole-body outlier segmentation result is obtained according to the first combined result and the second combined result, for example, the first combined result and the second combined result may be fused, for example, a weighted sum is performed on pixel point values at the same position of the first combined result and the second combined result, so as to obtain a whole-body outlier segmentation result.
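A sketch of the flip-based conversion at test time and of the fusion of the two combined results, reusing the cascaded-model sketch above; the equal fusion weight is an assumption:

```python
import torch

FLIP_DIMS = [2, 3, 4]  # Z, Y, X axes of a [B, C, D, H, W] tensor

def flip_tta(model, x):
    """Produce the first segmentation result (original organ image) and the third
    segmentation results (flipped copies segmented, then flipped back) for one organ.
    The model is assumed to be the cascaded sketch above, in eval mode."""
    with torch.no_grad():
        first = model(x)[1][-1]                            # first segmentation result
        third = []
        for dim in FLIP_DIMS:
            pred = model(torch.flip(x, dims=[dim]))[1][-1]   # second segmentation result
            third.append(torch.flip(pred, dims=[dim]))       # third segmentation result
    return first, third

def fuse_combined_results(first_combined, second_combined, w=0.5):
    """Weighted voxel-wise fusion of the two whole-body combination results; w=0.5 is an assumption."""
    return w * first_combined + (1 - w) * second_combined
```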
In one embodiment, the computer device may further process the first combined result and the second combined result by using a whole-body outlier segmentation model to obtain a whole-body outlier segmentation result. Optionally, the computer device may perform channel connection on the first merged result and the second merged result, and then input the result into a preset whole body abnormal point segmentation model to obtain a whole body abnormal point segmentation result.
The whole-body abnormal point segmentation model includes, but is not limited to, a full convolution network model, such as a VNet model or a UNet model, and its training may include: a plurality of PET images and CT images are obtained by scanning with the PET-CT equipment and used as sample images, and a senior physician labels the whole-body abnormal point segmentation gold standard (mask) of each sample image; organ images are then obtained from the sample images and input into the organ abnormal point segmentation model to obtain organ abnormal point segmentation results; the organ abnormal point segmentation results are combined and connected along the channel dimension, input into the initial whole-body abnormal point segmentation network, the loss between the prediction segmentation result and the whole-body abnormal point segmentation gold standard is calculated, and the initial whole-body abnormal point segmentation network is trained according to the loss to obtain the whole-body abnormal point segmentation model. Optionally, the computer device may also randomly select a patch from the sample image as the network input, for example with a patch size of [64, 128, 128]; the optimizer of the network training may be Adam; the calculated loss may also include a BCE loss and a Dice loss, which are calculated as described in the above embodiment and are not repeated here.
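A sketch of the channel connection that forms the whole-body model input; the whole-body model itself is only assumed to be some full convolution network, as stated above:

```python
import torch

def whole_body_input(first_combined, second_combined):
    """Connect the two combined results along the channel dimension before feeding
    the separately trained whole-body abnormal point segmentation model."""
    # first_combined / second_combined: [B, 1, D, H, W] probability maps or masks
    return torch.cat([first_combined, second_combined], dim=1)  # [B, 2, D, H, W]

# whole_body_result = whole_body_model(whole_body_input(fc, sc))  # whole_body_model: e.g. a UNet/VNet (assumption)
```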
In an embodiment, the computer device may further acquire an organ image from the image to be segmented by using an organ segmentation model, and as shown in fig. 6, the S101 may optionally include:
s501, the images to be segmented are respectively input into the organ segmentation models of the organs, and organ segmentation results of the organs are obtained.
Alternatively, the computer device may first resample the image to be segmented to the same resolution space as the sample images used in the training stage of the organ segmentation model, standardize the resampled image to be segmented, and input the standardized image into the organ segmentation model. Optionally, the standardization may be performed according to the relation (I − μ) / σ, where I is the resampled image to be segmented, μ is the mean value of the sample images in the training stage of the organ segmentation model, and σ is the standard deviation of the sample images in the training stage of the organ segmentation model. In addition, for the PET image and the CT image, the computer device can connect the standardized PET and CT images along the channel dimension before inputting them into the organ segmentation model. The computer device needs to input the image to be segmented into the organ segmentation model of each organ, for example into the pancreas segmentation model, the lung segmentation model and so on, to obtain the organ segmentation results of organs such as the pancreas and the lung, e.g. organ masks.
Optionally, the above organ segmentation model includes, but is not limited to, a full convolution network model, such as a VNet model and a UNet model, and the training manner may include: a plurality of PET images and CT images can be obtained by scanning through a PET-CT device to be used as sample images, and organ segmentation gold standards (masks) of each sample image are labeled by a senior doctor. Optionally, the computer device may further resample the sample image to the same resolution space, and set an appropriate mean and standard deviation to normalize the resampled sample image, where the normalization process may refer to the description of the above embodiment, and is not described herein again. Then, connecting the standardized sample images along the channel dimension, and inputting the sample images into an initial organ segmentation network for training; for parameters such as the loss function and the optimizer calculated during the training of the organ segmentation model, reference may be made to the training process of the organ outlier segmentation model, which is not described herein again.
S502, aiming at each organ, according to the organ segmentation result of the organ, acquiring an organ image corresponding to the organ from the image to be segmented.
Specifically, the coordinate position of the organ in the image to be segmented can be known according to the organ segmentation result, and the computer device can acquire (for example, intercept, cut, and the like) the organ image corresponding to the organ from the image to be segmented according to the coordinate position.
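A sketch of cutting an organ image out of the image to be segmented based on the organ segmentation result; the bounding-box margin is an assumption, and the returned box can be reused when the abnormal point masks are pasted back (see the merging sketch earlier):

```python
import numpy as np

def crop_organ(image, organ_mask, margin=5):
    """Locate the organ from its segmentation mask and cut the corresponding organ
    image (plus a small margin) out of the 3-D image to be segmented."""
    zs, ys, xs = np.nonzero(organ_mask)
    if zs.size == 0:
        return None, None
    box = (max(zs.min() - margin, 0), min(zs.max() + margin + 1, image.shape[0]),
           max(ys.min() - margin, 0), min(ys.max() + margin + 1, image.shape[1]),
           max(xs.min() - margin, 0), min(xs.max() + margin + 1, image.shape[2]))
    z0, z1, y0, y1, x0, x1 = box
    return image[z0:z1, y0:y1, x0:x1], box
```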
In the segmentation method of the medical image provided by the embodiment, the computer device can input the image to be segmented into the organ segmentation models of the organs to obtain the organ segmentation results of the organs; and aiming at each organ, acquiring a corresponding organ image from the image to be segmented according to the organ segmentation result. Through the segmentation processing of the organ segmentation model, compared with a segmentation method based on pixel values or artificial marks of an image to be segmented, the accuracy of the obtained organ image can be improved, and an accurate data basis is further provided for subsequent abnormal point segmentation.
In one embodiment, the computer device may further determine information such as a maximum standard uptake value (SUVmax), a major axis, a minor axis, and a volume of each outlier region according to the obtained whole-body outlier segmentation result. Optionally, the data information may be calculated according to the number of pixel points in the abnormal point region in the whole-body abnormal point segmentation result; in addition, only the data information to be viewed by the user can be calculated according to the viewing instruction input by the user, so that the calculation amount of the computer equipment is reduced. And then at least one of the segmentation result of the abnormal points of the whole body, the maximum standard uptake value of the abnormal point region, the major axis, the minor axis and the volume is displayed to a user terminal (such as a computer device used by a doctor). Optionally, the computer device may further display at least one of the segmentation result of the whole body outlier, the maximum standard uptake value of the outlier region, the major axis, the minor axis, and the volume according to a selection instruction input by the user through the user terminal.
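A sketch of computing SUVmax, volume and long/short axes for each abnormal point region. Estimating the axes from a principal-component analysis of the voxel coordinates is an assumption of this illustration, since the embodiment only states that the values are calculated from the voxels of each region:

```python
import numpy as np
from scipy import ndimage

def outlier_region_stats(whole_body_mask, suv_volume, spacing_mm):
    """Per-region statistics from the whole-body abnormal point segmentation result."""
    labels, n = ndimage.label(whole_body_mask)
    voxel_vol = float(np.prod(spacing_mm))
    stats = []
    for i in range(1, n + 1):
        region = labels == i
        coords = np.argwhere(region) * np.asarray(spacing_mm)  # physical coordinates (mm)
        if len(coords) > 1:
            eigvals = np.linalg.eigvalsh(np.cov(coords.T))
            # Full lengths taken as 4 standard deviations along the principal axes (assumption).
            major, minor = 4 * np.sqrt(eigvals[-1]), 4 * np.sqrt(eigvals[0])
        else:
            major = minor = max(spacing_mm)
        stats.append({
            "suv_max": float(suv_volume[region].max()),
            "volume_mm3": float(region.sum() * voxel_vol),
            "major_axis_mm": float(major),
            "minor_axis_mm": float(minor),
        })
    return stats
```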
In addition, in some clinical applications, the doctor can view and switch between the above whole-body abnormal point segmentation results at any time with one click through the user terminal, for example switching directly from the abnormal point region of the pancreas to the abnormal point region of the lung. Furthermore, the output whole-body abnormal point segmentation result may contain errors, for example an abnormal point that was not segmented or a segmented region that is not actually an abnormal point; the computer device may therefore further receive an abnormal point deleting instruction and/or an abnormal point adding instruction input by the user through the user terminal, and update the whole-body abnormal point segmentation result according to the abnormal point deleting instruction and/or the abnormal point adding instruction. Optionally, when viewing the whole-body abnormal point segmentation result, the user can switch from one abnormal point display interface to another, which improves the interactivity with the user.
In an embodiment, the computer device may further assist the user in completing follow-up and tracking treatment of the subject: the computer device may acquire a plurality of whole-body abnormal point segmentation results corresponding to the same subject within a preset time period (e.g. half a year), compare the plurality of whole-body abnormal point segmentation results, and determine the degree of change of the abnormal point regions of the subject within the time period, for example whether the number of abnormal points has increased, whether the area of the same abnormal point has enlarged, or whether the treatment has been effective, so as to help the user understand the trend of the subject's abnormal points and provide accurate evaluation of therapeutic effect and prognosis.
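A small sketch of comparing several whole-body results of the same subject over a time period, built on the per-region statistics above; the two change measures chosen here are illustrative only:

```python
def compare_followup(stats_list):
    """Compare whole-body abnormal point results of the same subject over time:
    change in the number of abnormal point regions and in total abnormal point volume."""
    counts = [len(s) for s in stats_list]
    volumes = [sum(r["volume_mm3"] for r in s) for s in stats_list]
    return {
        "region_count_change": counts[-1] - counts[0],
        "total_volume_change_mm3": volumes[-1] - volumes[0],
    }
```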
To better understand the overall process of the above-described method for segmenting medical images, the method is described below in an overall embodiment. As shown in fig. 7, the method includes:
s601, respectively inputting the images to be segmented into the organ segmentation models of the organs to obtain organ segmentation results of the organs;
s602, aiming at each organ, acquiring an organ image corresponding to the organ from the image to be segmented according to the organ segmentation result of the organ, and removing organ images of organs such as a brain, a kidney, a bladder, a heart and the like;
s603, performing first conversion operation on the organ images aiming at each organ image to obtain a turnover image corresponding to the organ image;
s604, inputting the organ image into the organ abnormal point segmentation model to obtain a first segmentation result corresponding to the organ image; inputting the turnover image into an organ abnormal point segmentation model to obtain a second segmentation result corresponding to the turnover image;
s605, performing second conversion operation on the second segmentation result to obtain a third segmentation result;
s606, merging the first segmentation results corresponding to each organ image to obtain a first merged result; merging the third segmentation results corresponding to each organ image to obtain a second merged result;
and S607, performing channel connection on the first combination result and the second combination result, and inputting the result into the whole body abnormal point segmentation model to obtain a whole body abnormal point segmentation result.
For the implementation process of each step, reference may be made to the description of the above embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
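Putting the steps together, a high-level sketch of S601-S607 that reuses the helper sketches above (crop_organ, merge_organ_results, EXCLUDED_ORGANS); the callable interfaces for the organ segmentation models, organ abnormal point segmentation models and whole-body fusion are assumptions for illustration only:

```python
def segment_whole_body_outliers(pet, ct, organ_segmenters, organ_outlier_segmenters,
                                fuse_whole_body):
    """End-to-end sketch of S601-S607; every callable here (NumPy arrays in,
    NumPy arrays out) is an assumed interface, not an API defined by the application."""
    first_masks, third_masks, boxes = {}, {}, {}
    for organ, segment_organ in organ_segmenters.items():             # S601
        organ_mask = segment_organ(pet, ct)                           # organ segmentation result
        if organ in EXCLUDED_ORGANS:                                  # S602: skip brain/kidney/bladder/heart
            continue
        pet_crop, box = crop_organ(pet, organ_mask)                   # S602: organ image
        if pet_crop is None:
            continue
        z0, z1, y0, y1, x0, x1 = box
        ct_crop = ct[z0:z1, y0:y1, x0:x1]
        # S603-S605: the per-organ model applies the flip conversions internally and
        # returns binary first / third segmentation masks for the organ crop.
        first, third = organ_outlier_segmenters[organ](pet_crop, ct_crop)
        first_masks[organ], third_masks[organ], boxes[organ] = first, third, box
    first_combined = merge_organ_results(first_masks, boxes, pet.shape)    # S606
    second_combined = merge_organ_results(third_masks, boxes, pet.shape)
    return fuse_whole_body(first_combined, second_combined)               # S607
```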
It should be understood that although the various steps in the flowcharts of fig. 2-7 are shown in sequence as indicated by the arrows, these steps are not necessarily performed in that sequence. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in fig. 2-7 may include multiple sub-steps or stages that are not necessarily performed at the same time but may be performed at different times, and the order of performing these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 8, there is provided a segmentation apparatus for medical images, comprising: an acquisition module 11, a segmentation module 12 and a determination module 13.
Specifically, the acquiring module 11 is configured to acquire each organ image in the image to be segmented;
the segmentation module 12 is configured to input each organ image into a corresponding organ outlier segmentation model, so as to obtain an outlier segmentation result corresponding to each organ image; the organ anomaly point segmentation model comprises a cascade structure of a plurality of segmentation networks, and input data of a next segmentation network comprises at least one output result of a previous segmentation network;
and the determining module 13 is configured to obtain a whole body abnormal point segmentation result corresponding to the image to be segmented according to the abnormal point segmentation result corresponding to each organ image.
The medical image segmentation apparatus provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
In one embodiment, the organ anomaly segmentation model comprises a first segmentation network and a second segmentation network, each segmentation network comprising a plurality of decoders; a segmentation module 12, specifically configured to, for each organ image, input the organ image into a first segmentation network of a corresponding organ outlier segmentation model, and output a plurality of reference segmentation results through a plurality of decoders of the first segmentation network; wherein a decoder outputs a reference segmentation result; inputting the organ image and a target segmentation result in the plurality of reference segmentation results into a second segmentation network of the organ outlier segmentation model, and outputting an outlier segmentation result corresponding to the organ image through a last decoder in a plurality of decoders of the second segmentation network; the target segmentation result is a reference segmentation result output by the last N decoders in the plurality of decoders of the first segmentation network, and N is larger than or equal to 1.
In one embodiment, the segmentation network further comprises the same number of encoders as decoders; the segmentation module 12 is specifically configured to input the organ image and the last target segmentation result into a first encoder of the second segmentation network, and input the nth-from-last target segmentation result into an nth encoder of the second segmentation network; wherein n is less than or equal to N.
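As one concrete, non-authoritative reading of this cascade, the PyTorch sketch below builds a small two-network example with N = 2: every decoder of the first network emits a reference segmentation through its own 1x1 head, and the first and second encoders of the second network additionally receive the last and second-to-last reference results. The depths, channel widths, and the use of 2D convolutions are assumptions made only to keep the example short.

```python
# Hypothetical sketch of the two-network cascade with deep supervision; sizes,
# layer names, and the 2D setting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class FirstNet(nn.Module):
    """Encoder-decoder in which every decoder level emits a reference segmentation."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.enc1, self.enc2, self.enc3 = conv_block(in_ch, 16), conv_block(16, 32), conv_block(32, 64)
        self.dec2, self.dec1 = conv_block(64 + 32, 32), conv_block(32 + 16, 16)
        self.head2, self.head1 = nn.Conv2d(32, 1, 1), nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        e3 = self.enc3(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([F.interpolate(e3, scale_factor=2), e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], dim=1))
        # one reference segmentation result per decoder, upsampled to input resolution
        r2 = torch.sigmoid(F.interpolate(self.head2(d2), scale_factor=2))
        r1 = torch.sigmoid(self.head1(d1))
        return [r2, r1]                         # ordered from earlier decoder to last decoder

class SecondNet(nn.Module):
    """Encoders additionally receive the last N reference results of FirstNet (N = 2)."""
    def __init__(self, in_ch=1):
        super().__init__()
        self.enc1 = conv_block(in_ch + 1, 16)   # organ image + last reference result
        self.enc2 = conv_block(16 + 1, 32)      # + second-to-last reference result
        self.enc3 = conv_block(32, 64)
        self.dec2, self.dec1 = conv_block(64 + 32, 32), conv_block(32 + 16, 16)
        self.head = nn.Conv2d(16, 1, 1)         # only the last decoder produces the output

    def forward(self, x, refs):                 # refs = [r2, r1] from FirstNet
        r_last, r_second_last = refs[-1], refs[-2]
        e1 = self.enc1(torch.cat([x, r_last], dim=1))
        e2 = self.enc2(torch.cat([F.max_pool2d(e1, 2),
                                  F.interpolate(r_second_last, scale_factor=0.5)], dim=1))
        e3 = self.enc3(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([F.interpolate(e3, scale_factor=2), e2], dim=1))
        d1 = self.dec1(torch.cat([F.interpolate(d2, scale_factor=2), e1], dim=1))
        return torch.sigmoid(self.head(d1))     # abnormal point segmentation result

# usage: one single-channel organ image whose sides are divisible by 4
organ_image = torch.randn(1, 1, 64, 64)
first_net, second_net = FirstNet(), SecondNet()
outlier_probability = second_net(organ_image, first_net(organ_image))   # (1, 1, 64, 64)
```

Read this way, the first network's deeply supervised decoders provide several coarse-to-fine reference results, and injecting the last N of them into the second network's encoders lets the second network refine them against the original organ image.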
In an embodiment, the segmentation module 12 is specifically configured to perform a first conversion operation on the organ image for each organ image to obtain a conversion image corresponding to the organ image; inputting the organ image into an organ abnormal point segmentation model to obtain a first segmentation result corresponding to the organ image; inputting the converted image into an organ abnormal point segmentation model to obtain a second segmentation result corresponding to the converted image; performing second conversion operation on the second segmentation result to obtain a third segmentation result, and taking the first segmentation result and the third segmentation result as abnormal point segmentation results corresponding to the organ image; wherein the second conversion operation is an inverse operation of the first conversion operation.
In an embodiment, the determining module 13 is specifically configured to combine the first segmentation results corresponding to each organ image to obtain a first combined result; merging the third segmentation results corresponding to each organ image to obtain a second merged result; and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the first combination result and the second combination result.
In an embodiment, the determining module 13 is specifically configured to perform channel connection on the first merged result and the second merged result, and input the channel connection into the whole-body outlier segmentation model to obtain a whole-body outlier segmentation result.
In an embodiment, the obtaining module 11 is specifically configured to input the image to be segmented into the organ segmentation models of the respective organs respectively, so as to obtain organ segmentation results of the respective organs; and aiming at each organ, acquiring an organ image corresponding to the organ from the image to be segmented according to the organ segmentation result of the organ.
In an embodiment, the determining module 13 is further configured to determine a maximum standard uptake value, a major axis, a minor axis, and a volume of each outlier region according to a whole-body outlier segmentation result; and displaying at least one of the segmentation result of the abnormal points of the whole body, the maximum standard uptake value of the abnormal point region, the major axis, the minor axis and the volume to the user terminal.
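A minimal sketch of how these quantities could be derived is given below, assuming the whole-body abnormal point segmentation result is a binary mask aligned with a PET SUV volume; the voxel spacing, the 4*sqrt(eigenvalue) axis-length convention, and the helper names are illustrative assumptions.

```python
# Hypothetical sketch: per-region SUVmax, major/minor axis, and volume from a
# binary whole-body abnormal point mask and a co-registered SUV volume.
import numpy as np
from scipy import ndimage

def region_statistics(mask, suv, spacing_mm=(2.0, 2.0, 2.0)):
    labeled, num_regions = ndimage.label(mask)
    voxel_volume_ml = float(np.prod(spacing_mm)) / 1000.0
    stats = []
    for label in range(1, num_regions + 1):
        region = labeled == label
        coords = np.argwhere(region) * np.asarray(spacing_mm)   # voxel index -> mm
        if coords.shape[0] > 1:
            # principal axes from the covariance of the voxel coordinates
            eigvals = np.linalg.eigvalsh(np.cov(coords.T))
            eigvals = np.clip(np.sort(eigvals)[::-1], 0.0, None)
        else:
            eigvals = np.zeros(3)
        stats.append({
            "suv_max": float(suv[region].max()),                 # maximum standard uptake value
            "major_axis_mm": 4.0 * float(np.sqrt(eigvals[0])),   # ellipsoid-style length estimate
            "minor_axis_mm": 4.0 * float(np.sqrt(eigvals[-1])),
            "volume_ml": float(region.sum()) * voxel_volume_ml,
        })
    return stats

# toy usage: a random mask and a random SUV volume of the same shape
mask = np.random.rand(32, 32, 32) > 0.995
suv = np.random.rand(32, 32, 32) * 10.0
print(region_statistics(mask, suv)[:1])
```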
In one embodiment, the apparatus further includes a receiving module, configured to receive an abnormal point deleting instruction and/or an abnormal point adding instruction input by a user through a user terminal; and updating the segmentation result of the abnormal points of the whole body according to the abnormal point deleting instruction and/or the abnormal point adding instruction.
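One possible way to apply such instructions is sketched below: a deletion instruction removes the whole connected abnormal point region containing a user-selected voxel, and an addition instruction marks a small spherical region around a selected voxel. The point-based interaction and the default radius are assumptions for illustration; this embodiment does not prescribe how the instructions are expressed.

```python
# Hypothetical sketch: update a binary whole-body abnormal point mask according
# to user-issued delete / add instructions given as voxel coordinates (z, y, x).
import numpy as np
from scipy import ndimage

def delete_abnormal_point(mask, point):
    """Remove the whole connected region that contains `point`."""
    mask = np.asarray(mask, dtype=bool)
    labeled, _ = ndimage.label(mask)
    target = labeled[tuple(point)]
    return mask & (labeled != target) if target > 0 else mask

def add_abnormal_point(mask, point, radius=3):
    """Mark a small sphere of `radius` voxels around `point` as abnormal."""
    mask = np.asarray(mask, dtype=bool)
    grids = np.ogrid[tuple(slice(0, s) for s in mask.shape)]
    squared_distance = sum((g - c) ** 2 for g, c in zip(grids, point))
    return mask | (squared_distance <= radius ** 2)

# toy usage
mask = np.zeros((32, 32, 32), dtype=bool)
mask = add_abnormal_point(mask, (16, 16, 16))       # abnormal point addition instruction
mask = delete_abnormal_point(mask, (16, 16, 16))    # abnormal point deletion instruction
print(mask.any())
```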
For the specific definition of the segmentation apparatus for a medical image, reference may be made to the above definition of the segmentation method for a medical image, which is not repeated here. The modules in the above medical image segmentation apparatus may be implemented in whole or in part by software, by hardware, or by a combination thereof. Each of the above modules may be embedded in, or independent of, a processor of the computer device in hardware form, or may be stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 1. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The communication interface of the computer device is used for carrying out wired or wireless communication with an external terminal, and the wireless communication can be realized through WIFI, an operator network, NFC (near field communication) or other technologies. The computer program is executed by a processor to implement a method of segmentation of a medical image. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 1 is merely a block diagram of part of the structure related to the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply; a particular computing device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring each organ image in an image to be segmented;
respectively inputting each organ image into a corresponding organ abnormal point segmentation model to obtain an abnormal point segmentation result corresponding to each organ image; the organ anomaly point segmentation model comprises a cascade structure of a plurality of segmentation networks, and input data of a next segmentation network comprises at least one output result of a previous segmentation network;
and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the abnormal point segmentation result corresponding to each organ image.
The implementation principle and technical effect of the computer device provided in this embodiment are similar to those of the method embodiments described above, and are not described herein again.
In one embodiment, the organ anomaly segmentation model comprises a first segmentation network and a second segmentation network, each segmentation network comprising a plurality of decoders;
the processor, when executing the computer program, further performs the steps of:
for each organ image, inputting the organ image into a first segmentation network of a corresponding organ outlier segmentation model, and outputting a plurality of reference segmentation results through a plurality of decoders of the first segmentation network; wherein a decoder outputs a reference segmentation result;
inputting the organ image and a target segmentation result in the plurality of reference segmentation results into a second segmentation network of the organ outlier segmentation model, and outputting an outlier segmentation result corresponding to the organ image through a last decoder in a plurality of decoders of the second segmentation network; the target segmentation result is a reference segmentation result output by the last N decoders in the plurality of decoders of the first segmentation network, and N is larger than or equal to 1.
In one embodiment, the segmentation network further comprises the same number of encoders as decoders; the processor, when executing the computer program, further performs the steps of:
inputting the organ image and the last target segmentation result into a first encoder of the second segmentation network, and inputting the nth-from-last target segmentation result into an nth encoder of the second segmentation network; wherein n is less than or equal to N.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
performing first conversion operation on the organ images aiming at each organ image to obtain a conversion image corresponding to the organ image;
inputting the organ image into an organ abnormal point segmentation model to obtain a first segmentation result corresponding to the organ image; inputting the converted image into an organ abnormal point segmentation model to obtain a second segmentation result corresponding to the converted image;
performing second conversion operation on the second segmentation result to obtain a third segmentation result, and taking the first segmentation result and the third segmentation result as abnormal point segmentation results corresponding to the organ image; wherein the second conversion operation is an inverse operation of the first conversion operation.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
merging the first segmentation results corresponding to each organ image to obtain a first merged result;
merging the third segmentation results corresponding to each organ image to obtain a second merged result;
and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the first combination result and the second combination result.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
and connecting the first combination result and the second combination result by channels, and inputting the first combination result and the second combination result into the whole body abnormal point segmentation model to obtain a whole body abnormal point segmentation result.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
respectively inputting the images to be segmented into the organ segmentation models of the organs to obtain organ segmentation results of the organs;
and aiming at each organ, acquiring an organ image corresponding to the organ from the image to be segmented according to the organ segmentation result of the organ.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
determining the maximum standard uptake value, the major axis, the minor axis and the volume of each abnormal point region according to the whole body abnormal point segmentation result;
and displaying at least one of the segmentation result of the abnormal points of the whole body, the maximum standard uptake value of the abnormal point region, the major axis, the minor axis and the volume to the user terminal.
In one embodiment, the processor, when executing the computer program, further performs the steps of:
receiving an abnormal point deleting instruction and/or an abnormal point adding instruction input by a user through a user terminal;
and updating the segmentation result of the abnormal points of the whole body according to the abnormal point deleting instruction and/or the abnormal point adding instruction.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring each organ image in an image to be segmented;
respectively inputting each organ image into a corresponding organ abnormal point segmentation model to obtain an abnormal point segmentation result corresponding to each organ image; the organ anomaly point segmentation model comprises a cascade structure of a plurality of segmentation networks, and input data of a next segmentation network comprises at least one output result of a previous segmentation network;
and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the abnormal point segmentation result corresponding to each organ image.
The implementation principle and technical effect of the computer-readable storage medium provided by this embodiment are similar to those of the above-described method embodiment, and are not described herein again.
In one embodiment, the organ anomaly segmentation model comprises a first segmentation network and a second segmentation network, each segmentation network comprising a plurality of decoders;
the computer program when executed by the processor further realizes the steps of:
for each organ image, inputting the organ image into a first segmentation network of a corresponding organ outlier segmentation model, and outputting a plurality of reference segmentation results through a plurality of decoders of the first segmentation network; wherein a decoder outputs a reference segmentation result;
inputting the organ image and a target segmentation result in the plurality of reference segmentation results into a second segmentation network of the organ outlier segmentation model, and outputting an outlier segmentation result corresponding to the organ image through a last decoder in a plurality of decoders of the second segmentation network; the target segmentation result is a reference segmentation result output by the last N decoders in the plurality of decoders of the first segmentation network, and N is larger than or equal to 1.
In one embodiment, the segmentation network further comprises the same number of encoders as decoders; the computer program, when executed by the processor, further implements the steps of:
inputting the organ image and the last target segmentation result into a first encoder of the second segmentation network, and inputting the nth-from-last target segmentation result into an nth encoder of the second segmentation network; wherein n is less than or equal to N.
In one embodiment, the computer program when executed by the processor further performs the steps of:
performing first conversion operation on the organ images aiming at each organ image to obtain a conversion image corresponding to the organ image;
inputting the organ image into an organ abnormal point segmentation model to obtain a first segmentation result corresponding to the organ image; inputting the converted image into an organ abnormal point segmentation model to obtain a second segmentation result corresponding to the converted image;
performing second conversion operation on the second segmentation result to obtain a third segmentation result, and taking the first segmentation result and the third segmentation result as abnormal point segmentation results corresponding to the organ image; wherein the second conversion operation is an inverse operation of the first conversion operation.
In one embodiment, the computer program when executed by the processor further performs the steps of:
merging the first segmentation results corresponding to each organ image to obtain a first merged result;
merging the third segmentation results corresponding to each organ image to obtain a second merged result;
and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the first combination result and the second combination result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and connecting the first combination result and the second combination result by channels, and inputting the first combination result and the second combination result into the whole body abnormal point segmentation model to obtain a whole body abnormal point segmentation result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
respectively inputting the images to be segmented into the organ segmentation models of the organs to obtain organ segmentation results of the organs;
and aiming at each organ, acquiring an organ image corresponding to the organ from the image to be segmented according to the organ segmentation result of the organ.
In one embodiment, the computer program when executed by the processor further performs the steps of:
determining the maximum standard uptake value, the major axis, the minor axis and the volume of each abnormal point region according to the whole body abnormal point segmentation result;
and displaying at least one of the segmentation result of the abnormal points of the whole body, the maximum standard uptake value of the abnormal point region, the major axis, the minor axis and the volume to the user terminal.
In one embodiment, the computer program when executed by the processor further performs the steps of:
receiving an abnormal point deleting instruction and/or an abnormal point adding instruction input by a user through a user terminal;
and updating the segmentation result of the abnormal points of the whole body according to the abnormal point deleting instruction and/or the abnormal point adding instruction.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical storage, or the like. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM), among others.
The technical features of the above embodiments can be combined arbitrarily. For the sake of brevity, not all possible combinations of the technical features of the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered to be within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A method of segmentation of a medical image, the method comprising:
acquiring each organ image in an image to be segmented;
respectively inputting each organ image into a corresponding organ abnormal point segmentation model to obtain an abnormal point segmentation result corresponding to each organ image; wherein the organ anomaly point segmentation model comprises a cascade structure of a plurality of segmentation networks, and input data of a subsequent segmentation network comprises at least one output result of a previous segmentation network;
and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the abnormal point segmentation result corresponding to each organ image.
2. The method of claim 1, wherein the organ outlier segmentation model comprises a first segmentation network and a second segmentation network, each segmentation network comprising a plurality of decoders;
the step of inputting each organ image into a corresponding organ outlier segmentation model respectively to obtain an outlier segmentation result corresponding to each organ image includes:
for each organ image, inputting the organ image into a first segmentation network of a corresponding organ outlier segmentation model, and outputting a plurality of reference segmentation results through a plurality of decoders of the first segmentation network; wherein a decoder outputs a reference segmentation result;
inputting the organ image and a target segmentation result in the plurality of reference segmentation results into a second segmentation network of the organ outlier segmentation model, and outputting an outlier segmentation result corresponding to the organ image through a last decoder in a plurality of decoders of the second segmentation network; the target segmentation result is a reference segmentation result output by the last N decoders in the plurality of decoders of the first segmentation network, wherein N is larger than or equal to 1.
3. The method of claim 2, wherein the segmentation network further comprises the same number of encoders as decoders; the inputting the organ image and a target segmentation result of the plurality of reference segmentation results into a second segmentation network of the organ outlier segmentation model comprises:
inputting the organ image and the last target segmentation result into a first encoder of the second segmentation network, and inputting the nth-from-last target segmentation result into an nth encoder of the second segmentation network; wherein n is less than or equal to N.
4. The method according to claim 1, wherein the inputting each organ image into a corresponding organ outlier segmentation model respectively to obtain an outlier segmentation result corresponding to each organ image comprises:
for each organ image, performing first conversion operation on the organ image to obtain a conversion image corresponding to the organ image;
inputting the organ image into the organ abnormal point segmentation model to obtain a first segmentation result corresponding to the organ image; inputting the conversion image into the organ abnormal point segmentation model to obtain a second segmentation result corresponding to the conversion image;
performing second conversion operation on the second segmentation result to obtain a third segmentation result, and taking the first segmentation result and the third segmentation result as abnormal point segmentation results corresponding to the organ image; wherein the second conversion operation is an inverse operation of the first conversion operation.
5. The method according to claim 4, wherein obtaining the segmentation result of the abnormal point of the whole body corresponding to the image to be segmented according to the segmentation result of the abnormal point corresponding to each organ image comprises:
merging the first segmentation results corresponding to each organ image to obtain a first merged result;
merging the third segmentation results corresponding to each organ image to obtain a second merged result;
and obtaining a whole body abnormal point segmentation result corresponding to the image to be segmented according to the first combination result and the second combination result.
6. The method according to claim 5, wherein obtaining the segmentation result of the abnormal point of the whole body corresponding to the image to be segmented according to the first and second combination results comprises:
and connecting channels of the first combination result and the second combination result, and inputting the channel into a whole body abnormal point segmentation model to obtain a whole body abnormal point segmentation result.
7. The method of claim 1, wherein the acquiring of each organ image in the image to be segmented comprises:
respectively inputting the images to be segmented into organ segmentation models of all organs to obtain organ segmentation results of all organs;
and aiming at each organ, acquiring an organ image corresponding to the organ from the image to be segmented according to the organ segmentation result of the organ.
8. The method of claim 1, further comprising:
determining the maximum standard uptake value, the major axis, the minor axis and the volume of each abnormal point region according to the whole body abnormal point segmentation result;
and displaying at least one of the whole body abnormal point segmentation result, the maximum standard uptake value of the abnormal point region, the major axis, the minor axis and the volume to a user terminal.
9. The method of claim 8, further comprising:
receiving an abnormal point deleting instruction and/or an abnormal point adding instruction input by a user through the user terminal;
and updating the whole body abnormal point segmentation result according to the abnormal point deleting instruction and/or the abnormal point adding instruction.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 9.
CN202010756772.3A 2020-07-31 2020-07-31 Method for segmenting medical images and computer-readable storage medium Pending CN111951278A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010756772.3A CN111951278A (en) 2020-07-31 2020-07-31 Method for segmenting medical images and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010756772.3A CN111951278A (en) 2020-07-31 2020-07-31 Method for segmenting medical images and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN111951278A true CN111951278A (en) 2020-11-17

Family

ID=73338945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010756772.3A Pending CN111951278A (en) 2020-07-31 2020-07-31 Method for segmenting medical images and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111951278A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112489046A (en) * 2020-12-25 2021-03-12 上海深博医疗器械有限公司 AI auxiliary measurement volume compensation method and device for flexible scanning image
CN112651974A (en) * 2020-12-29 2021-04-13 上海联影智能医疗科技有限公司 Image segmentation method and system, electronic device and storage medium
CN113674254A (en) * 2021-08-25 2021-11-19 上海联影医疗科技股份有限公司 Medical image abnormal point identification method, equipment, electronic device and storage medium
CN113674254B (en) * 2021-08-25 2024-05-14 上海联影医疗科技股份有限公司 Medical image outlier recognition method, apparatus, electronic device, and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination