CN116934771A - Medical image segmentation method, electronic device and storage medium - Google Patents

Medical image segmentation method, electronic device and storage medium

Info

Publication number
CN116934771A
Authority
CN
China
Prior art keywords
medical image
segmented
target organ
organ tissue
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210344069.0A
Other languages
Chinese (zh)
Inventor
杨君荣
崔晨
石思远
邹寅清
陈俊强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Weiwei Medical Technology Co ltd
Original Assignee
Shanghai Weiwei Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Weiwei Medical Technology Co ltd filed Critical Shanghai Weiwei Medical Technology Co ltd
Priority to CN202210344069.0A
Publication of CN116934771A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10081 Computed x-ray tomography [CT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10072 Tomographic images
    • G06T2207/10088 Magnetic resonance imaging [MRI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20036 Morphological image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20112 Image segmentation details
    • G06T2207/20132 Image cropping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30016 Brain

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a medical image segmentation method, an electronic device and a storage medium. The medical image segmentation method comprises the following steps: downsampling the acquired medical image to be segmented to obtain a low-resolution medical image to be segmented; segmenting the low-resolution medical image to be segmented with a first neural network model and upsampling the segmentation result to obtain an initial target organ tissue mask image; acquiring position information of at least one target organ tissue region of interest from the initial target organ tissue mask image; and segmenting each target organ tissue region of interest in the medical image to be segmented with a second neural network model according to the position information of the at least one target organ tissue region of interest, so as to obtain a final target organ tissue mask image. The invention not only achieves efficient and accurate segmentation of the target organ tissue, but also effectively reduces cumbersome human-computer interaction.

Description

Medical image segmentation method, electronic device and storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a medical image segmentation method, an electronic device, and a storage medium.
Background
The basal nuclei, also known as the basal ganglia, are a collective term for certain nuclear groups deep within the white matter of the cerebral hemispheres, consisting of the caudate nucleus, the putamen, the globus pallidus (collectively referred to as the striatum), the claustrum and the amygdala. Together with the subthalamic nucleus and the substantia nigra, the striatum forms a circuit for the subcortical regulation of movement, and, in coordination with the cerebral cortex and the cerebellum, regulates voluntary movement, muscle tone and postural reflexes. Accurately identifying the various basal nuclei on medical images is therefore of great significance for doctors in treating motor nerve diseases, assisting surgical planning and performing postoperative evaluation.
Existing basal nuclei segmentation is mainly obtained either by manual segmentation by doctors or by automatic registration methods. Manual segmentation by doctors is time-consuming and requires a great deal of experience; segmentation by automatic registration requires iteration and is also time-consuming.
Disclosure of Invention
The invention aims to provide a medical image segmentation method, an electronic device and a storage medium, which can accurately and efficiently segment target organ tissue in an acquired medical image so as to better assist doctors in completing preoperative planning and postoperative evaluation.
To achieve the above object, the present invention provides a medical image segmentation method, comprising:
downsampling the acquired medical image to be segmented to acquire a low-resolution medical image to be segmented;
segmenting the low-resolution medical image to be segmented by adopting a first neural network model and upsampling the segmentation result so as to obtain an initial target organ tissue mask image;
acquiring position information of at least one target organ tissue region of interest according to the initial target organ tissue mask image;
and segmenting each target organ tissue region of interest in the medical image to be segmented by adopting a second neural network model according to the position information of the at least one target organ tissue region of interest so as to obtain a final target organ tissue mask image.
Optionally, before performing downsampling processing on the acquired medical image to be segmented, the method includes:
preprocessing the acquired medical image to be segmented to remove non-target areas in the medical image to be segmented.
Optionally, the preprocessing the acquired medical image to be segmented to remove a non-target area in the medical image to be segmented includes:
Cutting off the acquired medical image to be segmented so as to adjust the pixel value of each pixel point in the medical image to be segmented to be within a preset range;
carrying out normalization processing on pixel values of all pixel points in the medical image to be segmented after the truncation processing;
and segmenting the medical image to be segmented after normalization processing and carrying out morphological operation on the segmented result so as to remove non-target areas in the medical image to be segmented after normalization processing.
Optionally, the performing truncation processing on the acquired medical image to be segmented to adjust a pixel value of each pixel point in the medical image to be segmented to a preset range includes:
setting the pixel value of each pixel point whose pixel value is smaller than a first preset value in the medical image to be segmented to the first preset value, setting the pixel value of each pixel point whose pixel value is larger than a second preset value in the medical image to be segmented to the second preset value, and keeping unchanged the pixel value of each pixel point whose pixel value lies within the range defined by the first preset value and the second preset value, so as to adjust the pixel value of each pixel point in the medical image to be segmented to be within the range defined by the first preset value and the second preset value.
Optionally, the normalizing the pixel value of each pixel point in the medical image to be segmented after the cutting-off processing includes:
and carrying out normalization processing on pixel values of all pixel points in the medical image to be segmented after the truncation processing according to the following formula:
P_i' = (P_i - μ) / σ
wherein P_i' is the pixel value of the pixel point i in the medical image to be segmented after the normalization processing, P_i is the pixel value of the pixel point i in the medical image to be segmented after the truncation processing, μ is the mean of the pixel values of the medical image to be segmented after the truncation processing, and σ is the standard deviation of the pixel values of the medical image to be segmented after the truncation processing.
Optionally, before performing the truncation processing on the acquired medical image to be segmented, the method further includes:
and adjusting the size of the medical image to be segmented to be under the target resolution scale.
Optionally, the segmenting the low-resolution medical image to be segmented by using the first neural network model and upsampling the segmented result to obtain an initial target organ tissue mask image includes:
cutting the low-resolution medical image to be segmented into a plurality of first medical sub-images to be segmented according to a first preset size;
Dividing the plurality of first medical sub-images to be divided by adopting a first neural network model so as to obtain a plurality of first target organ tissue mask sub-images;
and splicing the plurality of first target organ tissue mask sub-images and carrying out up-sampling processing on the spliced result to obtain an initial target organ tissue mask image.
Optionally, the stitching the mask sub-images of the plurality of first target organ tissues includes:
and splicing the plurality of first target organ tissue mask sub-images according to a preset first weight template.
Optionally, the stitching the plurality of mask sub-images of the first target organ tissue according to a preset first weight template includes:
multiplying each pixel point in each first target organ tissue mask sub-image by a corresponding first weight in the first weight template for each first target organ tissue mask sub-image to obtain a corresponding first recalibrated target organ tissue mask sub-image;
and splicing all the mask sub-images of the first recalibration target organ tissue.
Optionally, the acquiring, according to the initial target organ tissue mask image, the position information of the at least one target organ tissue region of interest includes:
Carrying out connected domain analysis on the initial target organ tissue mask image to extract all connected domains;
and acquiring the position information of the corresponding target organ tissue region of interest according to the position information of the circumscribed frame of each connected domain.
Optionally, the segmenting the region of interest of each target organ tissue in the medical image to be segmented by using the second neural network model to obtain a final target organ tissue mask image includes:
cutting each target organ tissue region of interest into a plurality of second medical sub-images to be segmented according to a second preset size;
dividing the plurality of second medical sub-images to be divided corresponding to the target organ tissue region of interest by adopting a second neural network model so as to obtain a plurality of second target organ tissue mask sub-images corresponding to the target organ tissue region of interest;
splicing the plurality of second target organ tissue mask sub-images corresponding to each target organ tissue region of interest to obtain a target organ tissue mask image corresponding to each target organ tissue region of interest;
and merging all the target organ tissue region-of-interest mask images to obtain a final target organ tissue mask image.
Optionally, the stitching the mask sub-images of the plurality of second target organ tissues includes:
and splicing the plurality of second target organ tissue mask sub-images according to a preset second weight template.
To achieve the above object, the present invention further provides an electronic device comprising a processor and a memory, wherein the memory stores a computer program, which when executed by the processor, implements the medical image segmentation method described above.
To achieve the above object, the present invention also provides a readable storage medium having stored therein a computer program which, when executed by a processor, implements the medical image segmentation method described above.
Compared with the prior art, the medical image segmentation method, the electronic device and the storage medium provided by the invention have the following advantages: the method comprises the steps of firstly, carrying out downsampling treatment on an acquired medical image to be segmented to acquire a low-resolution medical image to be segmented; then, a first neural network model is adopted to segment the low-resolution medical image to be segmented, and upsampling processing is carried out on the segmented result so as to obtain an initial target organ tissue mask image; then, according to the initial target organ tissue mask image, acquiring the position information of at least one target organ tissue region of interest; and finally, according to the position information of the at least one target organ tissue region of interest, segmenting each target organ tissue region of interest in the medical image to be segmented by adopting a second neural network model so as to obtain a final target organ tissue mask image. Therefore, the initial target organ tissue mask image is roughly segmented on the low-resolution medical image to be segmented, so that the position information of the target organ tissue interested region is obtained, and then the accurate segmentation is carried out on the corresponding region on the high-resolution medical image to be segmented (namely the medical image to be segmented before downsampling), so that the efficient and accurate segmentation of the target organ tissue (such as the brain basal nerve nuclear cluster) is realized. In addition, the invention can realize the end-to-end algorithm flow, effectively reduce the complicated operation of man-machine interaction, and improve the diagnosis efficiency and accuracy, thereby better assisting doctors in completing preoperative planning and postoperative evaluation.
Drawings
FIG. 1 is a flow chart of a medical image segmentation method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of preprocessing in an embodiment of the invention;
FIG. 3 is a flowchart illustrating a specific process for obtaining mask images of an initial target organ tissue according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for obtaining a final mask image of a target organ tissue according to an embodiment of the present invention;
FIG. 5 is a schematic view of a cut-away view of a medical image (brain image) to be segmented in accordance with an embodiment of the present invention;
FIG. 6 is a schematic diagram of a target organ tissue (basal nuclei) mask image obtained by segmenting the medical image to be segmented shown in FIG. 5 by the image segmentation method provided by the invention;
FIG. 7 is a schematic diagram of a training process of a first neural network model according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a training process of a second neural network model according to an embodiment of the present invention;
FIG. 9 is a block diagram of an electronic device according to an embodiment of the present invention;
wherein, the reference numerals are as follows:
STN nucleus-11; GPi nucleus-12;
a processor-21; a communication interface-22; a memory-23; communication bus-24.
Detailed Description
The medical image segmentation method, the electronic device and the storage medium according to the present invention are described in further detail below with reference to the accompanying drawings and specific embodiments. The advantages and features of the present invention will become more apparent from the following description. It should be noted that the drawings are in a very simplified form and are drawn to an imprecise scale, merely for the purpose of conveniently and clearly aiding the description of the embodiments of the invention. For a better understanding of the objects, features and advantages of the invention, reference may be made to the drawings. It should be understood that the structures, proportions, sizes, etc. shown in the drawings are presented only in conjunction with the contents of this disclosure to aid understanding and reading by those skilled in the art, and are not intended to limit the conditions under which the invention may be implemented; any structural modification, change of proportion or adjustment of size that achieves the same or similar effects and objectives as the present invention shall still fall within the scope of the technical content disclosed by the invention.
It is noted that relational terms such as first and second are used herein solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Furthermore, in the description herein, reference to the terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples, and the various embodiments or examples described in this specification, and the features thereof, may be combined by those skilled in the art without contradiction.
The invention provides a medical image segmentation method, an electronic device and a storage medium, which can accurately and efficiently segment target organ tissue in an acquired medical image so as to better assist doctors in completing preoperative planning and postoperative evaluation. It should be noted that, although the present invention is described by taking the segmentation of the basal nuclei (including the STN nucleus and the GPi nucleus) from brain images as an example, the present invention may also be used to segment other brain tissue from brain images, or other organ tissue from other medical images, as will be understood by those skilled in the art, and the present invention is not limited thereto.
In addition, it should be noted that the medical image segmentation method according to the embodiment of the present invention may be applied to the electronic device according to the embodiment of the present invention, where the electronic device may be a personal computer or a mobile terminal, and the mobile terminal may be a hardware device with any of various operating systems, such as a mobile phone or a tablet computer.
In order to achieve the foregoing, the present invention provides a medical image segmentation method, please refer to fig. 1, which schematically shows a flow chart of the medical image segmentation method according to an embodiment of the present invention. As shown in fig. 1, the medical image segmentation method includes the steps of:
step S100, performing downsampling processing on the acquired medical image to be segmented to acquire a low-resolution medical image to be segmented.
And step S200, segmenting the low-resolution medical image to be segmented by adopting a first neural network model, and carrying out up-sampling processing on the segmented result so as to acquire an initial target organ tissue mask image.
And step S300, acquiring the position information of at least one target organ tissue region of interest according to the initial target organ tissue mask image.
And step S400, segmenting each target organ tissue region of interest in the medical image to be segmented by adopting a second neural network model according to the position information of the at least one target organ tissue region of interest so as to acquire a final target organ tissue mask image.
Because the target organ tissue (such as the target nerve nucleus) occupies only a small portion of the whole image to be segmented, the initial target organ tissue mask image is first roughly segmented on the low-resolution medical image to be segmented so as to acquire the position information of the target organ tissue region of interest, and the corresponding regions on the high-resolution medical image to be segmented (namely the medical image to be segmented before downsampling) are then segmented accurately. Both the segmentation efficiency and the segmentation accuracy can thereby be effectively improved, so that efficient and accurate segmentation of the target organ tissue (such as the brain basal nuclei) can be achieved. In addition, the method first downsamples the acquired medical image to be segmented to acquire a low-resolution medical image to be segmented, and then performs rough segmentation on the low-resolution medical image to be segmented with the first neural network model, which improves the segmentation efficiency and accuracy of the first neural network model and lays a good foundation for acquiring accurate position information of the target organ tissue region of interest. Furthermore, the invention can realize an end-to-end algorithm flow, effectively reduce cumbersome human-computer interaction, and improve diagnosis efficiency and accuracy, thereby better assisting doctors in completing preoperative planning and postoperative evaluation.
In particular, the medical image to be segmented may be a three-dimensional brain image or may be a three-dimensional image of other organs, which is not limited in this regard by the present invention. The size of the medical image to be segmented may be set according to the specific situation, and the present invention is not limited thereto, for example, the size of the medical image to be segmented may be 256×256×176 pixels. The medical image to be segmented can be acquired through an image acquisition device, such as CT, MRI and other image equipment, can be obtained through Internet collection, and can be obtained through scanning by a scanning device.
In an exemplary embodiment, before downsampling the acquired medical image to be segmented, the method comprises:
preprocessing the acquired medical image to be segmented to remove non-target areas in the medical image to be segmented.
Correspondingly, the downsampling processing is performed on the acquired medical image to be segmented, specifically:
and carrying out downsampling treatment on the medical image to be segmented after the non-target area is removed.
Therefore, the acquired medical image to be segmented is preprocessed to remove the non-target area in the medical image to be segmented, so that the accuracy of the subsequent image segmentation can be further improved.
Further, please refer to fig. 2, which schematically illustrates a flowchart of the pretreatment according to an embodiment of the present invention. As shown in fig. 2, the preprocessing the acquired medical image to be segmented to remove the non-target area in the medical image to be segmented includes:
cutting off the acquired medical image to be segmented so as to adjust the pixel value of each pixel point in the medical image to be segmented to be within a preset range;
carrying out normalization processing on pixel values of all pixel points in the medical image to be segmented after the truncation processing;
and segmenting the medical image to be segmented after normalization processing and carrying out morphological operation on the segmented result so as to remove non-target areas in the medical image to be segmented after normalization processing.
Therefore, the medical image to be segmented is cut off, so that the pixel value of each pixel point in the medical image to be segmented can be adjusted to be within a preset range, and the subsequent normalization processing is facilitated.
In an exemplary embodiment, the performing a truncation process on the acquired medical image to be segmented to adjust a pixel value of each pixel point in the medical image to be segmented to be within a preset range includes:
Setting the pixel value of each pixel point whose pixel value is smaller than a first preset value in the medical image to be segmented to the first preset value, setting the pixel value of each pixel point whose pixel value is larger than a second preset value in the medical image to be segmented to the second preset value, and keeping unchanged the pixel value of each pixel point whose pixel value lies within the range defined by the first preset value and the second preset value, so as to adjust the pixel value of each pixel point in the medical image to be segmented to be within the range defined by the first preset value and the second preset value.
Specifically, taking the first preset value as 100 and the second preset value as 800 as an example, the pixel value of the pixel point with the pixel value smaller than 100 in the medical image to be segmented may be set as 100, the pixel value of the pixel point with the pixel value larger than 800 may be set as 800, and the pixel value of the pixel point with the pixel value in the [100,800] range may be kept unchanged, so that the pixel value of each pixel point in the medical image to be segmented may be adjusted to be within the [100,800] range.
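By way of non-limiting illustration only, the truncation step can be sketched in a few lines of Python; the function name and array layout are merely illustrative, and the thresholds 100 and 800 are simply the example values mentioned above.

```python
import numpy as np

def truncate_image(image: np.ndarray, low: float = 100.0, high: float = 800.0) -> np.ndarray:
    """Clamp every voxel into [low, high]: values below low become low,
    values above high become high, and values in between stay unchanged."""
    return np.clip(image, low, high)
```

For example, `truncated = truncate_image(volume)` applied to a volume stored as a NumPy array yields the truncated image used by the subsequent normalization step.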
By carrying out normalization processing on pixel values of all pixel points in the medical image to be segmented after the truncation processing, the occurrence of large change of gradients in the training process of a network model (comprising a first neural network model and a second neural network model) can be effectively avoided, so that the neural network (comprising the first neural network model and the second neural network model) has a better iterative convergence effect. Specifically, the pixel values of each pixel point in the medical image to be segmented after the truncation processing may be normalized according to the following formula:
P_i' = (P_i - μ) / σ
wherein P_i' is the pixel value of the pixel point i in the medical image to be segmented after the normalization processing, P_i is the pixel value of the pixel point i in the medical image to be segmented after the truncation processing, μ is the mean of the pixel values of the medical image to be segmented after the truncation processing, and σ is the standard deviation of the pixel values of the medical image to be segmented after the truncation processing.
By segmenting the medical image to be segmented after normalization processing and performing morphological operation on the segmented result, the non-target region can be separated from the target region, so that the non-target region (namely, the pixel value of the non-target region is set to 0, namely, the non-target region is set to black) in the medical image to be segmented after normalization processing can be removed. Specifically, taking a three-dimensional MRI image of the human brain as an example, as the skull has a higher pixel value relative to brain tissues, seed points can be manually selected to perform region growing to divide the three-dimensional MRI image of the human brain after normalization treatment, and the separation of the skull and the brain tissues can be realized by combining morphological operation. It should be noted that, as understood by those skilled in the art, since the segmented result is a binary image, the result after the morphological operation is also a binary image (the pixel value of the target area is 1, and the pixel value of the non-target area is 0), and the non-target area in the medical image to be segmented after the normalization processing can be set to black by multiplying the obtained medical image to be segmented by the result after the morphological operation (i.e., the pixel value of each pixel point in the binary image after the morphological operation is multiplied by the pixel value of the corresponding pixel point in the medical image to be segmented after the normalization processing). In addition, it should be noted that in other embodiments, other image segmentation methods, such as a watershed segmentation method, may be used to segment the medical image to be segmented after normalization.
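The normalization and non-target-area removal can be illustrated with the following minimal sketch. It is only an approximation of what is described above: instead of region growing from manually selected seed points, it uses a simple intensity threshold plus morphological opening and largest-connected-component selection to build the foreground mask, and the threshold value is an assumed placeholder.

```python
import numpy as np
from scipy import ndimage

def normalize(image: np.ndarray) -> np.ndarray:
    """Z-score normalization: (P_i - mean) / standard deviation over the truncated image."""
    return (image - image.mean()) / image.std()

def remove_non_target(image: np.ndarray, threshold: float = 0.0) -> np.ndarray:
    """Zero out the non-target area using an approximate foreground mask.

    The disclosure describes seeded region growing combined with morphological
    operations; thresholding + morphological opening + keeping the largest
    connected component is used here only as a stand-in for that step.
    """
    mask = image > threshold                      # rough foreground estimate
    mask = ndimage.binary_opening(mask, iterations=2)
    labels, num = ndimage.label(mask)
    if num > 0:
        sizes = ndimage.sum(mask, labels, range(1, num + 1))
        mask = labels == (np.argmax(sizes) + 1)   # keep the largest component
    return image * mask                           # non-target voxels set to 0 (black)
```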
In an exemplary embodiment, before the truncation process is performed on the acquired medical image to be segmented, the method further comprises:
and adjusting the size of the medical image to be segmented to be under the target resolution scale.
Correspondingly, the cutting-off processing is carried out on the acquired medical image to be segmented, specifically:
and cutting off the medical image to be segmented, the size of which is adjusted to the target resolution scale.
Due to the differences between the different medical imaging devices (e.g. MRI scanning devices), the original resolutions of the medical images acquired by the different medical imaging devices (the larger the resolution, the larger the image size) will be slightly different, resulting in the original sizes of the medical images acquired by the different devices also being different, whereby the size of the medical images to be segmented needs to be adjusted to the target resolution scale to meet the input requirements of the first and second neural network models. Specifically, assuming that the original resolution of the acquired medical image to be segmented is origin_spacing, the original size is origin_size, and the target resolution is target_spacing, the target size adjusted to the target resolution scale is target_size:
target_size=origin_spacing*origin_size/target_spacing
It should be noted that, as can be understood by those skilled in the art, when calculating the target size of the medical image to be segmented, the target sizes of the medical image to be segmented in the X direction, the Y direction, and the Z direction are calculated respectively by the above calculation formulas of the target sizes, specifically:
target_size(X)=origin_spacing(X)*origin_size(X)/target_spacing(X)
target_size(Y)=origin_spacing(Y)*origin_size(Y)/target_spacing(Y)
target_size(Z)=origin_spacing(Z)*origin_size(Z)/target_spacing(Z)
wherein target_size (X) represents a target size of the medical image to be segmented in the X direction, origin_spacing (X) represents an original resolution of the medical image to be segmented in the X direction, origin_size (X) represents an original size of the medical image to be segmented in the X direction, and target_spacing (X) represents a target resolution of the medical image to be segmented in the X direction; target_size (Y) represents a target size of the medical image to be segmented in a Y direction, origin_spacing (Y) represents an original resolution of the medical image to be segmented in the Y direction, origin_size (Y) represents an original size of the medical image to be segmented in the Y direction, and target_spacing (Y) represents a target resolution of the medical image to be segmented in the Y direction; target_size (Z) represents a target size of the medical image to be segmented in the Z direction, origin_spacing (Z) represents an original resolution of the medical image to be segmented in the Z direction, origin_size (Z) represents an original size of the medical image to be segmented in the Z direction, and target_spacing (Z) represents a target resolution of the medical image to be segmented in the Z direction.
For how to acquire the target resolution, reference may be made to the following related description, which will not be repeated here.
According to the calculated target size of the medical image to be segmented under the target resolution scale, the acquired medical image to be segmented can be adjusted from the original size to the target size by an interpolation method, such as spline interpolation (tri-linear interpolation), i.e. the acquired medical image to be segmented is adjusted to the target resolution scale. The medical image to be segmented adjusted to the target resolution scale is then downsampled, so that the low-resolution medical image to be segmented is acquired. For ease of description, the medical image to be segmented adjusted to the target resolution scale is hereinafter referred to as the full-resolution medical image to be segmented. Of course, as will be appreciated by those skilled in the art, other interpolation methods may also be employed to adjust the size of the acquired medical image to be segmented to the target size.
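A minimal sketch of the size adjustment and downsampling is given below, assuming the voxel spacings are available as arrays; `scipy.ndimage.zoom` with order=1 stands in for the tri-linear interpolation, and the downsampling factor of 2 is only an illustrative choice, not a value taken from the disclosure.

```python
import numpy as np
from scipy import ndimage

def resample_to_target(image: np.ndarray,
                       origin_spacing: np.ndarray,
                       target_spacing: np.ndarray) -> np.ndarray:
    """Resize the image so its voxel spacing equals target_spacing.

    Per axis: target_size = origin_spacing * origin_size / target_spacing.
    """
    origin_size = np.array(image.shape, dtype=float)
    target_size = origin_spacing * origin_size / target_spacing
    zoom_factors = target_size / origin_size
    return ndimage.zoom(image, zoom_factors, order=1)   # linear (tri-linear-style) interpolation

def downsample(image: np.ndarray, factor: int = 2) -> np.ndarray:
    """Produce the low-resolution image used by the coarse segmentation stage."""
    return ndimage.zoom(image, 1.0 / factor, order=1)
```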
With continued reference to fig. 3, a schematic flowchart of an embodiment of the present invention for obtaining a mask image of an initial target organ tissue is schematically shown. As shown in fig. 3, in an exemplary embodiment, the segmenting the low-resolution medical image to be segmented using the first neural network model and upsampling the segmented result to obtain an initial target organ tissue mask image includes:
Cutting the low-resolution medical image to be segmented into a plurality of first medical sub-images to be segmented according to a first preset size;
dividing the plurality of first medical sub-images to be divided by adopting a first neural network model so as to obtain a plurality of first target organ tissue mask sub-images;
and splicing the plurality of first target organ tissue mask sub-images and carrying out up-sampling processing on the spliced result to obtain an initial target organ tissue mask image.
In particular, the first preset size may be set according to the specific situation; for example, the first preset size is set to 64×64. According to the first preset size, the low-resolution medical image to be segmented is progressively cropped with a first preset step size (e.g. 32), so that a plurality of first medical sub-images to be segmented having the first preset size (e.g. 64×64) are cropped out. Then, the plurality of first medical sub-images to be segmented are respectively segmented by the pre-trained first neural network model to obtain the first target organ tissue mask sub-image corresponding to each first medical sub-image to be segmented (one medical sub-image to be segmented corresponds to one first target organ tissue mask sub-image). Finally, the corresponding initial target organ tissue mask image can be obtained by stitching the plurality of first target organ tissue mask sub-images and upsampling the stitched result (so as to restore the stitched result to the size of the medical image to be segmented before downsampling). It should be noted that, as those skilled in the art will understand, the first preset size may be set according to the specific situation, and the present invention is not limited thereto.
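The sliding-window cropping can be sketched as follows; the patch size of 64 and the step of 32 mirror the example values above, the volume is assumed to be at least as large as one patch along every axis, and the handling of the last patch on each axis is an assumption.

```python
import numpy as np

def crop_patches(volume: np.ndarray, patch: int = 64, step: int = 32):
    """Yield (start_index, sub_volume) pairs that cover the whole volume.

    Patches are taken every `step` voxels along each axis; the last patch on
    each axis is shifted back so that it still fits inside the volume.
    """
    starts = []
    for dim in volume.shape:
        last = max(dim - patch, 0)
        s = list(range(0, last + 1, step))
        if s[-1] != last:
            s.append(last)
        starts.append(s)
    for x in starts[0]:
        for y in starts[1]:
            for z in starts[2]:
                yield (x, y, z), volume[x:x + patch, y:y + patch, z:z + patch]
```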
In an exemplary embodiment, the stitching the plurality of first target organ tissue mask sub-images includes:
and splicing the plurality of first target organ tissue mask sub-images according to a preset first weight template.
Specifically, since the cropped regions overlap and the prediction of a neural network is generally considered more accurate in the center of a patch than at its edges, a first weight template of the same size as the first target organ tissue mask sub-image (i.e. the first preset size), for example 64×64, is defined. In the first weight template, the first weights follow a Gaussian distribution, i.e. the first weight is large in the center region and small in the edge region. Each first target organ tissue mask sub-image is multiplied by the first weight template (namely, each pixel point in the first target organ tissue mask sub-image is multiplied by the corresponding first weight) to obtain a corresponding first recalibrated target organ tissue mask sub-image; all the first recalibrated target organ tissue mask sub-images are then stitched, and the stitched result is upsampled to obtain the initial target organ tissue mask image.
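The weighted stitching can be sketched as below. Accumulating weighted patch predictions and dividing by the accumulated weights is a common way to blend overlapping patches; the Gaussian sigma and the final 0.5 threshold are assumptions, since the disclosure describes the weighting only qualitatively.

```python
import numpy as np

def gaussian_weight_template(patch: int = 64, sigma_ratio: float = 0.125) -> np.ndarray:
    """3D Gaussian weights: large in the patch centre, small at the edges."""
    coords = np.arange(patch) - (patch - 1) / 2.0
    g = np.exp(-(coords ** 2) / (2.0 * (patch * sigma_ratio) ** 2))
    return g[:, None, None] * g[None, :, None] * g[None, None, :]

def stitch(pred_patches, volume_shape, patch: int = 64) -> np.ndarray:
    """Blend overlapping patch predictions into one mask.

    `pred_patches` is an iterable of ((x, y, z), probability_patch) pairs.
    Each patch is multiplied by the weight template, accumulated, and the
    accumulated prediction is divided by the accumulated weights before
    thresholding into a binary mask.
    """
    weight = gaussian_weight_template(patch)
    acc = np.zeros(volume_shape, dtype=np.float32)
    norm = np.zeros(volume_shape, dtype=np.float32)
    for (x, y, z), p in pred_patches:
        acc[x:x + patch, y:y + patch, z:z + patch] += p * weight
        norm[x:x + patch, y:y + patch, z:z + patch] += weight
    return (acc / np.maximum(norm, 1e-8) > 0.5).astype(np.uint8)
```

The same helper can also serve for the second weight template by passing the second-stage patch predictions.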
In an exemplary embodiment, the acquiring the location information of the at least one target organ tissue region of interest according to the initial target organ tissue mask image includes:
carrying out connected domain analysis on the initial target organ tissue mask image to extract all connected domains;
and acquiring the position information of the corresponding target organ tissue region of interest according to the position information of the circumscribed frame of each connected domain.
Since the foreground region (i.e., the white region, i.e., the region with the pixel value of 1) in the initial target organ tissue mask image is the possible region where the target organ tissue is located, by performing connected domain analysis on the initial target organ tissue mask image to extract all the connected domains, all the possible regions where the target organ tissue is located can be extracted, and then the position information of the region of interest of the target organ tissue can be obtained through the position information of the circumscribed frame (e.g., circumscribed rectangular frame) of each connected domain, thereby finding the corresponding region of interest of the target organ tissue on the medical image to be segmented (the medical image to be segmented with full resolution) before downsampling according to the position information of the region of interest of the target organ tissue. It should be noted that, as those skilled in the art will understand, one connected domain corresponds to a target organ tissue region of interest.
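A sketch of the region-of-interest localization using `scipy.ndimage`: `label` extracts the connected domains and `find_objects` returns their circumscribed boxes; the margin added around each box is an assumed detail. Because the initial mask has already been upsampled back to the size of the medical image to be segmented before downsampling, the returned slices can be used directly to crop the corresponding regions from that image.

```python
import numpy as np
from scipy import ndimage

def roi_positions(initial_mask: np.ndarray, margin: int = 8):
    """Return one bounding box (as a tuple of slices) per connected domain of the mask."""
    labels, num = ndimage.label(initial_mask > 0)
    boxes = []
    for sl in ndimage.find_objects(labels):
        if sl is None:
            continue
        # expand each box by a small margin, clipped to the image bounds
        boxes.append(tuple(
            slice(max(s.start - margin, 0), min(s.stop + margin, dim))
            for s, dim in zip(sl, initial_mask.shape)
        ))
    return boxes
```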
With continued reference to fig. 4, a schematic flowchart of a method for obtaining a final mask image of a target organ tissue according to an embodiment of the present invention is schematically shown. As shown in fig. 4, in an exemplary embodiment, the segmenting the target organ tissue regions of interest in the medical image to be segmented using the second neural network model to obtain the final target organ tissue mask image includes:
cutting each target organ tissue region of interest into a plurality of second medical sub-images to be segmented according to a second preset size;
dividing the plurality of second medical sub-images to be divided corresponding to the target organ tissue region of interest by adopting a second neural network model so as to obtain a plurality of second target organ tissue mask sub-images corresponding to the target organ tissue region of interest;
splicing the plurality of second target organ tissue mask sub-images corresponding to each target organ tissue region of interest to obtain a target organ tissue mask image corresponding to each target organ tissue region of interest;
and merging all the target organ tissue region-of-interest mask images to obtain a final target organ tissue mask image.
In particular, the second preset size may be set according to circumstances, for example, the second preset size is set to 64 x 64, thus, for each target organ tissue region of interest, according to a second preset size, the target organ tissue region of interest is progressively cropped in a second predetermined step size (e.g., 32) to form a plurality of second medical sub-images to be segmented having a second predetermined size (e.g., 64 x 64). And respectively segmenting a plurality of second medical sub-images to be segmented corresponding to the target organ tissue region of interest through a pre-trained second neural network model aiming at each target organ tissue region of interest, so as to obtain a second target organ tissue mask sub-image corresponding to each second medical sub-image to be segmented corresponding to the target organ tissue region of interest. And aiming at each target organ tissue region of interest, splicing a plurality of second target organ tissue mask sub-images corresponding to the target organ tissue region of interest, and obtaining a target organ tissue mask image corresponding to the target organ tissue region of interest. And finally, merging all target organ tissue interested mask images to obtain the target organ tissue mask image corresponding to the medical image to be segmented before downsampling. It should be noted that, as will be understood by those skilled in the art, if the medical image to be segmented before downsampling is the medical image to be segmented adjusted to the target resolution scale (i.e. the full resolution medical image to be segmented), the size of the target organ tissue mask image needs to be restored to the original size of the acquired medical image to be segmented, specifically, the size of the target organ tissue mask image corresponding to the medical image to be segmented before downsampling may be adjusted to the original size of the acquired medical image to be segmented by adopting the nearest neighbor interpolation method, so as to obtain the final target organ tissue segmented image.
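Putting the second stage together, the sketch below merges the per-ROI masks with an element-wise maximum and restores the originally acquired image size with nearest-neighbour interpolation; `segment_roi` is a hypothetical callable standing in for "crop into patches, run the second neural network model, stitch with the second weight template", and all names are illustrative.

```python
import numpy as np
from scipy import ndimage

def fine_segmentation(full_res_image, roi_boxes, segment_roi, original_shape):
    """Second-stage segmentation: per-ROI masks merged into one final mask.

    `segment_roi` is assumed to return a binary mask the same size as the
    region of interest it receives.
    """
    final = np.zeros(full_res_image.shape, dtype=np.uint8)
    for box in roi_boxes:
        roi_mask = segment_roi(full_res_image[box])
        final[box] = np.maximum(final[box], roi_mask)   # merge all ROI masks
    # restore the mask to the originally acquired image size (nearest neighbour)
    zoom = np.array(original_shape, dtype=float) / np.array(final.shape, dtype=float)
    return ndimage.zoom(final, zoom, order=0)
```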
In an exemplary embodiment, the stitching the plurality of second target organ tissue mask sub-images includes:
and splicing the plurality of second target organ tissue mask sub-images according to a preset second weight template.
Specifically, the size of the second weight template is the same as the size of the second target organ tissue mask sub-image (i.e. the second preset size), for example 64×64. In the second weight template, the second weights follow a Gaussian distribution, i.e. the second weight is large in the central region and small in the edge region. For each target organ tissue region of interest, each second target organ tissue mask sub-image corresponding to that region of interest is multiplied by the second weight template (that is, each pixel point in the second target organ tissue mask sub-image is multiplied by the corresponding second weight) to obtain a corresponding second recalibrated target organ tissue mask sub-image, and all the second recalibrated target organ tissue mask sub-images are then stitched to obtain the target organ tissue region-of-interest mask image corresponding to that region of interest.
Referring to fig. 5 and 6, fig. 5 schematically shows a cross-sectional view of a medical image (brain image) to be segmented according to a specific example of the present invention, and fig. 6 schematically shows a target organ tissue (basal nuclei) mask image obtained by segmenting the medical image to be segmented shown in fig. 5 with the image segmentation method provided by the invention. As shown in fig. 5 and 6, by segmenting the acquired brain image with the image segmentation method provided by the present invention, brain basal nuclei such as the STN nucleus 11 and the GPi nucleus 12 can be segmented accurately and efficiently.
The specific training process of the first neural network model and the second neural network model is described below.
Specifically, the basic framework of both the first neural network model and the second neural network model is a 3D V-Net network structure. The V-Net network structure adopts an encoder-decoder structure, in which the encoder consists of convolution layers and pooling layers and the decoder consists of convolution layers and deconvolution layers. The V-Net adopts a residual learning structure in its convolution layers, namely, the input of a convolution layer is added to the final output of that convolution layer before the subsequent calculation is carried out, thereby alleviating the problem of vanishing gradients. The pooling layers of the V-Net network adopt convolutional pooling, which can effectively reduce the video memory occupied during training.
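As a rough PyTorch illustration of the residual convolution block and the convolutional pooling mentioned above; the channel counts, kernel sizes, activation choice and the use of a stride-2 convolution are assumptions rather than the exact V-Net configuration used by the disclosure.

```python
import torch
import torch.nn as nn

class ResidualConvBlock(nn.Module):
    """Two 3D convolutions whose output is added to the block input (residual learning)."""

    def __init__(self, channels: int = 16, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size, padding=kernel_size // 2),
            nn.PReLU(),
            nn.Conv3d(channels, channels, kernel_size, padding=kernel_size // 2),
        )
        self.act = nn.PReLU()
        # "convolutional pooling": a strided convolution halves the spatial size
        self.down = nn.Conv3d(channels, channels * 2, kernel_size=2, stride=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.conv(x) + x)   # residual addition mitigates vanishing gradients
        return self.down(x)
```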
With continued reference to fig. 7, a schematic diagram of a training process of the first neural network model according to an embodiment of the present invention is shown. As shown in fig. 7, the first neural network model is specifically trained by:
step A1, acquiring a first training sample, wherein the first training sample comprises a first medical sample image and a first medical label image corresponding to the first medical sample image.
And B1, preprocessing the first medical sample image and the first medical label image in the first training sample to cut the first medical sample image into a plurality of first medical sample sub-images with first preset sizes, and cutting the first medical label image into a plurality of first medical label sub-images with first preset sizes, wherein the first medical label sub-images are in one-to-one correspondence with the first medical sample sub-images.
And C1, amplifying the first medical sample sub-image and the corresponding first medical label sub-image to obtain an amplified first medical sample sub-image and a corresponding first medical label sub-image.
And D1, training a first neural network model built in advance according to the amplified first medical sample sub-image and the corresponding first medical label sub-image until a first preset training ending condition is met.
Therefore, the first medical sample image is cut into a plurality of first medical sample sub-images with a first preset size (for example, 64 multiplied by 64), the first medical label image is correspondingly cut into a plurality of first medical label sub-images with a first preset size (for example, 64 multiplied by 64), each first training sample can be cut into a small area to be input into the first neural network model, so that training of the first neural network model can be performed, and the problem that the whole first training sample data cannot be directly put in due to the limitation of a display memory of a processor can be effectively avoided.
In addition, since the data of the first training sample is limited, and the deep learning needs to learn on certain data to have certain robustness, in order to increase the robustness, a data amplification operation is needed to increase the generalization capability of the first neural network model. Specifically, the augmentation of the training data may be accomplished by performing a random rigid transformation on the first medical sample sub-image and the corresponding first medical label sub-image, such as rotation along the Z-axis (90 °, 180 °, 270 °), left-right mirroring, adding gaussian noise, etc.
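The augmentation operations listed above can be sketched as simple array transforms applied identically to the sample sub-image and its label sub-image (noise is added to the sample only); which two axes span the plane perpendicular to Z, and the noise standard deviation, are assumptions.

```python
import numpy as np

def augment(sample: np.ndarray, label: np.ndarray, rng: np.random.Generator):
    """Random rigid transforms: Z-axis rotation (90/180/270 deg), mirroring, Gaussian noise."""
    k = rng.integers(0, 4)                       # 0, 90, 180 or 270 degrees
    sample = np.rot90(sample, k, axes=(0, 1))    # rotate in the plane perpendicular to Z
    label = np.rot90(label, k, axes=(0, 1))
    if rng.random() < 0.5:                       # left-right mirroring
        sample = sample[::-1, :, :]
        label = label[::-1, :, :]
    sample = sample + rng.normal(0.0, 0.05, sample.shape)   # Gaussian noise on the sample only
    return sample.copy(), label.copy()
```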
In an exemplary embodiment, the preprocessing the first medical sample image and the first medical label image in the first training sample to crop the first medical sample image and the first medical label image into a plurality of first medical sample sub-images and first medical label sub-images having a first preset size includes:
Adjusting the first medical sample image and the first medical label image in the first training sample to a target resolution scale;
downsampling the first medical sample image and the first medical label image adjusted to a target resolution scale to obtain a low-resolution first medical sample image and a corresponding low-resolution first medical label image;
performing truncation processing on the low-resolution first medical sample image so as to adjust pixel values of all pixel points in the low-resolution first medical sample image to be within a preset range;
normalizing pixel values of all pixel points in the truncated first medical sample image with low resolution;
dividing the normalized low-resolution first medical sample image and carrying out morphological operation on the divided result so as to remove non-target areas in the normalized low-resolution first medical sample image;
clipping the low-resolution first medical sample image with the non-target area removed and the corresponding low-resolution first medical label image to clip the low-resolution first medical sample image with the non-target area removed into a plurality of first medical sample sub-images with a first preset size, and clipping the corresponding low-resolution first medical label image into a plurality of first medical label sub-images with a first preset size.
Specifically, the original resolution values of the X-direction, Y-direction and Z-direction of the first medical sample image of each first training sample may be counted and sequenced, then the median value of the X-direction, Y-direction and Z-direction resolutions is selected as the target resolution value of the corresponding direction, and then the target size of each first medical sample image is calculated according to the original size, the original resolution and the calculated target resolution of each first medical sample image. For how to calculate the target size according to the target resolution, reference may be made to the above description, so that a detailed description thereof will be omitted. Specifically, the first medical sample image may be adjusted to the target size by a spline interpolation (tri-linear interpolation) and the corresponding first medical label image may be adjusted to the target size by a nearest neighbor interpolation. Of course, other interpolation methods may also be employed to adjust the first medical sample image and the corresponding first medical label image to a target size, as will be appreciated by those skilled in the art. Further, reference may be made to the above description regarding how to perform the truncation process, the normalization process, and the step of removing the non-target region, so that a detailed description thereof will not be provided.
Further, when training the first neural network model, the loss function L adopted by the invention is determined by a Dice loss L1 and a binary cross-entropy loss L2. Specifically:
L = L1 + L2
wherein the binary cross-entropy loss L2 can improve the unstable training fluctuation that occurs when only the Dice loss is used as the loss function, and the weighted Dice loss L1 can improve the situation in which the target organ tissue (such as the target basal nuclei) occupies very little of the image relative to the background; for example, the STN nucleus may be given a weight between [1, 2] and the GPi nucleus a weight between [2, 3]. Specifically, when the first neural network model is trained, the number of samples selected for one training iteration (batch size) may be set to 2, and this value may be gradually increased as long as the GPU video memory has not reached its upper limit; the number of training epochs may be 200.
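The composite loss L = L1 + L2 can be sketched in PyTorch as follows; the per-class weights correspond to the values mentioned above (e.g. between [1, 2] for the STN nucleus and [2, 3] for the GPi nucleus), while the smoothing constant and the exact weighting scheme are assumptions.

```python
import torch
import torch.nn.functional as F

def dice_bce_loss(pred: torch.Tensor,
                  target: torch.Tensor,
                  class_weights: torch.Tensor,
                  smooth: float = 1e-5) -> torch.Tensor:
    """L = weighted Dice loss (L1) + binary cross-entropy loss (L2).

    `pred` holds per-class foreground probabilities with shape (N, C, D, H, W),
    `target` holds the matching binary labels, and `class_weights` has shape (C,).
    """
    dims = (0, 2, 3, 4)                                   # reduce over batch and space
    intersection = (pred * target).sum(dims)
    union = pred.sum(dims) + target.sum(dims)
    dice = (2.0 * intersection + smooth) / (union + smooth)
    l1 = (class_weights * (1.0 - dice)).sum() / class_weights.sum()   # weighted Dice loss
    l2 = F.binary_cross_entropy(pred, target)                          # binary cross-entropy loss
    return l1 + l2
```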
Please continue to refer to fig. 8, which schematically illustrates a training flow chart of the second neural network model according to an embodiment of the present invention. As shown in fig. 8, the second neural network model is specifically trained by:
and A2, acquiring a second training sample, wherein the second training sample comprises a second medical sample image and a second medical label image corresponding to the second medical sample image.
And B2, preprocessing the second medical sample image and the second medical label image in the second training sample to cut the second medical sample image into a plurality of second medical sample sub-images with second preset sizes, and cutting the second medical label image into a plurality of second medical label sub-images with second preset sizes, wherein the second medical label sub-images are in one-to-one correspondence with the second medical sample sub-images.
And C2, amplifying the second medical sample sub-image and the corresponding second medical label sub-image to obtain an amplified second medical sample sub-image and the corresponding second medical label sub-image.
And D2, training a second neural network model built in advance according to the amplified second medical sample sub-image and the corresponding second medical label sub-image until a second preset training ending condition is met.
Specifically, according to the method described above, an acquired medical sample image (preferably one adjusted to the target resolution scale) is segmented by the pre-trained first neural network model to obtain the corresponding medical label image. Connected domain analysis is then performed on the obtained medical label image to extract all connected domains. Next, according to the position information of the circumscribed frame of each connected domain, the corresponding target organ tissue region of interest is cropped from the medical sample image (preferably the one adjusted to the target resolution scale), and the corresponding target organ tissue region of interest is cropped from the medical label image. The region of interest cropped from the sample image can then be used as a second medical sample sub-image, and the corresponding region-of-interest mask cropped from the label image serves as the corresponding second medical label sub-image.
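A hedged sketch of this region-of-interest extraction step follows: connected-domain analysis on a coarse binary mask, a bounding box per connected domain, and cropping of the corresponding region from both the sample image and the label volume. The use of scipy.ndimage and the margin parameter are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_roi_pairs(image, label_mask, margin=4):
    rois = []
    labeled, num = ndimage.label(label_mask > 0)        # all connected domains
    for sl in ndimage.find_objects(labeled):             # bounding box per domain
        # expand each bounding box by a small margin, clipped to the volume
        padded = tuple(
            slice(max(s.start - margin, 0), min(s.stop + margin, dim))
            for s, dim in zip(sl, image.shape)
        )
        rois.append((image[padded], label_mask[padded], padded))
    return rois  # list of (sample sub-image, label sub-image, position info)
```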
Therefore, the second medical sample image is cut into a plurality of second medical sample sub-images of a second preset size (for example, 64 × 64), and the second medical label image is correspondingly cut into a plurality of second medical label sub-images of the same second preset size, so that each second training sample can be fed into the second neural network model as small regions for training. This effectively avoids the problem that the entire second training sample cannot be input at once because of the limited video memory of the processor.
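Below is a hedged sketch of cutting a region of interest into fixed-size patches so that each piece fits in GPU memory. The helper name and padding strategy are assumptions; the patent only states that a second preset size (for example, 64 × 64) is used.

```python
import numpy as np

def cut_into_patches(volume, patch_size):
    # pad the volume up to a multiple of the patch size, then tile it
    pads = [(0, (-dim) % p) for dim, p in zip(volume.shape, patch_size)]
    padded = np.pad(volume, pads, mode="constant")
    patches, origins = [], []
    grid = [padded.shape[i] // patch_size[i] for i in range(volume.ndim)]
    for corner in np.ndindex(*grid):
        sl = tuple(slice(c * p, (c + 1) * p) for c, p in zip(corner, patch_size))
        patches.append(padded[sl])
        origins.append(tuple(s.start for s in sl))   # keep position for stitching
    return patches, origins, padded.shape
```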
In addition, since the amount of data in the second training samples is limited, while deep learning requires a certain amount of data to achieve robustness, a data augmentation operation is needed to improve the generalization capability of the second neural network model. Specifically, the augmentation of the training data may be accomplished by applying random rigid transformations to the second medical sample sub-images and the corresponding second medical label sub-images, such as rotation about the Z-axis (90°, 180°, 270°), left-right mirroring, and adding Gaussian noise.
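A minimal sketch of this augmentation is given below: random Z-axis rotations by 90°/180°/270° and left-right mirroring applied identically to the sample sub-image and its label sub-image, plus Gaussian noise applied to the sample image only. The (Z, Y, X) axis order, the probabilities and the noise scale are assumptions.

```python
import numpy as np

def augment_pair(image, label, rng=np.random.default_rng()):
    # rotation about the Z-axis, i.e. in the (Y, X) plane, by k * 90 degrees
    k = rng.integers(0, 4)
    image, label = np.rot90(image, k, axes=(1, 2)), np.rot90(label, k, axes=(1, 2))

    # left-right mirroring with probability 0.5
    if rng.random() < 0.5:
        image, label = np.flip(image, axis=2), np.flip(label, axis=2)

    # additive Gaussian noise on the intensity image only
    if rng.random() < 0.5:
        image = image + rng.normal(0.0, 0.05, size=image.shape)

    return np.ascontiguousarray(image), np.ascontiguousarray(label)
```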
In particular, for how to cut the second medical sample image into a plurality of second medical sample sub-images of the second preset size and how to correspondingly cut the second medical label image into a plurality of second medical label sub-images of the second preset size, reference may be made to the above description, so a detailed description thereof is not repeated here.
Further, when training the second neural network model, the loss function L adopted by the present invention is likewise determined by a Dice loss L1 and a binary cross entropy loss L2. Specifically:

L = L1 + L2

wherein the binary cross entropy loss L2 can be used to mitigate the unstable training fluctuation that occurs when only the Dice loss is used as the loss function, and the weighted Dice loss L1 can mitigate the situation in which the target organ tissue (for example, the target basal ganglia nuclei) occupies only a small proportion of the image relative to the background; for example, the weight of the STN nucleus may be set between [1, 2] and the weight of the GPi nucleus between [2, 3]. Specifically, when the second neural network model is trained, the number of samples selected for one training iteration (the batch size) may be set to 2, and this value may be gradually increased as long as the GPU video memory has not reached its upper limit; the number of training epochs may be 200.
Based on the same inventive concept, the present invention further provides an electronic device. Referring to fig. 9, a schematic structural block diagram of the electronic device provided by an embodiment of the present invention is shown. As shown in fig. 9, the electronic device comprises a processor 21 and a memory 23, the memory 23 having stored thereon a computer program which, when executed by the processor 21, implements the medical image segmentation method described above. Because the electronic device provided by the invention and the medical image segmentation method provided by the invention belong to the same inventive concept, the electronic device has all the advantages of the medical image segmentation method, which are therefore not repeated here.
As shown in fig. 9, the electronic device further comprises a communication interface 22 and a communication bus 24, wherein the processor 21, the communication interface 22 and the memory 23 communicate with each other via the communication bus 24. The communication bus 24 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 24 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one bold line is shown in the figure, but this does not mean that there is only one bus or only one type of bus. The communication interface 22 is used for communication between the electronic device and other devices.
The processor 21 of the present invention may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 21 is the control center of the electronic device and connects the various parts of the entire electronic device using various interfaces and lines.
The memory 23 may be used to store the computer program, and the processor 21 implements various functions of the electronic device by running or executing the computer program stored in the memory 23 and invoking data stored in the memory 23.
The memory 23 may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The present invention also provides a readable storage medium having stored therein a computer program which, when executed by a processor, implements the medical image segmentation method described above. Because the readable storage medium provided by the invention and the medical image segmentation method provided by the invention belong to the same inventive concept, the readable storage medium has all the advantages of the medical image segmentation method, which are therefore not repeated here.
The readable storage media of embodiments of the present invention may take the form of any combination of one or more computer-readable media. The readable medium may be a computer readable signal medium or a computer readable storage medium. The computer readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: an electrical connection having one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Computer program code for carrying out operations of the present invention may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the latter case, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
In summary, compared with the prior art, the medical image segmentation method, the electronic device and the storage medium provided by the invention have the following advantages. First, the acquired medical image to be segmented is downsampled to obtain a low-resolution medical image to be segmented; then, a first neural network model is used to segment the low-resolution medical image to be segmented and the segmentation result is upsampled to obtain an initial target organ tissue mask image; next, the position information of at least one target organ tissue region of interest is acquired according to the initial target organ tissue mask image; finally, according to the position information of the at least one target organ tissue region of interest, each target organ tissue region of interest in the medical image to be segmented is segmented by a second neural network model to obtain the final target organ tissue mask image. In this way, an initial target organ tissue mask image is coarsely segmented on the low-resolution medical image to be segmented so as to obtain the position information of each target organ tissue region of interest, and the corresponding regions are then finely segmented on the high-resolution medical image to be segmented (that is, the medical image to be segmented before downsampling), thereby achieving efficient and accurate segmentation of the target organ tissue (for example, the basal ganglia nuclei of the brain). In addition, the invention can realize an end-to-end algorithm flow, effectively reduce the tedious operations of human-computer interaction, and improve diagnostic efficiency and accuracy, thereby better assisting doctors in completing preoperative planning and postoperative evaluation.
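To make the coarse-to-fine flow summarized above concrete, the following end-to-end sketch strings the steps together. The model objects (coarse_model, fine_model), the downsampling factor, the margin and the helper calls are placeholders standing in for the first and second neural network models and the steps described earlier; they are assumptions, not the patent's actual code.

```python
import numpy as np
from scipy import ndimage
from scipy.ndimage import zoom

def segment(image, coarse_model, fine_model, down_factor=2, margin=4):
    # 1) coarse stage on a downsampled (low-resolution) image
    low_res = zoom(image, 1.0 / down_factor, order=1)
    coarse_low = coarse_model(low_res)                    # initial mask, low resolution
    coarse_mask = zoom(coarse_low,
                       [i / c for i, c in zip(image.shape, coarse_low.shape)],
                       order=0)                           # upsample back to full size

    # 2) locate each target organ tissue region of interest
    labeled, _ = ndimage.label(coarse_mask > 0)
    final_mask = np.zeros_like(image, dtype=np.uint8)
    for sl in ndimage.find_objects(labeled):
        roi = tuple(slice(max(s.start - margin, 0), min(s.stop + margin, d))
                    for s, d in zip(sl, image.shape))
        # 3) fine stage on the corresponding full-resolution region of interest
        final_mask[roi] = np.maximum(final_mask[roi], fine_model(image[roi]))
    return final_mask
```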
It should be noted that the apparatus and methods disclosed in the embodiments herein may be implemented in other ways, and the apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments herein. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or by combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments herein may be integrated together to form a single part, or the modules may exist alone, or two or more modules may be integrated to form a single part.
The above description is only illustrative of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention, and any alterations and modifications made by those skilled in the art based on the above disclosure shall fall within the scope of the present invention. It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, the present invention is intended to include such modifications and alterations insofar as they come within the scope of the invention or the equivalents thereof.

Claims (14)

1. A medical image segmentation method, comprising:
downsampling the acquired medical image to be segmented to acquire a low-resolution medical image to be segmented;
dividing the low-resolution medical image to be divided by adopting a first neural network model and carrying out up-sampling treatment on a divided result so as to obtain an initial target organ tissue mask image;
Acquiring position information of at least one target organ tissue region of interest according to the initial target organ tissue mask image;
and dividing the target organ tissue region of interest in the medical image to be divided by adopting a second neural network model according to the position information of the at least one target organ tissue region of interest so as to obtain a final target organ tissue mask image.
2. The medical image segmentation method according to claim 1, characterized in that before downsampling the acquired medical image to be segmented, the method comprises:
preprocessing the acquired medical image to be segmented to remove non-target areas in the medical image to be segmented.
3. The medical image segmentation method according to claim 2, wherein the preprocessing of the acquired medical image to be segmented to remove non-target regions in the medical image to be segmented comprises:
cutting off the acquired medical image to be segmented so as to adjust the pixel value of each pixel point in the medical image to be segmented to be within a preset range;
carrying out normalization processing on pixel values of all pixel points in the medical image to be segmented after the truncation processing;
And segmenting the medical image to be segmented after normalization processing and carrying out morphological operation on the segmented result so as to remove non-target areas in the medical image to be segmented after normalization processing.
4. A medical image segmentation method according to claim 3, wherein the performing a truncation process on the acquired medical image to be segmented to adjust the pixel value of each pixel point in the medical image to be segmented to be within a preset range includes:
setting the pixel value of the pixel point of which the pixel value is smaller than a first preset value in the medical image to be segmented as the first preset value, setting the pixel value of the pixel point of which the pixel value is larger than a second preset value in the medical image to be segmented as the second preset value, and keeping the pixel value of the pixel point of which the pixel value is positioned in a range limited by the first preset value and the second preset value unchanged, so as to adjust the pixel value of each pixel point in the medical image to be segmented to be within the range limited by the first preset value and the second preset value.
5. A medical image segmentation method according to claim 3, wherein the normalizing the pixel value of each pixel point in the medical image to be segmented after the truncation processing includes:
And carrying out normalization processing on pixel values of all pixel points in the medical image to be segmented after the truncation processing according to the following formula:
P'_i = (P_i - μ) / σ

wherein P'_i is the pixel value of the pixel point i in the medical image to be segmented after normalization processing, P_i is the pixel value of the pixel point i in the medical image to be segmented after the truncation processing, μ is the mean of the pixel values of the medical image to be segmented after the truncation processing, and σ is the standard deviation of the pixel values of the medical image to be segmented after the truncation processing.
6. A medical image segmentation method according to claim 3, wherein prior to the truncation of the acquired medical image to be segmented, the method further comprises:
and adjusting the size of the medical image to be segmented to be under the target resolution scale.
7. The medical image segmentation method according to claim 1, wherein the segmenting the low-resolution medical image to be segmented using the first neural network model and upsampling the segmented result to obtain an initial target organ tissue mask image comprises:
cutting the low-resolution medical image to be segmented into a plurality of first medical sub-images to be segmented according to a first preset size;
Dividing the plurality of first medical sub-images to be divided by adopting a first neural network model so as to obtain a plurality of first target organ tissue mask sub-images;
and splicing the plurality of first target organ tissue mask sub-images and carrying out up-sampling processing on the spliced result to obtain an initial target organ tissue mask image.
8. The medical image segmentation method as set forth in claim 7, wherein the stitching of the plurality of first target organ tissue mask sub-images comprises:
and splicing the plurality of first target organ tissue mask sub-images according to a preset first weight template.
9. The medical image segmentation method according to claim 8, wherein the stitching the plurality of first target organ tissue mask sub-images according to a preset first weight template comprises:
multiplying each pixel point in each first target organ tissue mask sub-image by a corresponding first weight in the first weight template for each first target organ tissue mask sub-image to obtain a corresponding first recalibrated target organ tissue mask sub-image;
And splicing all the mask sub-images of the first recalibration target organ tissue.
10. The medical image segmentation method according to claim 1, wherein the acquiring the location information of the at least one target organ tissue region of interest from the initial target organ tissue mask image comprises:
carrying out connected domain analysis on the initial target organ tissue mask image to extract all connected domains;
and acquiring the position information of the corresponding target organ tissue region of interest according to the position information of the circumscribed frame of each connected domain.
11. The medical image segmentation method according to claim 1, wherein the segmenting the target organ tissue regions of interest in the medical image to be segmented using the second neural network model to obtain the final target organ tissue mask image comprises:
cutting each target organ tissue region of interest into a plurality of second medical sub-images to be segmented according to a second preset size;
dividing the plurality of second medical sub-images to be divided corresponding to the target organ tissue region of interest by adopting a second neural network model so as to obtain a plurality of second target organ tissue mask sub-images corresponding to the target organ tissue region of interest;
Splicing the plurality of second target organ tissue mask sub-images corresponding to each target organ tissue region of interest to obtain a target organ tissue mask image corresponding to each target organ tissue region of interest;
and merging all the target organ tissue interested mask images to obtain a final target organ tissue mask image.
12. The medical image segmentation method as set forth in claim 11, wherein the stitching of the plurality of second target organ tissue mask sub-images comprises:
and splicing the plurality of second target organ tissue mask sub-images according to a preset second weight template.
13. An electronic device comprising a processor and a memory, the memory having stored thereon a computer program which, when executed by the processor, implements the method of any of claims 1 to 12.
14. A readable storage medium, characterized in that the readable storage medium has stored therein a computer program which, when executed by a processor, implements the method of any one of claims 1 to 12.
CN202210344069.0A 2022-03-31 2022-03-31 Medical image segmentation method, electronic device and storage medium Pending CN116934771A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210344069.0A CN116934771A (en) 2022-03-31 2022-03-31 Medical image segmentation method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210344069.0A CN116934771A (en) 2022-03-31 2022-03-31 Medical image segmentation method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN116934771A true CN116934771A (en) 2023-10-24

Family

ID=88376034

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210344069.0A Pending CN116934771A (en) 2022-03-31 2022-03-31 Medical image segmentation method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN116934771A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117274258A (en) * 2023-11-21 2023-12-22 深圳市研盛芯控电子技术有限公司 Method, system, equipment and storage medium for detecting defects of main board image
CN117438056A (en) * 2023-12-20 2024-01-23 达州市中心医院(达州市人民医院) Editing, screening and storage control method and system for digestive endoscopy image data
CN117438056B (en) * 2023-12-20 2024-03-12 达州市中心医院(达州市人民医院) Editing, screening and storage control method and system for digestive endoscopy image data


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination