CN112614144A - Image segmentation method, device, equipment and storage medium - Google Patents


Info

Publication number
CN112614144A
Authority
CN
China
Prior art keywords
image
segmentation
standard
neural network
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011613841.1A
Other languages
Chinese (zh)
Inventor
林逢雨 (Lin Fengyu)
Current Assignee
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Original Assignee
Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority date
Filing date
Publication date
Application filed by Shenzhen United Imaging Research Institute of Innovative Medical Equipment
Priority: CN202011613841.1A
Publication: CN112614144A
Related US application: US 17/559,473 (published as US20220207742A1)
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods

Abstract

Embodiments of the present invention disclose an image segmentation method, device, equipment, and storage medium. The method comprises the following steps: acquiring a target image to be segmented; and inputting the target image into a pre-trained target neural network model to obtain an output segmentation result corresponding to the target image. The target neural network model is trained based on a standard segmentation image and standard delineation information, where the standard delineation information represents the positioning information of the image segmentation units in the standard segmentation image in at least one dimension direction. By training the neural network model on both the standard segmentation image and the standard delineation information, the embodiments of the present invention address the poor segmentation performance of existing neural network models: when segmenting an image, the model attends not only to the visual feature information in the image but also to the anisotropy information in the image, thereby improving segmentation accuracy.

Description

Image segmentation method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to an image segmentation method, an image segmentation device, image segmentation equipment and a storage medium.
Background
Image segmentation at the present stage can be divided into manual segmentation and automatic segmentation. Although manual segmentation is highly accurate, its accuracy depends heavily on the prior knowledge of the operator, and the process is time-consuming and labor-intensive. Realizing automatic image segmentation is therefore important and urgent.
Deep learning models have flourished in the image field since 2012, and the accuracy of automatic segmentation algorithms based on deep learning has improved year by year. However, conventional deep learning models are trained to learn only the visual features of the target object in an image, whereas in practical applications the information contained in an image is not limited to visual feature information. As a result, image segmentation with a conventional deep learning model is not highly accurate.
Disclosure of Invention
The embodiment of the invention provides an image segmentation method, an image segmentation device, image segmentation equipment and a storage medium, which are used for improving the accuracy of image segmentation of a neural network model.
In a first aspect, an embodiment of the present invention provides an image segmentation method, where the method includes:
acquiring a target image to be segmented;
inputting the target image into a pre-trained target neural network model to obtain an output segmentation result corresponding to the target image;
the target neural network model is obtained by training based on a standard segmentation image and standard delineation information, wherein the standard delineation information is used for representing positioning information of an image segmentation unit in the standard segmentation image in at least one dimension direction.
In a second aspect, an embodiment of the present invention further provides an image segmentation apparatus, including:
the target image acquisition module is used for acquiring a target image to be segmented;
the segmentation result output module is used for inputting the target image into a pre-trained target neural network model to obtain an output segmentation result corresponding to the target image;
the target neural network model is obtained by training based on a standard segmentation image and standard delineation information, wherein the standard delineation information is used for representing positioning information of an image segmentation unit in the standard segmentation image in at least one dimension direction.
In a third aspect, an embodiment of the present invention further provides an electronic device, where the electronic device includes:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the image segmentation methods referred to above.
In a fourth aspect, embodiments of the present invention also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform any of the image segmentation methods referred to above.
According to the embodiments of the present invention, the neural network model is trained based on the standard segmentation image and the standard delineation information, which solves the poor segmentation performance of existing neural network models: when segmenting an image, the model attends not only to the visual feature information in the image but also to the anisotropy information in the image, thereby improving segmentation accuracy.
Drawings
Fig. 1 is a flowchart of an image segmentation method according to an embodiment of the present invention;
fig. 2A is a schematic diagram of standard delineation information according to an embodiment of the present invention;
FIG. 2B is a diagram illustrating a target neural network model according to an embodiment of the present invention;
FIG. 2C is a diagram of another target neural network model according to an embodiment of the present invention;
FIG. 2D is a schematic diagram of a specific example of a target neural network model according to an embodiment of the present invention;
FIG. 3 is a flowchart of an image segmentation method according to a second embodiment of the present invention;
fig. 4 is a schematic diagram of an image segmentation apparatus according to a third embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an image segmentation method according to an embodiment of the present invention. The embodiment is applicable to the case of segmenting an image, and the method may be executed by an image segmentation apparatus, which may be implemented in software and/or hardware and configured in a terminal device. The terminal device may be an intelligent terminal such as a mobile terminal, a desktop computer, a notebook computer, a tablet computer, or a server. The method specifically comprises the following steps:
and S110, acquiring a target image to be segmented.
In one embodiment, the target image is optionally a two-dimensional image or a three-dimensional image. The target image may be a medical image such as a CT (Computed Tomography) image, an MRI (Magnetic Resonance Imaging) image, or a PET (Positron Emission Tomography) image, for example. Of course, the target image may also be a landscape image or a portrait image; the type of the target image is not limited here.
For example, in the medical field, according to the DICOM (Digital Imaging and Communications in Medicine) standard definition, the X-axis direction corresponds to the left-right direction of the human body, the Y-axis direction to the front (chest) and back directions, and the Z-axis direction to the upper (head) and lower (foot) directions. To increase the imaging rate, the image units in a medical image may differ in size across the three dimension directions. For example, the image unit size may be small in the X-axis and Y-axis directions but large in the Z-axis direction, so that the imaged medical image contains not only visual feature information but also anisotropy information of the image units.
And S120, inputting the target image into a pre-trained target neural network model to obtain an output segmentation result corresponding to the target image.
In this embodiment, the target neural network model is trained based on a standard segmentation image and standard delineation information, where the standard delineation information is used to represent positioning information of an image segmentation unit in the standard segmentation image in at least one dimension direction.
Illustratively, the standard segmented image is an image including an image segmentation unit, wherein the image segmentation unit is used for describing image units belonging to the segmented parts in the standard segmented image. The standard delineation information can be used for describing anisotropic information corresponding to a segmentation part in a standard segmentation image. In one embodiment, optionally, the standard segmented image includes a training standard segmented image and a testing standard segmented image, and accordingly, the standard delineation information includes training standard delineation information and testing standard delineation information.
Specifically, the image unit is an image pixel when the standard segmented image is a two-dimensional image, and the image unit is an image voxel when the standard segmented image is a three-dimensional image. In one embodiment, when the target image is a two-dimensional image, the target delineation information output by the target neural network model includes positioning information of image segmentation pixels in the target segmentation image in the X-axis direction and/or the Y-axis direction. In another embodiment, when the target image is a three-dimensional image, the target delineation information output by the target neural network model includes positioning information of image segmentation voxels in the target segmentation image in at least one dimension of an X-axis direction, a Y-axis direction, and a Z-axis direction.
Fig. 2A is a schematic diagram of standard delineation information according to an embodiment of the present invention. Fig. 2A shows a standard segmented image of 8 × 4 image pixels; the hatched squares indicate image segmentation pixels in the standard segmented image, and the blank squares indicate non-segmentation pixels. If each pixel layer containing an image segmentation pixel is marked 1 and each layer containing none is marked 0, the standard delineation information of the image segmentation pixels is [0,0,1,1,1,0,0,0] in the X-axis direction and [0,1,1,0] in the Y-axis direction.
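The delineation vectors in Fig. 2A can be reproduced by projecting a binary segmentation mask along each axis; a minimal numpy sketch (the function name is illustrative, not from the patent):

```python
import numpy as np

def delineation_vector(mask: np.ndarray, axis: int) -> list:
    """Mark each layer along `axis` as 1 if it contains any
    segmentation pixel, else 0 (the patent's 0/1 layer labels)."""
    # Collapse every axis except the chosen one with a logical OR.
    other_axes = tuple(i for i in range(mask.ndim) if i != axis)
    return mask.any(axis=other_axes).astype(int).tolist()

# 8 x 4 example from Fig. 2A: columns index the X axis, rows the Y axis.
mask = np.zeros((4, 8), dtype=bool)   # (Y, X) layout
mask[1:3, 2:5] = True                 # shaded segmentation pixels

print(delineation_vector(mask, axis=1))  # X direction: [0, 0, 1, 1, 1, 0, 0, 0]
print(delineation_vector(mask, axis=0))  # Y direction: [0, 1, 1, 0]
```

The same projection extends unchanged to three-dimensional masks, since only the non-target axes are collapsed.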
In one embodiment, optionally, the target neural network model includes a feature extraction module, an image segmentation module, an intermediate deep supervision module, and a terminal deep supervision module. The feature extraction module outputs a feature vector based on the input target image; the intermediate deep supervision module outputs intermediate delineation information based on the input feature vector; the image segmentation module outputs a target segmentation image corresponding to the target image based on the input feature vector and the intermediate delineation information; and the terminal deep supervision module outputs target delineation information based on the target segmentation image.
Specifically, the intermediate deep supervision module introduces the learned intermediate delineation information into the image segmentation module as attention, giving the target neural network model the ability to learn and identify anisotropy information in the image.
Fig. 2B is a schematic diagram of a target neural network model according to an embodiment of the present invention. Specifically, the dashed box represents a target neural network model, and the segmentation result corresponding to the target image includes a target segmentation image and target delineation information. In one embodiment, optionally, the feature extraction module is an encoder module and the image segmentation module is a decoder module. For example, the feature vector may be a feature map corresponding to the target image output by the encoder module.
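One way to read "delineation information introduced as attention" (an assumed mechanism, since the patent does not spell out the operation) is to broadcast the per-layer delineation scores over the feature volume before decoding:

```python
import numpy as np

def apply_delineation_attention(features: np.ndarray,
                                z_scores: np.ndarray) -> np.ndarray:
    """Weight a (Z, H, W) feature volume by per-Z-layer delineation
    scores in [0, 1]; a hypothetical reading of how the intermediate
    deep supervision output could gate the segmentation branch."""
    return features * z_scores[:, None, None]

feats = np.ones((4, 2, 2))               # toy (Z, H, W) feature volume
z_att = np.array([0.0, 1.0, 1.0, 0.0])   # predicted Z-axis delineation
gated = apply_delineation_attention(feats, z_att)
print(gated[:, 0, 0])                    # layers outside the target are suppressed
```

In a trained model the scores would come from the intermediate deep supervision module rather than being hand-set as here.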
In one embodiment, optionally, the image segmentation module includes a first image segmentation module and at least one second image segmentation module, and correspondingly the intermediate deep supervision module includes a first intermediate deep supervision module and at least one second intermediate deep supervision module. The first intermediate deep supervision module outputs first intermediate delineation information based on the input feature vector; the first image segmentation module outputs a first segmentation image based on the input feature vector and the first intermediate delineation information; the second intermediate deep supervision module outputs second delineation information based on the first segmentation image; and the second image segmentation module outputs a second segmentation image based on the first segmentation image and the second delineation information, where the second segmentation image includes the target segmentation image.
Fig. 2C is a schematic diagram of another target neural network model according to an embodiment of the present invention. In particular, fig. 2C shows that the target neural network model includes two second image segmentation modules and two second intermediate deep supervision modules. The number of the second image segmentation module and the second intermediate deep supervision module in the target neural network model is not limited herein.
Fig. 2D is a schematic diagram of a specific example of a target neural network model according to an embodiment of the present invention. Specifically, the feature extraction module in the target neural network model shown in fig. 2D includes 4 encoding modules, the image segmentation module is specifically a decoder module, and the decoder module includes a first decoding module and two second decoding modules.
In the technical solution of this embodiment, the neural network model is trained based on the standard segmentation image and the standard delineation information, which solves the poor segmentation performance of existing neural network models: when segmenting an image, the model attends not only to the visual feature information in the image but also to the anisotropy information in the image, thereby improving segmentation accuracy.
Example two
Fig. 3 is a flowchart of an image segmentation method according to a second embodiment of the present invention; this embodiment further refines the technical solution of the embodiment above. Optionally, the standard segmented image includes a training standard segmented image, and the standard delineation information includes training standard delineation information. Correspondingly, the training method of the target neural network model includes: acquiring images to be trained in a training set and images to be tested in a testing set; inputting the images to be trained into an initial neural network model, and iteratively training the initial neural network model based on the training standard segmented image, the training standard delineation information, and the training prediction segmented image and training prediction delineation information output by the initial neural network model, to obtain at least one intermediate neural network model; inputting the images to be tested into each intermediate neural network model, and determining the evaluation result corresponding to each intermediate neural network model based on the output test prediction segmented images; and taking an intermediate neural network model whose evaluation result meets a preset evaluation criterion as the target neural network model.
The specific implementation steps of this embodiment include:
s210, images to be trained in the training set and images to be tested in the testing set are obtained.
Specifically, the images to be trained in the training set are images for performing iterative training on the initial neural network model, and the images to be tested in the testing set are images for testing the trained intermediate neural network model.
In one embodiment, optionally, the method further comprises: acquiring an original image set, and respectively preprocessing the original images in the original image set to obtain preprocessed original images; dividing the preprocessed original images into images to be trained in a training set and images to be tested in a testing set based on a preset proportion; and performing data enhancement processing on the image to be trained in the training set, and adding the image to be trained with data enhancement into the training set.
Here, the original image set includes at least two original images, which may be CT images, MRI images, or PET images.
In one embodiment, the preprocessing optionally includes at least one of format conversion, truncation, and normalization. For example, when the original images are two-dimensional images, format conversion may stack a plurality of two-dimensional slices into a three-dimensional image. The truncation processing clips the original image based on a preset window level value and a preset window width value.
Taking a CT image as an example: a CT device can distinguish density differences across roughly 2000 gray levels, whereas the human eye can distinguish only about 16 gray levels, so only CT-value differences exceeding about 125 Hu in a CT image can be recognized by the human eye. The preset window level value describes the central CT value of the truncated image, i.e. the mean of the CT values within the window width range; the window level should equal or approximate the CT value of the tissue to be segmented. The preset window width value describes the range of CT values retained after truncation, and the width of the window affects the sharpness and contrast of the truncated image. Illustratively, the preset window level value is 50 Hu and the preset window width value is 400 Hu.
In an embodiment, optionally, format conversion, truncation processing, and normalization processing are sequentially performed on the original images in the original image set to obtain the preprocessed original images.
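The truncation and normalization steps can be sketched as follows; the window defaults follow the 50 Hu / 400 Hu example above, and the function name is illustrative:

```python
import numpy as np

def preprocess(image: np.ndarray, level: float = 50.0,
               width: float = 400.0) -> np.ndarray:
    """Truncate CT values to [level - width/2, level + width/2] (Hu),
    then min-max normalize the clipped range to [0, 1]."""
    lo, hi = level - width / 2, level + width / 2
    clipped = np.clip(image, lo, hi)
    return (clipped - lo) / (hi - lo)

ct = np.array([-1000.0, -150.0, 50.0, 250.0, 1000.0])  # toy Hu values
print(preprocess(ct))  # everything lands in [0, 1]; the window level 50 Hu maps to 0.5
```

With level 50 and width 400 the retained range is [-150, 250] Hu; values outside it are saturated rather than discarded.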
Wherein, for example, the preset ratio may be 7:3.
Illustratively, the data enhancement processing includes at least one of flipping, translating, and rotating. The advantage of this arrangement is that the sample size of the images to be trained in the training set can be increased, thereby improving the generalization capability of the target neural network model.
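The flip/translate/rotate enhancement can be sketched as follows; applying the same random transform to the image and its standard segmentation keeps the labels aligned (an illustrative sketch, not the patent's code):

```python
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray,
            rng: np.random.Generator):
    """Randomly flip, shift, and rotate an image/mask pair with the
    same parameters, so the standard segmentation stays aligned."""
    if rng.random() < 0.5:                       # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    shift = int(rng.integers(-2, 3))             # small translation
    image = np.roll(image, shift, axis=1)
    mask = np.roll(mask, shift, axis=1)
    k = int(rng.integers(0, 4))                  # rotate by k * 90 degrees
    return np.rot90(image, k), np.rot90(mask, k)

rng = np.random.default_rng(0)
img = np.arange(16.0).reshape(4, 4)
msk = (img > 7).astype(int)                      # toy standard segmentation
aug_img, aug_msk = augment(img, msk, rng)
print(aug_img.shape, aug_msk.shape)
```

Whatever transforms are drawn, the number of segmentation pixels is preserved, which is one easy sanity check on an augmentation pipeline.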
S220, inputting the image to be trained into the initial neural network model, and iteratively training the initial neural network model based on the training standard segmented image, the training standard delineation information, and the training prediction segmented image and training prediction delineation information output by the initial neural network model, to obtain at least one intermediate neural network model.
In this embodiment, the standard segmented image includes a training standard segmented image, and the standard delineation information includes training standard delineation information. And the training standard segmentation image is used for training the initial neural network model.
In an embodiment, optionally, iteratively training the initial neural network model based on the training standard segmented image, the training standard delineation information, and the training prediction segmented image and training prediction delineation information output by the initial neural network model to obtain at least one intermediate neural network model includes: determining a first loss function based on the training prediction segmented image and the training standard segmented image, and determining a second loss function based on the training prediction delineation information and the training standard delineation information; and iteratively training the initial neural network model based on the first loss function, the second loss function, and a preset optimizer to obtain at least one intermediate neural network model.
Exemplary methods of calculating the loss function include, but are not limited to, the 0-1 loss function, absolute loss function, logarithmic loss function, square loss function, exponential loss function, Hinge loss function, cross-entropy loss function, Dice loss function, focal loss function, region-based loss functions, and boundary-based loss functions. In one embodiment, optionally, the first loss function is calculated based on a cross-entropy loss function or a Dice loss function, and the second loss function is calculated based on a cross-entropy loss function.
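As an illustration of the two loss choices named above (a soft Dice loss for the segmentation branch, binary cross-entropy for the delineation branch; the exact formulations are an assumption, not quoted from the patent):

```python
import numpy as np

def dice_loss(pred: np.ndarray, target: np.ndarray,
              eps: float = 1e-6) -> float:
    """1 - Dice coefficient for soft predictions in [0, 1]."""
    inter = (pred * target).sum()
    return float(1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps))

def bce_loss(pred: np.ndarray, target: np.ndarray,
             eps: float = 1e-7) -> float:
    """Mean binary cross-entropy, e.g. for the 0/1 delineation vectors."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(p) + (1 - target) * np.log(1 - p)).mean())

pred = np.array([0.9, 0.8, 0.1, 0.2])   # toy soft predictions
gt   = np.array([1.0, 1.0, 0.0, 0.0])   # toy standard labels
print(dice_loss(pred, gt), bce_loss(pred, gt))
```

The two scalars would then be combined (e.g. summed) before being handed to the preset optimizer.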
Exemplary preset optimizers include, but are not limited to, Adam optimizer, SGD optimizer, RMSprop optimizer, and the like.
Specifically, the first loss function and the second loss function are output to a preset optimizer, and the preset optimizer outputs the variation of the model parameters based on a gradient descent algorithm, so that iterative training is performed on the model parameters of the initial neural network model, and the loss function is minimized.
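The optimizer's update can be illustrated with a plain gradient-descent step (the actual preset optimizer, e.g. Adam or RMSprop, adds momentum and adaptive scaling on top of this idea):

```python
import numpy as np

def sgd_step(params, grads, lr=0.01):
    """One plain gradient-descent update: move each parameter
    against its gradient to reduce the combined loss."""
    return [p - lr * g for p, g in zip(params, grads)]

w = [np.array([1.0, 2.0])]     # toy model parameters
g = [np.array([0.5, -0.5])]    # gradient of the combined loss
print(sgd_step(w, g, lr=0.1))  # parameters nudged toward lower loss
```

Iterating this update over the training set is what "iterative training ... so that the loss function is minimized" amounts to.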
On the basis of the foregoing embodiment, optionally, the method includes: acquiring a training standard segmentation image, and determining a target dimension direction with the highest anisotropy degree based on the image unit sizes of image units in the training standard segmentation image in at least two dimension directions; and acquiring positioning information of the image segmentation unit in the training standard segmentation image in the target dimension direction, and generating training standard delineation information based on the positioning information.
For example, assume the training standard segmented image is a three-dimensional CT segmented image, where the image unit size is the image voxel size, i.e. the voxel sizes in the X-axis, Y-axis, and Z-axis directions. Comparing the voxel sizes across the dimension directions, the direction with the largest voxel size is taken as the target dimension direction with the highest degree of anisotropy; for example, the Z-axis direction. The positioning information of the image segmentation voxels in the Z-axis direction is then acquired: each voxel layer along the Z-axis that contains image segmentation voxels is marked 1 and each layer that contains none is marked 0, yielding the training standard delineation information.
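The per-axis comparison and 0/1 layer labeling described above can be sketched as follows (numpy-based; the function name and toy spacing values are illustrative):

```python
import numpy as np

def z_delineation_from_spacing(mask: np.ndarray, spacing: tuple) -> list:
    """Pick the dimension with the largest voxel size (the highest
    anisotropy) and return the 0/1 layer labels along it."""
    target_axis = int(np.argmax(spacing))
    other = tuple(i for i in range(mask.ndim) if i != target_axis)
    return mask.any(axis=other).astype(int).tolist()

# Toy 3-D standard segmentation: (Z, Y, X) order with voxel spacing
# (5.0, 1.0, 1.0) mm, so the Z axis is the most anisotropic direction.
mask = np.zeros((4, 3, 3), dtype=bool)
mask[1:3] = True                       # segmentation voxels in layers 1 and 2
print(z_delineation_from_spacing(mask, (5.0, 1.0, 1.0)))  # [0, 1, 1, 0]
```

In practice the spacing tuple would be read from the image header (e.g. DICOM pixel spacing and slice thickness) rather than hard-coded.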
The advantage of this arrangement is that the training standard delineation information contains delineation information in only one dimension direction, so the trained target neural network model segments target images whose anisotropy information lies in that dimension direction particularly well, further improving segmentation accuracy for target images with anisotropy in a single direction.
And S230, inputting the image to be tested into each intermediate neural network model, and determining the evaluation result corresponding to each intermediate neural network model based on the output test prediction segmentation image.
In this embodiment, the standard segmented image further includes a test standard segmented image, which is used to evaluate the intermediate neural network models. Specifically, the evaluation result corresponding to each intermediate neural network model is determined based on a preset evaluation algorithm, the test prediction segmented image, and the test standard segmented image. Exemplary preset evaluation algorithms include, but are not limited to, the Dice coefficient, IoU (Intersection over Union), and the Hausdorff_95 coefficient. The preset evaluation algorithm is not limited here.
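For reference, the two simplest of the listed evaluation metrics can be sketched as follows (a hedged illustration on binary masks, not the patent's implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between binary masks (1.0 = perfect agreement)."""
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum()))

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union between binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union)

pred = np.array([[1, 1, 0], [0, 1, 0]])  # test prediction mask
gt   = np.array([[1, 1, 0], [0, 0, 1]])  # test standard mask
print(dice_coefficient(pred, gt), iou(pred, gt))
```

A score threshold on such a metric is one natural form of the "preset evaluation criterion" used in S240 below.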
And S240, taking the intermediate neural network model with the evaluation result meeting the preset evaluation standard as a target neural network model.
For example, when the evaluation result is an evaluation score, the preset evaluation criterion may be that the evaluation score is greater than a preset score threshold.
And S250, acquiring a target image to be segmented.
And S260, inputting the target image into a pre-trained target neural network model to obtain an output segmentation result corresponding to the target image.
In the technical solution of this embodiment, the initial neural network model is iteratively trained on the images to be trained in the training set, the resulting intermediate neural network models are evaluated on the images to be tested in the testing set, and an intermediate neural network model whose evaluation result meets the preset evaluation criterion is taken as the target neural network model. Compared with existing training methods that select the model based on the training set alone, the target neural network model selected by this technical solution segments unseen images better, further ensuring the accuracy of image segmentation.
EXAMPLE III
Fig. 4 is a schematic diagram of an image segmentation apparatus according to a third embodiment of the present invention. The embodiment is applicable to the case of segmenting an image, the apparatus can be implemented in a software and/or hardware manner, and the apparatus can be configured in a terminal device. The image segmentation apparatus includes: a target image acquisition module 310 and a segmentation result output module 320.
The target image obtaining module 310 is configured to obtain a target image to be segmented;
a segmentation result output module 320, configured to input the target image into a pre-trained target neural network model to obtain an output segmentation result corresponding to the target image;
the target neural network model is obtained by training based on a standard segmentation image and standard delineation information, wherein the standard delineation information is used for representing positioning information of an image segmentation unit in the standard segmentation image in at least one dimension direction.
In the technical solution of this embodiment, the neural network model is trained based on the standard segmentation image and the standard delineation information, which solves the poor segmentation performance of existing neural network models: when segmenting an image, the model attends not only to the visual feature information in the image but also to the anisotropy information in the image, thereby improving segmentation accuracy.
On the basis of the technical scheme, optionally, the target neural network model comprises a feature extraction module, an image segmentation module, an intermediate deep supervision module and a terminal deep supervision module, wherein the feature extraction module is used for outputting feature vectors based on an input target image, the intermediate deep supervision module is used for outputting intermediate delineation information based on the input feature vectors, the image segmentation module is used for outputting a target segmentation image corresponding to the target image based on the input feature vectors and the intermediate delineation information, and the terminal deep supervision module is used for outputting target delineation information based on the target segmentation image.
On the basis of the foregoing technical solution, optionally, the image segmentation module includes a first image segmentation module and at least one second image segmentation module, and correspondingly the intermediate deep supervision module includes a first intermediate deep supervision module and at least one second intermediate deep supervision module. The first intermediate deep supervision module outputs first intermediate delineation information based on the input feature vector; the first image segmentation module outputs a first segmented image based on the input feature vector and the first intermediate delineation information; the second intermediate deep supervision module outputs second delineation information based on the first segmented image; and the second image segmentation module outputs a second segmented image based on the first segmented image and the second delineation information, where the second segmented image includes the target segmented image.
On the basis of the above technical solution, optionally, the standard segmentation image includes a training standard segmentation image, and the standard delineation information includes training standard delineation information; correspondingly, the apparatus further includes:
the target neural network model training module, configured to: acquire images to be trained in a training set and images to be tested in a testing set; input the images to be trained into an initial neural network model, and iteratively train the initial neural network model based on the training standard segmentation image, the training standard delineation information, and the training prediction segmentation image and training prediction delineation information output by the initial neural network model, to obtain at least one intermediate neural network model; input the images to be tested into each intermediate neural network model, and determine an evaluation result corresponding to each intermediate neural network model based on the output test prediction segmentation image; and take an intermediate neural network model whose evaluation result meets a preset evaluation standard as the target neural network model.
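The selection step above — evaluating every intermediate model on the test set and keeping one that meets a preset evaluation standard — can be sketched as follows. This is a hedged illustration: the Dice coefficient and the 0.8 threshold are assumptions of this sketch, since the embodiment leaves the evaluation standard open.

```python
# Illustrative model selection; each "model" is any callable mapping an image
# to a set of predicted foreground pixel indices.
def dice(pred, truth):
    """Dice coefficient between two sets of foreground pixel indices."""
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

def select_target_model(intermediate_models, test_images, test_truths, threshold=0.8):
    """Return the intermediate model with the best mean test Dice above the threshold."""
    best_model, best_score = None, -1.0
    for model in intermediate_models:
        scores = [dice(model(img), t) for img, t in zip(test_images, test_truths)]
        mean = sum(scores) / len(scores)
        if mean >= threshold and mean > best_score:
            best_model, best_score = model, mean
    return best_model, best_score
```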
On the basis of the above technical solution, optionally, the target neural network model training module is specifically configured to:
determining a first loss function based on the training prediction segmentation image and the training standard segmentation image, and determining a second loss function based on the training prediction delineation information and the training standard delineation information;
and performing iterative training on the initial neural network model based on the first loss function, the second loss function and a preset optimizer to obtain at least one intermediate neural network model.
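A toy illustration of iterating on the combined objective may help here. The squared-error losses, the weighting factor `lam`, and plain gradient descent standing in for the "preset optimizer" are all assumptions of this sketch; it only shows how the first (segmentation) loss and second (delineation) loss jointly drive one set of parameters, with intermediate models saved along the way.

```python
# Toy combined-loss iteration: one scalar parameter w stands in for the model.
def train_step(w, x, seg_std, delin_std, lr=0.1, lam=1.0):
    seg_pred = w * x       # stand-in "training prediction segmentation image"
    delin_pred = w * x     # stand-in "training prediction delineation information"
    # gradient of: (seg_pred - seg_std)^2 + lam * (delin_pred - delin_std)^2
    grad = 2 * (seg_pred - seg_std) * x + lam * 2 * (delin_pred - delin_std) * x
    return w - lr * grad   # "preset optimizer" = plain gradient descent here

def train(w, steps=100, **kw):
    snapshots = []         # intermediate models saved during iterative training
    for i in range(steps):
        w = train_step(w, **kw)
        if (i + 1) % 25 == 0:
            snapshots.append(w)
    return snapshots
```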
On the basis of the above technical solution, optionally, the apparatus further includes:
the training standard delineation information generation module, configured to: acquire the training standard segmentation image, and determine a target dimension direction with the highest degree of anisotropy based on the image unit sizes, in at least two dimension directions, of the image units in the training standard segmentation image; and acquire positioning information of the image segmentation unit in the training standard segmentation image in the target dimension direction, and generate the training standard delineation information based on the positioning information.
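As a concrete reading of this module (an assumption of this sketch: "image unit size" is interpreted as voxel spacing, so for a CT volume with 0.7 mm in-plane spacing and 5 mm slice thickness the target dimension is the slice axis), the two steps can be sketched as:

```python
# Step 1: the most anisotropic dimension is the one with the largest spacing.
def target_dimension(spacings):
    """Return the axis index with the highest anisotropy (largest image unit size)."""
    return max(range(len(spacings)), key=lambda i: spacings[i])

# Step 2: delineation info = positioning of the segmentation unit along that axis,
# here one occupancy flag per position (e.g. per slice).
def delineation_info(mask_voxels, axis, size):
    """mask_voxels: iterable of (i, j, k) coordinates labelled as the segmentation unit."""
    occupied = {v[axis] for v in mask_voxels}
    return [1 if s in occupied else 0 for s in range(size)]
```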
On the basis of the above technical solution, optionally, the apparatus further includes:
the training set determining module, configured to: acquire an original image set and preprocess each original image in the original image set to obtain preprocessed original images; divide the preprocessed original images into images to be trained in a training set and images to be tested in a testing set based on a preset ratio; and perform data enhancement processing on the images to be trained in the training set, and add the data-enhanced images to the training set.
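The three steps of this module can be sketched as follows; the min–max normalization used as preprocessing, the 80/20 ratio, and horizontal flipping as the data-enhancement operation are illustrative assumptions, since the embodiment does not fix them.

```python
import random

def preprocess(image):
    """Scale pixel values into [0, 1] (an assumed preprocessing step)."""
    lo, hi = min(image), max(image)
    return [(p - lo) / (hi - lo or 1) for p in image]

def split(images, train_ratio=0.8, seed=0):
    """Divide images into training and testing sets by a preset ratio."""
    imgs = images[:]
    random.Random(seed).shuffle(imgs)
    cut = int(len(imgs) * train_ratio)
    return imgs[:cut], imgs[cut:]

def augment(train_set):
    """Data enhancement: flipped copies join the training set alongside the originals."""
    flipped = [img[::-1] for img in train_set]
    return train_set + flipped
```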
The image segmentation apparatus provided by the embodiment of the present invention can execute the image segmentation method provided by any embodiment of the present invention, and has the corresponding functional modules and beneficial effects for executing that method.
It should be noted that, in the embodiment of the image segmentation apparatus, the included units and modules are only divided according to functional logic, but are not limited to the above division as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
Example four
FIG. 5 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention. This embodiment provides a service for implementing the image segmentation method of the foregoing embodiments of the present invention, and the image segmentation apparatus of the foregoing embodiments may be configured on it. FIG. 5 illustrates a block diagram of an exemplary electronic device 12 suitable for implementing embodiments of the present invention. The electronic device 12 shown in FIG. 5 is only an example and imposes no limitation on the functions or scope of use of the embodiments of the present invention.
As shown in FIG. 5, electronic device 12 is embodied in the form of a general purpose computing device. The components of electronic device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 that couples various system components including the system memory 28 and the processing unit 16.
Bus 18 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus.
Electronic device 12 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by electronic device 12 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 28 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 30 and/or cache memory 32. The electronic device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 34 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, and commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 18 by one or more data media interfaces. Memory 28 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in memory 28, such program modules 42 including, but not limited to, an operating system, one or more application programs, other program modules, and program data, each of which examples or some combination thereof may comprise an implementation of a network environment. Program modules 42 generally carry out the functions and/or methodologies of the described embodiments of the invention.
Electronic device 12 may also communicate with one or more external devices 14 (e.g., keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with electronic device 12, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 12 to communicate with one or more other computing devices. Such communication may be through an input/output (I/O) interface 22. Also, the electronic device 12 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet) via the network adapter 20. As shown in FIG. 5, the network adapter 20 communicates with the other modules of the electronic device 12 via the bus 18. It should be understood that although not shown in the figures, other hardware and/or software modules may be used in conjunction with electronic device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processing unit 16 executes various functional applications and data processing, such as implementing the image segmentation method provided by the embodiments of the present invention, by running a program stored in the system memory 28.
With this electronic device, the problem that existing neural network models segment images poorly is solved: when segmenting an image, the neural network model attends not only to the visual feature information in the image but also to the anisotropic information in the image, thereby improving image segmentation accuracy.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, are configured to perform a method for image segmentation, the method including:
acquiring a target image to be segmented;
inputting the target image into a target neural network model which is trained in advance to obtain an output segmentation result corresponding to the target image;
the target neural network model is obtained by training based on a standard segmentation image and standard delineation information, wherein the standard delineation information is used for representing positioning information of an image segmentation unit in the standard segmentation image in at least one dimension direction.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the above method operations, and may also perform related operations in the image segmentation method provided by any embodiment of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments illustrated herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

1. An image segmentation method, comprising:
acquiring a target image to be segmented;
inputting the target image into a pre-trained target neural network model to obtain an output segmentation result corresponding to the target image;
the target neural network model is obtained by training based on a standard segmentation image and standard delineation information, wherein the standard delineation information is used for representing positioning information of an image segmentation unit in the standard segmentation image in at least one dimension direction.
2. The method according to claim 1, wherein the target neural network model comprises a feature extraction module, an image segmentation module, an intermediate deep supervision module and a terminal deep supervision module, the feature extraction module is configured to output feature vectors based on the input target image, the intermediate deep supervision module is configured to output intermediate delineation information based on the input feature vectors, the image segmentation module is configured to output a target segmentation image corresponding to the target image based on the input feature vectors and the intermediate delineation information, and the terminal deep supervision module is configured to output target delineation information based on the target segmentation image.
3. The method according to claim 2, wherein the image segmentation module comprises a first image segmentation module and at least one second image segmentation module, and correspondingly, the intermediate deep supervision module comprises a first intermediate deep supervision module and at least one second intermediate deep supervision module; the first intermediate deep supervision module is configured to output first intermediate delineation information based on the input feature vector, the first image segmentation module is configured to output a first segmented image based on the input feature vector and the first intermediate delineation information, the second intermediate deep supervision module is configured to output second delineation information based on the first segmented image, and the second image segmentation module is configured to output a second segmented image based on the first segmented image and the second delineation information; wherein the second segmented image comprises the target segmented image.
4. The method of claim 1, wherein the standard segmented image comprises a training standard segmented image, the standard delineation information comprises training standard delineation information, and accordingly, the training method of the target neural network model comprises:
acquiring images to be trained in a training set and images to be tested in a testing set;
inputting the image to be trained into an initial neural network model, and performing iterative training on the initial neural network model based on a training standard segmentation image, training standard delineation information, a training prediction segmentation image and training prediction delineation information output by the initial neural network model to obtain at least one intermediate neural network model;
inputting the image to be tested into each intermediate neural network model, and determining the evaluation result corresponding to each intermediate neural network model based on the output test prediction segmentation image;
and taking the intermediate neural network model with the evaluation result meeting the preset evaluation standard as a target neural network model.
5. The method of claim 4, wherein the iteratively training the initial neural network model based on the training standard segmented image, the training standard delineation information, the training prediction segmented image output by the initial neural network model, and the training prediction delineation information to obtain at least one intermediate neural network model comprises:
determining a first loss function based on the training prediction segmentation image and the training standard segmentation image, and determining a second loss function based on the training prediction delineation information and the training standard delineation information;
and performing iterative training on the initial neural network model based on the first loss function, the second loss function and a preset optimizer to obtain at least one intermediate neural network model.
6. The method of claim 4, further comprising:
acquiring a training standard segmentation image, and determining a target dimension direction with the highest anisotropy degree based on the image unit sizes of image units in the training standard segmentation image in at least two dimension directions;
and acquiring positioning information of the image segmentation unit in the training standard segmentation image in the target dimension direction, and generating training standard delineation information based on the positioning information.
7. The method of claim 4, further comprising:
acquiring an original image set, and respectively preprocessing original images in the original image set to obtain preprocessed original images;
dividing the preprocessed original image into an image to be trained in a training set and an image to be tested in a testing set based on a preset proportion;
and performing data enhancement processing on the images to be trained in the training set, and adding the images to be trained with data enhancement into the training set.
8. An image segmentation apparatus, comprising:
the target image acquisition module is used for acquiring a target image to be segmented;
the segmentation result output module is used for inputting the target image into a pre-trained target neural network model to obtain an output segmentation result corresponding to the target image;
the target neural network model is obtained by training based on a standard segmentation image and standard delineation information, wherein the standard delineation information is used for representing positioning information of an image segmentation unit in the standard segmentation image in at least one dimension direction.
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement the image segmentation method as claimed in any one of claims 1-7.
10. A storage medium containing computer-executable instructions for performing the image segmentation method as claimed in any one of claims 1 to 7 when executed by a computer processor.
CN202011613841.1A 2020-12-30 2020-12-30 Image segmentation method, device, equipment and storage medium Pending CN112614144A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011613841.1A CN112614144A (en) 2020-12-30 2020-12-30 Image segmentation method, device, equipment and storage medium
US17/559,473 US20220207742A1 (en) 2020-12-30 2021-12-22 Image segmentation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011613841.1A CN112614144A (en) 2020-12-30 2020-12-30 Image segmentation method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112614144A true CN112614144A (en) 2021-04-06

Family

ID=75249664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011613841.1A Pending CN112614144A (en) 2020-12-30 2020-12-30 Image segmentation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112614144A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177957A (en) * 2021-05-24 2021-07-27 西交利物浦大学 Cell image segmentation method and device, electronic equipment and storage medium
CN113177957B (en) * 2021-05-24 2024-03-08 西交利物浦大学 Cell image segmentation method and device, electronic equipment and storage medium
CN113409273A (en) * 2021-06-21 2021-09-17 上海联影医疗科技股份有限公司 Image analysis method, device, equipment and medium
CN113409273B (en) * 2021-06-21 2023-04-07 上海联影医疗科技股份有限公司 Image analysis method, device, equipment and medium
CN113724203A (en) * 2021-08-03 2021-11-30 唯智医疗科技(佛山)有限公司 Segmentation method and device for target features in OCT (optical coherence tomography) image
CN113724203B (en) * 2021-08-03 2024-04-23 唯智医疗科技(佛山)有限公司 Model training method and device applied to target feature segmentation in OCT image
CN114494496A (en) * 2022-01-27 2022-05-13 深圳市铱硙医疗科技有限公司 Automatic intracranial hemorrhage delineation method and device based on head CT flat scanning image
CN114494496B (en) * 2022-01-27 2022-09-20 深圳市铱硙医疗科技有限公司 Automatic intracranial hemorrhage delineation method and device based on head CT flat scanning image
CN116013475A (en) * 2023-03-24 2023-04-25 福建自贸试验区厦门片区Manteia数据科技有限公司 Method and device for sketching multi-mode medical image, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN112614144A (en) Image segmentation method, device, equipment and storage medium
CN111524106B (en) Skull fracture detection and model training method, device, equipment and storage medium
US10636141B2 (en) Adversarial and dual inverse deep learning networks for medical image analysis
CN111754596B (en) Editing model generation method, device, equipment and medium for editing face image
US10853409B2 (en) Systems and methods for image search
CN111932529B (en) Image classification and segmentation method, device and system
CN112085714B (en) Pulmonary nodule detection method, model training method, device, equipment and medium
CN111028246A (en) Medical image segmentation method and device, storage medium and electronic equipment
CN113256592B (en) Training method, system and device of image feature extraction model
CN113326851B (en) Image feature extraction method and device, electronic equipment and storage medium
US20200143934A1 (en) Systems and methods for semi-automatic tumor segmentation
CN115100185A (en) Image processing method, image processing device, computer equipment and storage medium
CN116129141A (en) Medical data processing method, apparatus, device, medium and computer program product
CN111192320A (en) Position information determining method, device, equipment and storage medium
EP3843038B1 (en) Image processing method and system
CN114066905A (en) Medical image segmentation method, system and device based on deep learning
CN113850796A (en) Lung disease identification method and device based on CT data, medium and electronic equipment
CN112530554B (en) Scanning positioning method and device, storage medium and electronic equipment
CN115239655A (en) Thyroid ultrasonic image tumor segmentation and classification method and device
CN114049674A (en) Three-dimensional face reconstruction method, device and storage medium
CN113222989A (en) Image grading method and device, storage medium and electronic equipment
CN113408596B (en) Pathological image processing method and device, electronic equipment and readable storage medium
CN116013475B (en) Method and device for sketching multi-mode medical image, storage medium and electronic equipment
CN113450351B (en) Segmentation model training method, image segmentation method, device, equipment and medium
CN117556077B (en) Training method of text image model, related method and related product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination