WO2022242131A1 - Image segmentation method, apparatus, device, and storage medium - Google Patents

Image segmentation method, apparatus, device, and storage medium Download PDF

Info

Publication number
WO2022242131A1
WO2022242131A1 (PCT/CN2021/138027; CN2021138027W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
segmented
sample
target
self
Prior art date
Application number
PCT/CN2021/138027
Other languages
English (en)
French (fr)
Inventor
李阳
吴剑煌
Original Assignee
中国科学院深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中国科学院深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Publication of WO2022242131A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Definitions

  • Embodiments of the present invention relate to the technical field of image processing, and in particular to an image segmentation method, apparatus, device, and storage medium.
  • Image processing technology, as an effective means of obtaining useful information from images, is widely used in a variety of application scenarios. In many scenarios, an image needs to be segmented to capture the information of interest from the rich image content. With the rapid development of artificial intelligence technology, various neural networks have been applied to image segmentation in order to improve image processing efficiency.
  • Embodiments of the present invention provide an image segmentation method, apparatus, device, and storage medium, so as to improve the ability to capture long-distance features and increase image segmentation accuracy.
  • An embodiment of the present invention provides an image segmentation method, including: acquiring at least one image to be segmented; and inputting the image to be segmented into a pre-trained image segmentation model to obtain a target segmented image corresponding to the image to be segmented;
  • wherein the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
  • An embodiment of the present invention also provides an image segmentation apparatus, including:
  • An image acquisition module configured to acquire at least one image to be segmented
  • An image segmentation module configured to input the image to be segmented into a pre-trained image segmentation model to obtain a target segmented image corresponding to the image to be segmented;
  • wherein the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
  • an embodiment of the present invention also provides an image segmentation device, which includes:
  • one or more processors; and a storage device configured to store one or more programs,
  • where, when the one or more programs are executed by the one or more processors, the one or more processors implement the image segmentation method provided in any embodiment of the present invention.
  • an embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, an image segmentation method provided in any embodiment of the present invention is implemented.
  • The image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
  • The features of the image to be segmented are initially abstracted and compressed by the encoder, and high-dimensional data is mapped into low-dimensional data to reduce the amount of data; the features of the image to be segmented are reproduced by the decoder; and the self-attention model can effectively learn the dependency between each pixel in the image to be segmented and all pixels in the image, thereby capturing long-distance dependencies in the image to be segmented and obtaining richer global context features, so that segmentation accuracy is higher.
  • FIG. 1 is a schematic flow chart of an image segmentation method provided by Embodiment 1 of the present invention.
  • FIG. 2 is a structural diagram of an image segmentation model provided by Embodiment 1 of the present invention.
  • FIG. 3 is a schematic flowchart of an image segmentation method provided by Embodiment 2 of the present invention.
  • FIG. 4 is a structural diagram of an initial network model provided by Embodiment 2 of the present invention.
  • FIG. 5 is a structural diagram of a self-attention model provided by Embodiment 2 of the present invention.
  • FIG. 6 is a schematic structural diagram of an image segmentation device provided by Embodiment 3 of the present invention.
  • FIG. 7 is a schematic structural diagram of an electronic device provided by Embodiment 4 of the present invention.
  • FIG. 1 is a schematic flow chart of an image segmentation method provided by Embodiment 1 of the present invention.
  • This embodiment is applicable to the situation where an image is automatically segmented by an image segmentation model.
  • This method can be implemented by the image segmentation device provided by the embodiment of the present invention.
  • the device can be implemented by software and/or hardware, and can be configured in a terminal and/or server to implement the image segmentation method in the embodiment of the present invention.
  • the image segmentation method of this embodiment may specifically include:
  • the image to be segmented may be an image including a target segmentation object.
  • the type and content of the image to be segmented are not specifically limited here.
  • the images to be segmented include medical images and the like.
  • The medical image may specifically be a clinical medical image such as a computed tomography (CT) image, a magnetic resonance (MR) image, or a positron emission tomography (PET) image.
  • the image to be segmented may be a multi-dimensional intracranial blood vessel image or a pulmonary bronchus image or the like.
  • the image to be segmented includes target segmentation objects and non-target segmentation objects.
  • the target segmentation object may be an object of interest to the user such as a blood vessel or a bone.
  • The image to be segmented may be a planar image. The planar image may be an originally acquired planar image; it is also considered that the acquired original image to be segmented may be a three-dimensional or higher-dimensional volumetric image.
  • When the original image to be segmented is a multi-dimensional image, it may be preprocessed to obtain a planar image, for example a planar image obtained by slicing a three-dimensional image.
  • Optionally, the image to be segmented may be a grayscale image.
  • one, two or more than two images to be segmented are acquired.
  • Acquiring the image to be segmented includes: acquiring, in real time and based on an image acquisition device, an image to be segmented containing the target segmentation object; acquiring an image to be segmented containing the target segmentation object from a preset storage location; or receiving, from a target device, an image to be segmented containing the target segmentation object.
  • the storage location of the image to be segmented is not limited, can be set according to actual needs, and can be obtained directly from the corresponding storage location when necessary.
  • The image to be segmented is input as input data into the pre-trained image segmentation model; the image segmentation model performs image segmentation on the image to be segmented through an encoder, a decoder, and at least one self-attention model, obtains the target segmented image corresponding to the image to be segmented, and outputs it from the model as output data, thereby achieving efficient and accurate automatic segmentation of the image.
  • The encoder can initially abstract and compress the features of the input image to be segmented, so as to perform a preliminary cleaning and screening of those features; while retaining the important features, it reduces the feature dimension and the amount of data and improves segmentation efficiency.
  • The decoder can reproduce the features of the image to be segmented.
  • The self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image, thereby capturing long-distance dependencies in the image to be segmented and obtaining richer global context features, so that the image features of the image to be segmented can be segmented more accurately.
  • the image segmentation model may include an encoder, at least one self-attention model connected to the encoder, and a decoder connected to the last-level self-attention model.
  • the image to be segmented is used as the input of the encoder
  • the output of the encoder is used as the input of the self-attention model connected to the encoder
  • the output of the last-level self-attention model is used as the input of the decoder, and the decoder outputs the target segmented image corresponding to the image to be segmented.
  • the number of self-attention models is not limited, and can be set according to actual needs. Exemplarily, there can be one, two or more than two self-attention models.
  • Optionally, the self-attention models are connected in series.
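  • As an illustration of the wiring just described, a minimal sketch follows. The patent names no framework, so PyTorch, the class name, and the constructor arguments are assumptions; the encoder, self-attention, and decoder modules are placeholders for the concrete layers.

```python
# Minimal sketch of the wiring: encoder -> serially connected self-attention
# models -> decoder. Encoder, self-attention and decoder modules are
# placeholders for the concrete layers (see Table 1 in the description).
import torch.nn as nn

class ImageSegmentationModel(nn.Module):
    def __init__(self, encoder: nn.Module, attention_models: list,
                 decoder: nn.Module):
        super().__init__()
        self.encoder = encoder
        self.attention = nn.Sequential(*attention_models)  # serial connection
        self.decoder = decoder

    def forward(self, x):          # x: image to be segmented
        e = self.encoder(x)        # target coded image
        s = self.attention(e)      # self-attention segmented features
        return self.decoder(s)     # target segmented image
```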
  • the image segmentation model may include: an encoder, at least one self-attention model, and a decoder.
  • The encoder can map the high-dimensional image to be segmented, through encoding, to a new encoding space that can contain the pixel information of the image to be segmented;
  • the decoder can map the encoding space, through decoding, to the target segmented image corresponding to the image to be segmented.
  • Specifically, the image to be segmented is mapped by the encoder and input into the self-attention model, which determines the dependency between each pixel in the image to be segmented and all pixels in the image; the decoder then maps the result, through decoding, to the target segmented image corresponding to the image to be segmented.
  • Inputting the image to be segmented into the pre-trained image segmentation model to obtain the target segmented image corresponding to the image to be segmented includes: inputting the image to be segmented into the pre-trained encoder to obtain a target coded image corresponding to the image to be segmented; inputting the target coded image into the pre-trained at least one self-attention model to obtain a self-attention segmented image corresponding to the target coded image; and inputting the self-attention segmented image into the pre-trained decoder to obtain the target segmented image corresponding to the image to be segmented.
  • The image to be segmented is input as input data into the pre-trained encoder, and the encoder obtains the target coded image corresponding to the image to be segmented through encoding mapping; the target coded image is input as input data into the pre-trained at least one self-attention model, which obtains the self-attention segmented image corresponding to the target coded image by determining the dependency between each pixel in the target coded image and all pixels in the image; the self-attention segmented image is input as input data into the pre-trained decoder, and the decoder obtains the target segmented image corresponding to the image to be segmented through decoding mapping.
  • If the target coded image is a planar image, the image segmentation model includes a first conversion layer and a second conversion layer. After the target coded image corresponding to the image to be segmented is obtained and before the target coded image is input into the pre-trained at least one self-attention model, the method further includes: inputting the target coded image into the first conversion layer to convert the target coded image from two-dimensional image features into one-dimensional image features. Before the self-attention segmented image is input into the pre-trained decoder, the method further includes: inputting the self-attention segmented image into the second conversion layer to convert the self-attention segmented image from one-dimensional image features into two-dimensional image features.
  • Because the image segmentation model of the present application converts the dimensionality of the image features through the first conversion layer and the second conversion layer, the model can extract feature information from the image to be segmented more fully, and the data transfer dimensions between the encoder, the decoder, and the at least one self-attention model are guaranteed to match.
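  • A minimal sketch of the two conversion layers, assuming NumPy arrays and the (25, 25, 256) / (25*25, 256) shapes quoted later in this document; the function names are illustrative.

```python
# First conversion layer: flatten the 2-D coded image (H, W, C) into 1-D
# features (H*W, C) for the self-attention models; second conversion layer:
# restore the 2-D layout for the decoder.
import numpy as np

def first_conversion(e: np.ndarray) -> np.ndarray:
    h, w, c = e.shape                 # e.g. tensor E of shape (25, 25, 256)
    return e.reshape(h * w, c)        # tensor R of shape (625, 256)

def second_conversion(s: np.ndarray, h: int, w: int) -> np.ndarray:
    return s.reshape(h, w, -1)        # tensor R' of shape (25, 25, 256)
```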
  • the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
  • During image processing, the image segmentation model performs a preliminary abstraction and compression of the features of the image to be segmented through the encoder, mapping high-dimensional data into low-dimensional data to reduce the amount of data; the decoder reproduces the features of the image to be segmented; and the self-attention model can effectively capture long-distance dependencies in the image, achieving efficient and accurate segmentation.
  • Fig. 3 is a flow chart of an image segmentation method provided by Embodiment 2 of the present invention.
  • On the basis of any optional technical solution in the embodiments of the present invention, this embodiment optionally further includes: training a pre-established initial network model based on multiple sets of training sample data to generate an image segmentation model, where the training sample data includes sample image data and a sample target segmented image corresponding to the sample image to be segmented.
  • the method of the embodiment of the present invention specifically includes:
  • the image segmentation model can be obtained by training the initial network model in advance through a large number of sample images to be segmented and sample target segmented images corresponding to the sample images to be segmented.
  • In the trained image segmentation model, the sample image to be segmented is encoded and decoded, the model parameters of the image segmentation model are trained based on the self-attention model, and the parameters are continuously adjusted so that the deviation between the model output and the target segmented image corresponding to the sample image to be segmented gradually decreases and stabilizes, thereby generating the image segmentation model.
  • model parameters of the initial network model may adopt a random initialization principle, or may adopt a fixed value initialization principle based on experience, which is not specifically limited in this embodiment.
  • Training the pre-established initial network model based on multiple sets of training sample data may include: inputting the sample image data into a pre-established encoder to obtain a sample coded image corresponding to the image to be segmented; inputting the sample coded image into pre-established at least one self-attention model to obtain a sample self-attention image corresponding to the target coded image; and inputting the sample self-attention image into a pre-established decoder to obtain a target segmented image corresponding to the image to be segmented.
  • The sample image data consists of multiple groups of samples of images to be segmented; the specific design of the encoder and decoder can be as shown in Table 1.
  • For example, all convolution layers use 3x3 convolution kernels, and the max-pooling layers use 2x downsampling.
  • The first conversion layer converts the tensor E of shape (25, 25, 256) into the tensor R of shape (25*25, 256), and the second conversion layer converts the tensor S' of shape (25*25, 256) into the tensor R' of shape (25, 25, 256).
  • The encoder encodes the high-dimensional sample image data into low-dimensional latent variables through a series of convolution layers and pooling layers.
  • The convolution layers obtain local image features, and the pooling layers downsample the image.
  • Adding pooling layers to the encoder speeds up computation and prevents overfitting.
  • The decoder upsamples and concatenates the low-dimensional latent variables and then applies convolutions, thereby refining the geometry of the target segmented image and compensating for the loss of detail caused by the pooling layers in the encoder shrinking the sample coded image.
  • Inputting the sample coded image into the pre-established at least one self-attention model to obtain a sample self-attention image corresponding to the target coded image may include: inputting the sample coded image into the pre-established self-attention model; performing a linear transformation based on the sample coded image to obtain a first parameter matrix to be adjusted, a second parameter matrix to be adjusted, and a third parameter matrix to be adjusted of the self-attention model; determining a similarity matrix corresponding to the sample coded image based on the first and second parameter matrices to be adjusted; weighting the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image; and determining the sample self-attention image corresponding to the target coded image based on at least two weighted feature images and the sample coded image.
  • the first parameter matrix to be adjusted can be represented by q
  • the second parameter matrix to be adjusted can be represented by k
  • the third parameter matrix to be adjusted can be represented by v.
  • The linear transformation applies a straight-line equation to the data of the sample coded image to obtain the first, second, and third parameter matrices to be adjusted of the self-attention model.
  • the purpose is to make the sample coded image highlight the region of interest to facilitate subsequent processing.
  • a similarity matrix is obtained by calculating the first parameter matrix to be adjusted and the second parameter matrix to be adjusted of the sample coded image, wherein the similarity matrix is a matrix of the relationship between each position in the sample coded image and other positions.
  • the third parameter matrix to be adjusted weights the similarity matrix, specifically, the third parameter matrix to be adjusted is used as a weight matrix multiplied by the similarity matrix to obtain a weighted feature image.
  • Performing a linear transformation based on the sample coded image to obtain the first, second, and third parameter matrices to be adjusted of the self-attention model may include: q = W_q R, k = W_k R, v = W_v R, where:
  • R represents the sample coded image
  • q represents the first parameter matrix to be adjusted
  • k represents the second parameter matrix to be adjusted
  • v represents the third parameter matrix to be adjusted
  • W_q represents a randomly initialized matrix corresponding to the first parameter matrix to be adjusted;
  • W_k represents a randomly initialized matrix corresponding to the second parameter matrix to be adjusted;
  • W_v represents a randomly initialized matrix corresponding to the third parameter matrix to be adjusted.
  • The self-attention model randomly initializes the parameter matrices to be adjusted, which can improve the computation speed of the self-attention model and help it converge to the global optimum as far as possible.
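  • A hedged sketch of these projections in NumPy; the flattened shape (H*W, c) and the 0.02 initialization scale are assumptions, and R @ W is the row-major equivalent of the W·R notation above.

```python
# q, k, v are linear transformations of the flattened sample coded image R
# with randomly initialized weight matrices W_q, W_k, W_v.
import numpy as np

rng = np.random.default_rng(0)
hw, c = 25 * 25, 256                       # positions x channels (illustrative)
R = rng.standard_normal((hw, c))           # flattened sample coded image

W_q = 0.02 * rng.standard_normal((c, c))   # random initialization
W_k = 0.02 * rng.standard_normal((c, c))
W_v = 0.02 * rng.standard_normal((c, c))

q, k, v = R @ W_q, R @ W_k, R @ W_v        # each of shape (H*W, c)
```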
  • Determining the similarity matrix corresponding to the sample coded image based on the first and second parameter matrices to be adjusted may include: determining every pixel in the sample coded image, one by one, as a target pixel; for each target pixel, computing, based on the first and second parameter matrices to be adjusted, the pixel similarity between the target pixel and all pixels in the sample coded image; and constructing the similarity matrix corresponding to the sample coded image based on the position of each target pixel in the sample coded image and the individual pixel similarities.
  • The pixel information can include the position of each pixel in the sample coded image and the individual pixel similarities; the similarity matrix corresponding to the sample coded image is constructed so that the dependency between each pixel position in the image and all other pixel positions is learned and the global context information of the sample coded image is obtained.
  • Computing the pixel similarity between the target pixel and all pixels in the sample coded image based on the first and second parameter matrices to be adjusted can be achieved by the following formula: Ω_(i,j) = (Σ_n q_(i,n) · t_(n,j)) / √(d/c), where:
  • (i, j) represents the position in row i, column j of the sample coded image;
  • Ω_(i,j) represents the similarity at the position in row i, column j of the similarity matrix;
  • q represents the first parameter matrix to be adjusted;
  • k represents the second parameter matrix to be adjusted;
  • q_(i,n) represents the element in row i, column n of the first parameter matrix q;
  • t_(n,j) represents the element in row n, column j of the matrix t;
  • the matrix t is the transpose of the second parameter matrix k;
  • d represents the dimension of the second parameter matrix k;
  • c represents the number of channels of the input image.
  • The scaling operation changes the spatial positions of the pixels of the sample coded image in the new image, giving the pixel similarity computation a stable gradient.
  • By computing the pixel similarities of the sample coded image, the dependency between the current pixel and the other pixels of the current image can be obtained, which improves the ability to capture long-distance dependencies in the image.
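  • A short sketch of the similarity computation under the formula above; the √(d/c) scaling follows the symbol definitions, and the function name is illustrative.

```python
# Omega[i, j] = sum_n q[i, n] * t[n, j] / sqrt(d / c), with t = k transposed.
import numpy as np

def similarity_matrix(q: np.ndarray, k: np.ndarray, c: int) -> np.ndarray:
    d = k.shape[1]                    # dimension of the second parameter matrix k
    t = k.T                           # transpose of k
    return (q @ t) / np.sqrt(d / c)   # (H*W, H*W) similarity matrix
```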
  • Weighting the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image may include: normalizing the similarity matrix;
  • and weighting the normalized similarity matrix based on the third parameter matrix to be adjusted to obtain the weighted feature image.
  • Weighting the normalized similarity matrix based on the third parameter matrix to be adjusted is specifically implemented with the following formula: A(q,k,v)_(i,j) = Σ_{n=1..H_0·W_0} Ω′_(i,n) · v_(n,j), where:
  • A(q,k,v)_(i,j) represents the weighted feature value in row i, column j of the weighted feature image A obtained from the matrices q, k, and v, and v represents the third parameter matrix to be adjusted;
  • H_0 represents the target output length of the sample feature map;
  • W_0 represents the target output width of the sample feature map;
  • Ω′ represents the normalized similarity matrix;
  • Ω′_(i,n) represents the element in row i, column n of the normalized similarity matrix Ω′;
  • v_(n,j) represents the element in row n, column j of the third parameter matrix v.
  • The embodiment of the present invention normalizes the similarity matrix, weights the normalized similarity matrix with the third parameter matrix to be adjusted, and computes the weighted feature value of the current pixel, thereby improving the reliability of the features extracted from the sample coded image and obtaining a more effective weighted feature image.
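  • A sketch of the normalization and weighting steps; the patent does not fix the normalization, so a row-wise softmax is assumed here.

```python
# Normalize the similarity matrix row-wise, then weight it with v:
# A[i, j] = sum_n Omega'[i, n] * v[n, j].
import numpy as np

def weighted_feature_image(omega: np.ndarray, v: np.ndarray) -> np.ndarray:
    omega = omega - omega.max(axis=1, keepdims=True)   # numerical stability
    exp = np.exp(omega)
    omega_norm = exp / exp.sum(axis=1, keepdims=True)  # Omega'
    return omega_norm @ v                              # weighted feature image A
```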
  • Determining the sample self-attention image corresponding to the target coded image based on at least two weighted feature images and the sample coded image may include: fusing at least two weighted feature images to obtain a fused feature image; adjusting the feature dimension of the fused feature image to a target feature dimension and adding the adjusted fused feature image to the sample coded image to obtain a target dimension image; inputting the target dimension image into at least one fully connected layer of the self-attention model to obtain an output dimension image; and adjusting the output dimension image to the feature dimension of the fused feature image to obtain the sample self-attention image corresponding to the target coded image.
  • the target feature dimension can be understood as the number of channels of the target feature, for example, one channel is one-dimensional, two channels are two-dimensional, and n channels are n-dimensional.
  • The fused feature image A' is obtained by fusing the weighted feature images along the channel dimension:
  • A' = A_1 + A_2 + … + A_n
  • where n is the number of channels of the weighted feature images.
  • The self-attention model includes two fully connected layers, and the output dimension image can be: S = conv(dense(dense(C')) + C'), where C' = C + R is the target dimension image obtained by adding the dimension-adjusted fused feature image C to the sample coded image R.
  • S represents the output dimension image;
  • dense represents a fully connected layer;
  • the activation function of the fully connected layers is the rectified linear unit (ReLU);
  • conv represents a convolution layer used to unify the feature dimension.
  • The self-attention model includes two fully connected layers; each neuron in a fully connected layer is fully connected to all neurons in the previous layer, and the fully connected layers can integrate the class-discriminative local information in the convolution layers.
  • To improve performance, the activation function of each neuron in the fully connected layers generally adopts the rectified linear unit.
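  • A sketch of the tail of the self-attention model under the equations above (A' = A_1 + … + A_n, C' = C + R, S = conv(dense(dense(C')) + C')); the dense and conv layers are reduced to matrix products, the dimension-adjustment step is omitted, and all weight shapes are assumptions.

```python
# Fuse the weighted feature images, add the coded image, apply two fully
# connected (ReLU) layers with a residual, then project to unify dimensions.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def self_attention_tail(A_list, R, W1, W2, W_proj):
    C = np.sum(A_list, axis=0)              # A' = A_1 + A_2 + ... + A_n
    C_prime = C + R                         # target dimension image C' = C + R
    hidden = relu(relu(C_prime @ W1) @ W2)  # dense(dense(C')) with ReLU
    return (hidden + C_prime) @ W_proj      # conv(...) as a linear projection
```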
  • the technical solution of the present invention also expands the sample image data.
  • the original sample image data may be preprocessed to obtain new sample image data.
  • the preprocessing includes but not limited to methods such as slicing, cropping, windowing or mosaic slice replacement.
  • In an optional implementation, the method further includes: cropping the acquired original sample image data into at least two image slices and stitching at least two of the image slices together to obtain new sample image data.
  • The mosaic slice replacement method crops the original sample image data and its labels into at least two image slices of different sizes and then randomly stitches these slices back to the size of the original sample image data to obtain new sample image data; the target pixels of the new sample image data are distributed more richly and more uniformly over the whole picture, which speeds up model convergence, increases the number of training samples, and enhances the robustness of the network.
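  • A hedged sketch of mosaic slice replacement; an equal-size 2x2 grid is used for simplicity, whereas the method described above allows slices of different sizes.

```python
# Cut an image and its label into slices, then re-stitch the slices in a
# random order to form a new training sample of the original size.
import numpy as np

def mosaic_slice_replace(image: np.ndarray, label: np.ndarray,
                         rng: np.random.Generator) -> tuple:
    h, w = image.shape[:2]
    hs, ws = h // 2, w // 2
    boxes = [(0, 0), (0, ws), (hs, 0), (hs, ws)]      # 2x2 grid of slices
    order = rng.permutation(len(boxes))
    new_img, new_lab = image.copy(), label.copy()
    for (dy, dx), src in zip(boxes, order):
        sy, sx = boxes[src]
        new_img[dy:dy + hs, dx:dx + ws] = image[sy:sy + hs, sx:sx + ws]
        new_lab[dy:dy + hs, dx:dx + ws] = label[sy:sy + hs, sx:sx + ws]
    return new_img, new_lab
```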
  • the method further includes: performing multi-dimensional reconstruction on the target segmented image to obtain a multi-dimensional reconstructed image.
  • the multi-dimensional reconstruction method may include but not limited to a ray casting algorithm, a texture mapping algorithm, or a slice-level reconstruction method and the like.
  • An image segmentation model is generated by training a pre-established initial network model based on multiple sets of training sample data, where the training sample data includes sample image data and a sample target segmented image corresponding to the sample image to be segmented; at least one image to be segmented is acquired; the image to be segmented is input into the pre-trained image segmentation model to obtain the target segmented image corresponding to the image to be segmented; the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
  • With the encoder, decoder, and self-attention model, the image segmentation model can effectively capture long-distance dependencies in the image during image processing, thereby achieving efficient and accurate segmentation.
  • FIG. 6 is a schematic structural diagram of an image segmentation device provided in Embodiment 3 of the present invention.
  • The image segmentation apparatus provided in this embodiment can be implemented in software and/or hardware and can be configured in a terminal and/or a server to implement the image segmentation method in this embodiment of the present invention.
  • the device may specifically include: an image acquisition module 310 and an image segmentation module 320 .
  • The image acquisition module 310 is used to acquire at least one image to be segmented; the image segmentation module 320 is used to input the image to be segmented into the pre-trained image segmentation model to obtain the target segmented image corresponding to the image to be segmented;
  • wherein the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
  • An embodiment of the present invention provides an image segmentation apparatus that acquires at least one image to be segmented and inputs the image to be segmented into a pre-trained image segmentation model to obtain a target segmented image corresponding to the image to be segmented;
  • wherein the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
  • During image processing, the image segmentation model performs a preliminary abstraction and compression of the features of the image to be segmented through the encoder, mapping high-dimensional data into low-dimensional data to reduce the amount of data; the decoder reproduces the features of the image to be segmented; and the self-attention model can effectively capture long-distance dependencies in the image, achieving efficient and accurate segmentation.
  • the image segmentation module 320 may include:
  • An image encoding unit configured to input the image to be segmented into a pre-trained encoder to obtain a target encoded image corresponding to the image to be segmented;
  • a self-attention segmentation unit configured to input the target coded image into at least one pre-trained self-attention model to obtain a self-attention segmentation image corresponding to the target coded image;
  • An image decoding unit configured to input the self-attention segmented image into a pre-trained decoder to obtain a target segmented image corresponding to the image to be segmented.
  • If the target coded image is a planar image, the image segmentation model includes a first conversion layer and a second conversion layer;
  • the image segmentation module 320 can also be used for:
  • inputting the target coded image into the first conversion layer to convert the target coded image from two-dimensional image features into one-dimensional image features; and, before the self-attention segmented image is input into the pre-trained decoder:
  • inputting the self-attention segmented image into the second conversion layer to convert the self-attention segmented image from one-dimensional image features into two-dimensional image features.
  • The image segmentation apparatus may further include an image segmentation model training module, which is used to train a pre-established initial network model based on multiple sets of training sample data to generate an image segmentation model, where the training sample data includes sample image data and a sample target segmented image corresponding to the sample image to be segmented.
  • the image segmentation model training module may include:
  • a sample encoding unit configured to input the sample image data into a pre-established encoder to obtain a sample coded image corresponding to the image to be segmented;
  • a sample self-attention image generation unit configured to input the sample coded image into at least one pre-established self-attention model to obtain a sample self-attention image corresponding to the target coded image;
  • a sample decoding unit configured to input the sample self-attention image into a pre-established decoder to obtain a target segmented image corresponding to the image to be segmented.
  • the sample self-attention image generating unit may include:
  • An image input subunit configured to input the sample encoded image into a pre-established self-attention model
  • a linear transformation subunit configured to perform linear transformation based on the sample coded image to obtain the first parameter matrix to be adjusted, the second parameter matrix to be adjusted, and the third parameter matrix to be adjusted of the self-attention model;
  • a similarity matrix determining subunit configured to determine a similarity matrix corresponding to the sample coded image based on the first parameter matrix to be adjusted and the second parameter matrix to be adjusted;
  • a matrix weighting subunit configured to weight the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image
  • the image determination subunit is used to determine the sample self-attention image corresponding to the target coded image based on at least two weighted feature images and the sample coded image.
  • The similarity matrix determination subunit can be used to: determine every pixel in the sample coded image, one by one, as a target pixel; for each target pixel, compute, based on the first and second parameter matrices to be adjusted, the pixel similarity between the target pixel and all pixels in the sample coded image;
  • and construct the similarity matrix corresponding to the sample coded image based on the position of each target pixel in the sample coded image and the individual pixel similarities.
  • The similarity matrix determination subunit can specifically be used to compute: Ω_(i,j) = (Σ_n q_(i,n) · t_(n,j)) / √(d/c), where:
  • (i, j) represents the position in row i, column j of the sample coded image;
  • Ω_(i,j) represents the similarity at the position in row i, column j of the similarity matrix;
  • q represents the first parameter matrix to be adjusted;
  • k represents the second parameter matrix to be adjusted;
  • q_(i,n) represents the element in row i, column n of the first parameter matrix q;
  • t_(n,j) represents the element in row n, column j of the matrix t;
  • the matrix t is the transpose of the second parameter matrix k;
  • d represents the dimension of the second parameter matrix k;
  • c represents the number of channels of the input image.
  • The matrix weighting subunit can specifically be used to: normalize the similarity matrix;
  • and weight the normalized similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image, specifically implemented with the following formula: A(q,k,v)_(i,j) = Σ_{n=1..H_0·W_0} Ω′_(i,n) · v_(n,j), where:
  • A(q,k,v)_(i,j) represents the weighted feature value in row i, column j of the weighted feature image A obtained from the matrices q, k, and v, and v represents the third parameter matrix to be adjusted;
  • H_0 represents the target output length of the sample feature map;
  • W_0 represents the target output width of the sample feature map;
  • Ω′ represents the normalized similarity matrix;
  • Ω′_(i,n) represents the element in row i, column n of the normalized similarity matrix Ω′;
  • v_(n,j) represents the element in row n, column j of the third parameter matrix v.
  • The image determination subunit is specifically used to: fuse at least two weighted feature images to obtain a fused feature image; adjust the feature dimension of the fused feature image to the target feature dimension and add the adjusted fused feature image to the sample coded image to obtain a target dimension image; input the target dimension image into at least one fully connected layer of the self-attention model to obtain an output dimension image; and adjust the output dimension image to the feature dimension of the fused feature image to obtain the sample self-attention image corresponding to the target coded image.
  • The image segmentation model training module can also be used to crop the acquired original sample image data into at least two image slices and stitch at least two of the image slices together to obtain new sample image data.
  • the above image segmentation device can execute the image segmentation method provided by any embodiment of the present invention, and has corresponding functional modules and beneficial effects for executing the image segmentation method.
  • FIG. 7 is a schematic structural diagram of an image segmentation device provided by Embodiment 6 of the present invention.
  • Fig. 7 shows a block diagram of an exemplary image segmentation device 12 suitable for implementing embodiments of the present invention.
  • the image segmentation device 12 shown in FIG. 7 is only an example, and should not limit the functions and scope of use of this embodiment of the present invention.
  • the image segmentation device 12 takes the form of a general-purpose computing device.
  • Components of the image segmentation device 12 may include, but are not limited to: one or more processors or processing units 16, a system memory 28, and a bus 18 connecting various system components (including the system memory 28 and the processing unit 16).
  • Bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus structures.
  • Such bus structures include, by way of example but not limitation, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
  • Image segmentation device 12 typically includes a variety of computer system readable media. These media can be any available media that can be accessed by the image segmentation device 12, including volatile and non-volatile media, removable and non-removable media.
  • System memory 28 may include computer system readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32 .
  • the image segmentation device 12 may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • storage system 34 may be used to read and write to non-removable, non-volatile magnetic media (not shown in Figure 7, commonly referred to as a "hard drive”).
  • A disk drive for reading and writing to removable non-volatile disks (e.g., "floppy disks") and an optical disc drive for reading and writing to removable non-volatile optical discs (e.g., CD-ROM, DVD-ROM, or other optical media) may also be provided; in these cases, each drive may be connected to bus 18 via one or more data media interfaces.
  • System memory 28 may include at least one program product having a set (eg, at least one) of program modules configured to perform the functions of various embodiments of the present invention.
  • Program/utility 40, having a set (at least one) of program modules 42, may be stored, for example, in system memory 28; such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data, and each or some combination of these examples may include an implementation of a network environment.
  • Program modules 42 generally perform the functions and/or methodologies of the described embodiments of the invention.
  • The image segmentation device 12 may also communicate with one or more external devices 14 (such as a keyboard, pointing device, display 24, etc.), with one or more devices that enable a user to interact with the image segmentation device 12, and/or with any device (such as a network card, modem, etc.) that enables the image segmentation device 12 to communicate with one or more other computing devices. Such communication may occur through input/output (I/O) interface 22.
  • The image segmentation device 12 can also communicate with one or more networks (such as a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through the network adapter 20. As shown in FIG. 7, the network adapter 20 communicates with the other modules of the image segmentation device 12 through the bus 18.
  • Other hardware and/or software modules may be used in conjunction with the image segmentation device 12, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
  • the processing unit 16 executes various functional applications and data processing by running the programs stored in the system memory 28 , for example, implementing an image segmentation method provided by the embodiment of the present invention.
  • Embodiment 5 of the present invention also provides a storage medium containing computer-executable instructions, the computer-executable instructions are used to perform an image segmentation method when executed by a computer processor, the method comprising:
  • acquiring at least one image to be segmented; inputting the image to be segmented into a pre-trained image segmentation model to obtain a target segmented image corresponding to the image to be segmented; wherein the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
  • the computer storage medium in the embodiments of the present invention may use any combination of one or more computer-readable media.
  • the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
  • A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more leads, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • A computer-readable signal medium may include a data signal carrying computer-readable program code in baseband or propagated as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out the operations of embodiments of the present invention may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Image Analysis (AREA)

Abstract

An image segmentation method, apparatus, device, and storage medium, wherein the method includes: acquiring at least one image to be segmented (S110); and inputting the image to be segmented into a pre-trained image segmentation model to obtain a target segmented image corresponding to the image to be segmented, wherein the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image (S120). When performing image segmentation, the encoder, decoder, and self-attention model make it possible to effectively learn the dependency between each pixel in the image to be segmented and all pixels in the image, thereby capturing long-distance dependencies in the image to be segmented and obtaining richer global context features of the image to be segmented, so that the image segmentation accuracy is higher.

Description

Image segmentation method, apparatus, device, and storage medium
Technical Field
Embodiments of the present invention relate to the technical field of image processing, and in particular to an image segmentation method, apparatus, device, and storage medium.
Background
At present, image processing technology, as an effective means of obtaining useful information from images, is widely used in a variety of application scenarios. In many scenarios, an image needs to be segmented to capture the information of interest from the rich image content. With the rapid development of artificial intelligence technology, various neural networks have been applied to image segmentation in order to improve image processing efficiency. However, in conventional neural-network-based image segmentation methods, the limited receptive field of the convolution kernel means that the model can only learn short-distance dependencies within an image, and its ability to capture long-distance features is poor, which degrades the segmentation result.
Summary of the Invention
Embodiments of the present invention provide an image segmentation method, apparatus, device, and storage medium, so as to improve the ability to capture long-distance features and increase image segmentation accuracy.
In a first aspect, an embodiment of the present invention provides an image segmentation method, including:
acquiring at least one image to be segmented;
inputting the image to be segmented into a pre-trained image segmentation model to obtain a target segmented image corresponding to the image to be segmented;
wherein the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
In a second aspect, an embodiment of the present invention further provides an image segmentation apparatus, including:
an image acquisition module configured to acquire at least one image to be segmented;
an image segmentation module configured to input the image to be segmented into a pre-trained image segmentation model to obtain a target segmented image corresponding to the image to be segmented;
wherein the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
In a third aspect, an embodiment of the present invention further provides an image segmentation device, including:
one or more processors;
a storage device configured to store one or more programs,
where, when the one or more programs are executed by the one or more processors, the one or more processors implement the image segmentation method provided in any embodiment of the present invention.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the image segmentation method provided in any embodiment of the present invention.
According to the technical solution of the embodiments of the present invention, at least one image to be segmented is acquired; the image to be segmented is input into a pre-trained image segmentation model to obtain a target segmented image corresponding to the image to be segmented; the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image. When performing image segmentation, the encoder, decoder, and self-attention model make it possible, during image processing, for the encoder to perform a preliminary abstraction and compression of the features of the image to be segmented, mapping high-dimensional data into low-dimensional data and reducing the amount of data; for the decoder to reproduce the features of the image to be segmented; and for the self-attention model to effectively learn the dependency between each pixel in the image to be segmented and all pixels in the image, thereby capturing long-distance dependencies in the image to be segmented and obtaining richer global context features, so that segmentation accuracy is higher.
Brief Description of the Drawings
To explain the technical solutions of the exemplary embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings introduced here are only those of some of the embodiments of the present invention, not all of them; those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
FIG. 1 is a schematic flowchart of an image segmentation method provided by Embodiment 1 of the present invention;
FIG. 2 is a structural diagram of an image segmentation model provided by Embodiment 1 of the present invention;
FIG. 3 is a schematic flowchart of an image segmentation method provided by Embodiment 2 of the present invention;
FIG. 4 is a structural diagram of an initial network model provided by Embodiment 2 of the present invention;
FIG. 5 is a structural diagram of a self-attention model provided by Embodiment 2 of the present invention;
FIG. 6 is a schematic structural diagram of an image segmentation apparatus provided by Embodiment 3 of the present invention;
FIG. 7 is a schematic structural diagram of an electronic device provided by Embodiment 4 of the present invention.
Detailed Description of the Embodiments
The present invention is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
Before discussing the exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes the operations (or steps) as sequential processing, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations can be rearranged. The processing may be terminated when its operations are completed, but it may also have additional steps not included in the drawing. The processing may correspond to a method, function, procedure, subroutine, subprogram, and so on.
Embodiment 1
FIG. 1 is a schematic flowchart of an image segmentation method provided by Embodiment 1 of the present invention. This embodiment is applicable to the situation where an image is automatically segmented by an image segmentation model. The method can be executed by the image segmentation apparatus provided by the embodiment of the present invention; the apparatus can be implemented in software and/or hardware and can be configured in a terminal and/or a server to implement the image segmentation method in the embodiment of the present invention. As shown in FIG. 1, the image segmentation method of this embodiment may specifically include:
S110. Acquire at least one image to be segmented.
In this embodiment, the image to be segmented may be an image containing a target segmentation object. The type and content of the image to be segmented are not specifically limited here. Optionally, the image to be segmented includes a medical image or the like. Typically, the medical image may be a clinical medical image such as a computed tomography (CT) image, a magnetic resonance (MR) image, or a positron emission tomography (PET) image. For example, the image to be segmented may be a multi-dimensional intracranial blood vessel image, a pulmonary bronchus image, or the like. Specifically, the image to be segmented includes a target segmentation object and non-target segmentation objects, where the target segmentation object may be an object of interest to the user, such as a blood vessel or a bone.
For example, the image to be segmented may be a planar image. The planar image may be an originally acquired planar image; it is also considered that the acquired original image to be segmented may be a three-dimensional or higher-dimensional volumetric image. When the original image to be segmented is a multi-dimensional image, it may be preprocessed to obtain a planar image of the image to be segmented, for example a planar image obtained by slicing a three-dimensional image. Optionally, the image to be segmented may be a grayscale image.
In the embodiment of the present invention, one, two, or more images to be segmented are acquired. Optionally, acquiring the image to be segmented includes: acquiring, in real time and based on an image acquisition device, an image to be segmented containing the target segmentation object; or acquiring, from a preset storage location, an image to be segmented containing the target segmentation object; or receiving, from a target device, an image to be segmented containing the target segmentation object. The storage location of the image to be segmented is not limited; it can be set according to actual needs, and the image can be fetched directly from the corresponding storage location when needed.
S120. Input the image to be segmented into a pre-trained image segmentation model to obtain a target segmented image corresponding to the image to be segmented, where the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image.
In the embodiment of the present invention, the image to be segmented is input as input data into the pre-trained image segmentation model; the image segmentation model performs image segmentation on the image to be segmented through the encoder, the decoder, and the at least one self-attention model, obtains the target segmented image corresponding to the image to be segmented, and outputs it from the model as output data, thereby achieving efficient and accurate automatic segmentation of the image.
The encoder can perform a preliminary abstraction and compression of the features of the input image to be segmented, so as to perform a preliminary cleaning and screening of those features; while retaining the important features, it reduces the feature dimension and the amount of data and improves segmentation efficiency. The decoder can reproduce the features of the image to be segmented. The self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image, thereby capturing long-distance dependencies in the image to be segmented and obtaining richer global context features, so that the image features of the image to be segmented can be segmented more accurately.
Specifically, the image segmentation model may include an encoder, at least one self-attention model connected to the encoder, and a decoder connected to the last-level self-attention model. In other words, the image to be segmented serves as the input of the encoder, the output of the encoder serves as the input of the self-attention model connected to the encoder, the output of the last-level self-attention model serves as the input of the decoder, and the decoder outputs the target segmented image corresponding to the image to be segmented. It should be noted that the embodiment of the present invention does not limit the number of self-attention models, which can be set according to actual needs; for example, there may be one, two, or more self-attention models. Optionally, the self-attention models are connected in series.
For an example, see the structure diagram of the image segmentation model shown in FIG. 2. The image segmentation model may include an encoder, at least one self-attention model, and a decoder. The encoder can map the high-dimensional image to be segmented, through encoding, to a new encoding space that can contain the pixel information of the image to be segmented; the decoder can map the encoding space, through decoding, to the target segmented image corresponding to the image to be segmented. Specifically, the image to be segmented is mapped by the encoder and input into the self-attention model, which determines the dependency between each pixel in the image to be segmented and all pixels in the image; the decoder then maps the result, through decoding, to the target segmented image corresponding to the image to be segmented.
In an optional implementation of the embodiment of the present invention, inputting the image to be segmented into the pre-trained image segmentation model to obtain the target segmented image corresponding to the image to be segmented includes: inputting the image to be segmented into the pre-trained encoder to obtain a target coded image corresponding to the image to be segmented; inputting the target coded image into the pre-trained at least one self-attention model to obtain a self-attention segmented image corresponding to the target coded image; and inputting the self-attention segmented image into the pre-trained decoder to obtain the target segmented image corresponding to the image to be segmented.
Specifically, the image to be segmented is input as input data into the pre-trained encoder, and the encoder obtains the target coded image corresponding to the image to be segmented through encoding mapping; the target coded image is input as input data into the pre-trained at least one self-attention model, and the self-attention model obtains the self-attention segmented image corresponding to the target coded image by determining the dependency between each pixel in the target coded image and all pixels in the image; the self-attention segmented image is input as input data into the pre-trained decoder, and the decoder obtains the target segmented image corresponding to the image to be segmented through decoding mapping.
Optionally, if the target coded image is a planar image, the image segmentation model includes a first conversion layer and a second conversion layer. After the target coded image corresponding to the image to be segmented is obtained and before the target coded image is input into the pre-trained at least one self-attention model, the method further includes: inputting the target coded image into the first conversion layer to convert the target coded image from two-dimensional image features into one-dimensional image features. Before the self-attention segmented image is input into the pre-trained decoder, the method further includes: inputting the self-attention segmented image into the second conversion layer to convert the self-attention segmented image from one-dimensional image features into two-dimensional image features.
It should be noted that, because the image segmentation model of the present application converts the dimensionality of the image features through the first conversion layer and the second conversion layer, the model can extract feature information from the image to be segmented more fully, and the data transfer dimensions between the encoder, the decoder, and the at least one self-attention model are guaranteed to match.
According to the technical solution of this embodiment, at least one image to be segmented is acquired; the image to be segmented is input into a pre-trained image segmentation model to obtain a target segmented image corresponding to the image to be segmented; the image segmentation model is constructed based on an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the image to be segmented and all pixels in the image. With the encoder, decoder, and self-attention model, during image processing the encoder performs a preliminary abstraction and compression of the features of the image to be segmented, mapping high-dimensional data into low-dimensional data and reducing the amount of data; the decoder reproduces the features of the image to be segmented; and the self-attention model can effectively capture long-distance dependencies in the image, achieving efficient and accurate segmentation.
Embodiment 2
FIG. 3 is a flowchart of an image segmentation method provided by Embodiment 2 of the present invention. On the basis of any optional technical solution in the embodiments of the present invention, this embodiment optionally further includes: training a pre-established initial network model based on multiple sets of training sample data to generate an image segmentation model, where the training sample data includes sample image data and a sample target segmented image corresponding to the sample image to be segmented.
As shown in FIG. 3, the method of the embodiment of the present invention specifically includes:
S210. Train a pre-established initial network model based on multiple sets of training sample data to generate an image segmentation model, where the training sample data includes sample image data and a sample target segmented image corresponding to the sample image to be segmented.
In this embodiment, the image segmentation model can be obtained by training the initial network model in advance with a large number of sample images to be segmented and the sample target segmented images corresponding to them. In the image segmentation model being trained, the sample image to be segmented is encoded and decoded, the model parameters of the image segmentation model are trained based on the self-attention model, and the parameters are continuously adjusted so that the deviation between the model output and the target segmented image corresponding to the sample image to be segmented gradually decreases and stabilizes, thereby generating the image segmentation model.
The model parameters of the initial network model may be initialized randomly, or with fixed values chosen from experience; this embodiment does not specifically limit this. Initializing the weights and biases of the model nodes can improve the convergence speed and performance of the model.
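The training loop implied by this paragraph can be sketched as follows. This is a minimal illustration only: the patent fixes neither a framework, a loss function, nor an optimizer, so PyTorch, cross-entropy, and Adam are assumptions.

```python
# Train the initial network model until the deviation between the model
# output and the sample target segmented images decreases and stabilizes.
import torch
import torch.nn as nn

def train_segmentation_model(model, loader, epochs=10, lr=1e-4, device="cpu"):
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()              # assumed segmentation loss
    for _ in range(epochs):
        for sample, target in loader:            # sample image / target mask
            sample, target = sample.to(device), target.to(device)
            optimizer.zero_grad()
            loss = loss_fn(model(sample), target)
            loss.backward()                      # adjust the model parameters
            optimizer.step()
    return model
```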
Optionally, training the pre-established initial network model based on multiple sets of training sample data may include: inputting the sample image data into a pre-established encoder to obtain a sample coded image corresponding to the image to be segmented; inputting the sample coded image into pre-established at least one self-attention model to obtain a sample self-attention image corresponding to the target coded image; and inputting the sample self-attention image into a pre-established decoder to obtain a target segmented image corresponding to the image to be segmented.
Table 1. Encoder and decoder architecture (the table itself is provided as images in the original document: Figure PCTCN2021138027-appb-000001 and Figure PCTCN2021138027-appb-000002).
The sample image data consists of multiple groups of samples of images to be segmented; the specific design of the encoder and decoder can be as shown in Table 1. For example, all convolution layers use 3x3 convolution kernels, and the max-pooling layers use 2x downsampling. As shown in FIG. 4, the first conversion layer converts the tensor E of shape (25, 25, 256) into the tensor R of shape (25*25, 256), and the second conversion layer converts the tensor S' of shape (25*25, 256) into the tensor R' of shape (25, 25, 256). The encoder encodes the high-dimensional sample image data into low-dimensional latent variables through a series of convolution layers and pooling layers; the convolution layers obtain local image features, and the pooling layers downsample the image. Adding pooling layers to the encoder speeds up computation and prevents overfitting. The decoder upsamples and concatenates the low-dimensional latent variables and then applies convolutions, thereby refining the geometry of the target segmented image and compensating for the loss of detail caused by the pooling layers in the encoder shrinking the sample coded image.
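The building blocks implied by Table 1 can be sketched as follows; PyTorch and the channel counts are assumptions, and the skip connection mirrors the upsample-and-concatenate step just described.

```python
# Encoder block: 3x3 convolution + ReLU + 2x max-pool downsampling.
# Decoder block: 2x upsample, concatenate the encoder skip feature, convolve.
import torch
import torch.nn as nn

def encoder_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2))                          # 2x downsampling

class DecoderBlock(nn.Module):
    def __init__(self, c_in: int, c_skip: int, c_out: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Sequential(
            nn.Conv2d(c_in + c_skip, c_out, kernel_size=3, padding=1),
            nn.ReLU())

    def forward(self, x, skip):
        x = self.up(x)                            # upsample low-dim latents
        return self.conv(torch.cat([x, skip], dim=1))   # concatenate + conv
```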
In an optional implementation of the embodiment of the present invention, inputting the sample coded image into the pre-established at least one self-attention model to obtain a sample self-attention image corresponding to the target coded image may include: inputting the sample coded image into the pre-established self-attention model; performing a linear transformation based on the sample coded image to obtain a first parameter matrix to be adjusted, a second parameter matrix to be adjusted, and a third parameter matrix to be adjusted of the self-attention model; determining a similarity matrix corresponding to the sample coded image based on the first and second parameter matrices to be adjusted; weighting the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image; and determining the sample self-attention image corresponding to the target coded image based on at least two weighted feature images and the sample coded image. As shown in FIG. 5, the first parameter matrix to be adjusted can be denoted q, the second k, and the third v.
The linear transformation applies a straight-line equation to the data of the sample coded image to obtain the first, second, and third parameter matrices to be adjusted of the self-attention model. Its purpose is to let the sample coded image highlight the regions of interest, facilitating subsequent processing. The similarity matrix is computed from the first and second parameter matrices to be adjusted of the sample coded image, where the similarity matrix describes the relationship between each position in the sample coded image and every other position. Weighting the similarity matrix with the third parameter matrix to be adjusted specifically means multiplying the similarity matrix by the third parameter matrix to be adjusted as a weight matrix to obtain the weighted feature image.
Specifically, performing a linear transformation based on the sample coded image to obtain the first, second, and third parameter matrices to be adjusted of the self-attention model may include:
q = W_q R
k = W_k R
v = W_v R
where R denotes the sample coded image, q the first parameter matrix to be adjusted, k the second parameter matrix to be adjusted, v the third parameter matrix to be adjusted, W_q a randomly initialized matrix corresponding to q, W_k a randomly initialized matrix corresponding to k, and W_v a randomly initialized matrix corresponding to v. In this embodiment, the self-attention model randomly initializes the parameter matrices to be adjusted, which can improve the computation speed of the self-attention model and help it converge to the global optimum as far as possible.
In an optional implementation of the embodiment of the present invention, determining the similarity matrix corresponding to the sample coded image based on the first and second parameter matrices to be adjusted may include: determining every pixel in the sample coded image, one by one, as a target pixel; for each target pixel, computing, based on the first and second parameter matrices to be adjusted, the pixel similarity between the target pixel and all pixels in the sample coded image; and constructing the similarity matrix corresponding to the sample coded image based on the position of each target pixel in the sample coded image and the individual pixel similarities.
Concretely, the information of every pixel of the sample coded image is obtained; the pixel information can include the position of each pixel in the sample coded image and the individual pixel similarities. The similarity matrix corresponding to the sample coded image is then constructed, so that the dependency between the position of each pixel in the image and all other pixel positions is learned and the global context information of the sample coded image is obtained.
In an optional implementation of the embodiment of the present invention, computing the pixel similarity between the target pixel and all pixels in the sample coded image based on the first and second parameter matrices to be adjusted can specifically be implemented with the following formula:
Ω_(i,j) = (Σ_n q_(i,n) · t_(n,j)) / √(d/c)
where (i, j) denotes the position in row i, column j of the sample coded image; Ω_(i,j) denotes the similarity at the position in row i, column j of the similarity matrix; q denotes the first parameter matrix to be adjusted; k denotes the second parameter matrix to be adjusted; q_(i,n) denotes the element in row i, column n of the first parameter matrix q; t_(n,j) denotes the element in row n, column j of the matrix t, where t is the transpose of the second parameter matrix k; d denotes the dimension of the second parameter matrix k; and c denotes the number of channels of the input image.
Here, the denominator √(d/c) applies a scaling operation to the sample coded image. The scaling operation changes the spatial positions of the pixels of the sample coded image in the new image and gives the pixel similarity computation a stable gradient. By computing the pixel similarities of the sample coded image, the dependency between the current pixel and the other pixels of the current image can be obtained, which improves the ability to capture long-distance dependencies in the image.
In an optional implementation of the embodiment of the present invention, weighting the similarity matrix based on the third parameter matrix to be adjusted to obtain a weighted feature image may include:
normalizing the similarity matrix;
weighting the normalized similarity matrix based on the third parameter matrix to be adjusted to obtain the weighted feature image.
Weighting the normalized similarity matrix based on the third parameter matrix to be adjusted is specifically implemented with the following formula:
A(q,k,v)_(i,j) = Σ_{n=1..H_0·W_0} Ω′_(i,n) · v_(n,j)
where A(q,k,v)_(i,j) denotes the weighted feature value in row i, column j of the weighted feature image A obtained from the matrices q, k, and v; v denotes the third parameter matrix to be adjusted; H_0 denotes the target output length of the sample feature map; W_0 denotes the target output width of the sample feature map; Ω′ denotes the normalized similarity matrix; Ω′_(i,n) denotes the element in row i, column n of the normalized similarity matrix Ω′; and v_(n,j) denotes the element in row n, column j of the third parameter matrix v.
In the embodiment of the present invention, the similarity matrix is normalized and then weighted by the third parameter matrix to be adjusted, and the weighted feature value of the current pixel is computed, which improves the reliability of the features extracted from the sample coded image and yields a more effective weighted feature image.
In an optional implementation of the embodiment of the present invention, determining the sample self-attention image corresponding to the target coded image based on at least two weighted feature images and the sample coded image may include: fusing at least two weighted feature images to obtain a fused feature image; adjusting the feature dimension of the fused feature image to a target feature dimension and adding the fused feature image adjusted to the target feature dimension to the sample coded image to obtain a target dimension image; inputting the target dimension image into at least one fully connected layer of the self-attention model to obtain an output dimension image; and adjusting the output dimension image to the feature dimension of the fused feature image to obtain the sample self-attention image corresponding to the target coded image.
The target feature dimension can be understood as the number of channels of the target feature: for example, one channel is one-dimensional, two channels are two-dimensional, and n channels are n-dimensional. Specifically, the fused feature image A' is obtained by fusing multiple weighted feature images along the channel dimension:
A' = A_1 + A_2 + … + A_n
where n is the number of channels of the weighted feature images. After A' is obtained, the feature dimension of the fused feature image is adjusted to the target feature dimension, and the fused feature image C adjusted to the target feature dimension is added to the sample coded image R to obtain the target dimension image C':
C' = C + R
Preferably, the self-attention model includes two fully connected layers, and the output dimension image can be:
S = conv(dense(dense(C')) + C')
where S denotes the output dimension image and dense denotes a fully connected layer whose activation function is the rectified linear unit (ReLU); conv denotes a convolution layer used to unify the feature dimension. In this embodiment, the self-attention model includes two fully connected layers; every neuron in a fully connected layer is fully connected to all neurons of the preceding layer, and the fully connected layers can integrate the class-discriminative local information in the convolution layers. To improve the performance of the self-attention model, the activation function of each neuron in the fully connected layers generally adopts the rectified linear unit.
It will be appreciated that training an image segmentation model usually requires a large amount of sample image data to guarantee model accuracy. Considering the practical difficulty of acquiring sample image data, the technical solution of the present invention also augments the sample image data. Specifically, new sample image data can be obtained by preprocessing the original sample image data, where the preprocessing includes, but is not limited to, slicing, cropping, windowing, or mosaic slice permutation.
In an optional implementation of the embodiments of the present invention, the method further includes: cropping the acquired original sample image data into at least two image slices and stitching the at least two image slices together to obtain new sample image data.
By way of example, mosaic slice permutation crops the original sample image data and its labels into at least two image slices of different sizes and then stitches these slices together at random into the size of the original sample image data, yielding new sample image data in which the target pixels are distributed more richly and evenly across the whole image. This speeds up the convergence of the model, enlarges the number of training samples, and strengthens the robustness of the network.
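For the two-dimensional case, one valid permutation of this kind can be sketched with a random split point. Rolling both axes swaps four slices of unequal size while preserving the original image size, and applying the same roll to the label keeps image and annotation aligned; the random restitching described above is more general than this single permutation.

```python
import numpy as np

def mosaic_permute(image, label, rng=np.random.default_rng()):
    """Crop image and label into four slices of different sizes and restitch them."""
    h, w = image.shape[:2]
    ch = int(rng.integers(1, h))       # random split row, so the slices differ in size
    cw = int(rng.integers(1, w))       # random split column
    # Rolling moves the pixel at (ch, cw) to the origin, swapping the four slices.
    new_image = np.roll(image, shift=(-ch, -cw), axis=(0, 1))
    new_label = np.roll(label, shift=(-ch, -cw), axis=(0, 1))
    return new_image, new_label
```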
S220: Acquire at least one to-be-segmented image.
S230: Input the to-be-segmented image into the pre-trained image segmentation model to obtain the target segmentation image corresponding to the to-be-segmented image, where the image segmentation model is built from an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the to-be-segmented image and all pixels in the image.
Optionally, after the target segmentation image corresponding to the to-be-segmented image is obtained, the method further includes: performing multi-dimensional reconstruction on the target segmentation image to obtain a multi-dimensionally reconstructed image. The multi-dimensional reconstruction methods may include, but are not limited to, ray casting, texture mapping, or slice-level reconstruction. Multi-dimensional reconstruction of the target segmentation image makes the image more convenient to inspect and improves the user experience.
In the technical solution of this embodiment, an image segmentation model is generated by training a pre-established initial network model based on multiple sets of training sample data, where the training sample data include sample image data and the sample target segmentation images corresponding to the sample to-be-segmented images; at least one to-be-segmented image is acquired; and the to-be-segmented image is input into the pre-trained image segmentation model to obtain the target segmentation image corresponding to the to-be-segmented image, where the image segmentation model is built from an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the to-be-segmented image and all pixels in the image. When segmenting images, this technical solution relies on the encoder, the decoder, and the self-attention model, so that during image processing the image segmentation model can effectively capture long-range dependencies in the image and thereby segment images efficiently and accurately.
Embodiment 3
Fig. 6 is a schematic structural diagram of an image segmentation apparatus provided by Embodiment 3 of the present invention. The image segmentation apparatus provided by this embodiment can be implemented in software and/or hardware and can be configured in a terminal and/or a server to carry out the image segmentation method of the embodiments of the present invention. The apparatus may specifically include an image acquisition module 310 and an image segmentation module 320.
The image acquisition module 310 is configured to acquire at least one to-be-segmented image; the image segmentation module 320 is configured to input the to-be-segmented image into the pre-trained image segmentation model to obtain the target segmentation image corresponding to the to-be-segmented image, where the image segmentation model is built from an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the to-be-segmented image and all pixels in the image.
The embodiments of the present invention provide an image segmentation apparatus that acquires at least one to-be-segmented image and inputs it into a pre-trained image segmentation model to obtain the corresponding target segmentation image, where the image segmentation model is built from an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the to-be-segmented image and all pixels in the image. When segmenting images with this technical solution, the encoder performs an initial abstraction and compression of the features of the to-be-segmented image, mapping high-dimensional data to low-dimensional data and reducing the data volume; the decoder reproduces the features of the to-be-segmented image; and the self-attention model effectively captures long-range dependencies in the image, so that images are segmented efficiently and accurately.
On the basis of any optional technical solution of the embodiments of the present invention, optionally, the image segmentation module 320 may include:
an image encoding unit configured to input the to-be-segmented image into a pre-trained encoder to obtain a target encoded image corresponding to the to-be-segmented image;
a self-attention segmentation unit configured to input the target encoded image into at least one pre-trained self-attention model to obtain a self-attention segmentation image corresponding to the target encoded image; and
an image decoding unit configured to input the self-attention segmentation image into a pre-trained decoder to obtain the target segmentation image corresponding to the to-be-segmented image.
On the basis of any optional technical solution of the embodiments of the present invention, optionally, if the target encoded image is a planar image, the image segmentation model includes a first transform layer and a second transform layer;
the image segmentation module 320 may further be configured to:
input the target encoded image into the first transform layer to convert the target encoded image from two-dimensional image features into one-dimensional image features;
and, before the self-attention segmentation image is input into the pre-trained decoder, further configured to:
input the self-attention segmentation image into the second transform layer to convert the self-attention segmentation image from one-dimensional image features into two-dimensional image features.
On the basis of any optional technical solution of the embodiments of the present invention, optionally, the image segmentation apparatus may further include an image segmentation model training module configured to train a pre-established initial network model based on multiple sets of training sample data to generate the image segmentation model, where the training sample data include sample image data and the sample target segmentation images corresponding to the sample to-be-segmented images.
On the basis of any optional technical solution of the embodiments of the present invention, optionally, the image segmentation model training module may include:
a sample encoding unit configured to input the sample image data into a pre-established encoder to obtain a sample encoded image corresponding to the to-be-segmented image;
a sample self-attention image generation unit configured to input the sample encoded image into at least one pre-established self-attention model to obtain a sample self-attention image corresponding to the target encoded image; and
a sample decoding unit configured to input the sample self-attention image into a pre-established decoder to obtain the target segmentation image corresponding to the to-be-segmented image.
On the basis of any optional technical solution of the embodiments of the present invention, optionally, the sample self-attention image generation unit may include:
an image input subunit configured to input the sample encoded image into a pre-established self-attention model;
a linear transformation subunit configured to perform a linear transformation on the sample encoded image to obtain the first, second, and third to-be-adjusted parameter matrices of the self-attention model;
a similarity matrix determination subunit configured to determine the similarity matrix corresponding to the sample encoded image based on the first and second to-be-adjusted parameter matrices;
a matrix weighting subunit configured to weight the similarity matrix based on the third to-be-adjusted parameter matrix to obtain a weighted feature image; and
an image determination subunit configured to determine the sample self-attention image corresponding to the target encoded image based on at least two weighted feature images and the sample encoded image.
On the basis of any optional technical solution of the embodiments of the present invention, optionally, the similarity matrix determination subunit may be configured to:
take each pixel of the sample encoded image in turn as a target pixel;
for each target pixel, compute, based on the first and second to-be-adjusted parameter matrices, the pixel similarity between the target pixel and all pixels of the sample encoded image; and
construct the similarity matrix corresponding to the sample encoded image based on the position of each target pixel in the sample encoded image and the individual pixel similarities.
On the basis of any optional technical solution of the embodiments of the present invention, optionally, the similarity matrix determination subunit may specifically implement the following formula:
$$\Omega_{(i,j)} = \frac{\sum_{n} q_{(i,n)}\, t_{(n,j)}}{\sqrt{d/c}}$$
where (i, j) denotes the position in row i and column j of the sample encoded image, Ω_(i,j) denotes the similarity at the position in row i and column j of the similarity matrix, q denotes the first to-be-adjusted parameter matrix, k denotes the second to-be-adjusted parameter matrix, q_(i,n) denotes the element in row i and column n of the first to-be-adjusted parameter matrix q, and t_(n,j) denotes the element in row n and column j of the matrix t, the matrix t being the transpose of the second to-be-adjusted parameter matrix k; d denotes the dimension of the second to-be-adjusted parameter matrix k, and c denotes the number of channels of the input image.
On the basis of any optional technical solution of the embodiments of the present invention, optionally, the matrix weighting subunit may specifically be configured to:
normalize the similarity matrix; and
weight the normalized similarity matrix based on the third to-be-adjusted parameter matrix to obtain the weighted feature image, implemented concretely by the following formula:
$$A(q,k,v)_{(i,j)} = \sum_{n=1}^{H_0 \times W_0} \Omega'_{(i,n)}\, v_{(n,j)}$$
where A(q,k,v)_(i,j) denotes the weighted feature value in row i and column j of the weighted feature image A obtained from the matrices q, k, and v; v denotes the third to-be-adjusted parameter matrix; H_0 denotes the target output length of the sample feature map and W_0 its target output width; and Ω′ denotes the normalized similarity matrix, Ω′_(i,n) the element in row i and column n of Ω′, and v_(n,j) the element in row n and column j of v.
On the basis of any optional technical solution of the embodiments of the present invention, optionally, the image determination subunit is specifically configured to:
fuse at least two weighted feature images to obtain a fused feature image;
adjust the feature dimension of the fused feature image to a target feature dimension and add the fused feature image adjusted to the target feature dimension to the sample encoded image to obtain a target-dimension image;
input the target-dimension image into at least one fully connected layer of the self-attention model to obtain an output-dimension image; and
adjust the output-dimension image to the feature dimension of the fused feature image to obtain the sample self-attention image corresponding to the target encoded image.
On the basis of any optional technical solution of the embodiments of the present invention, optionally, the image segmentation model training module may further be configured to:
crop the acquired original sample image data into at least two image slices and stitch the at least two image slices together to obtain new sample image data.
The image segmentation apparatus described above can execute the image segmentation method provided by any embodiment of the present invention and possesses the functional modules and beneficial effects corresponding to executing that method.
Embodiment 4
Fig. 7 is a schematic structural diagram of an image segmentation device provided by Embodiment 4 of the present invention. Fig. 7 shows a block diagram of an exemplary image segmentation device 12 suitable for implementing the embodiments of the present invention. The image segmentation device 12 shown in Fig. 7 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present invention.
As shown in Fig. 7, the image segmentation device 12 takes the form of a general-purpose computing device. Its components may include, but are not limited to, one or more processors or processing units 16, a system memory 28, and a bus 18 connecting the different system components (including the system memory 28 and the processing unit 16).
The bus 18 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, a processor, or a local bus using any of a variety of bus architectures. By way of example, these architectures include, but are not limited to, the Industry Standard Architecture (ISA) bus, the Micro Channel Architecture (MCA) bus, the Enhanced ISA bus, the Video Electronics Standards Association (VESA) local bus, and the Peripheral Component Interconnect (PCI) bus.
The image segmentation device 12 typically includes a variety of computer-system-readable media. These media may be any available media accessible by the image segmentation device 12, including volatile and non-volatile media and removable and non-removable media.
The system memory 28 may include computer-system-readable media in the form of volatile memory, such as random access memory (RAM) 30 and/or cache memory 32. The image segmentation device 12 may further include other removable/non-removable, volatile/non-volatile computer system storage media. By way of example only, the storage system 34 may be used to read from and write to non-removable, non-volatile magnetic media (not shown in Fig. 7, commonly referred to as a "hard drive"). Although not shown in Fig. 7, a magnetic disk drive for reading from and writing to a removable non-volatile magnetic disk (e.g., a "floppy disk") and an optical disc drive for reading from and writing to a removable non-volatile optical disc (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 18 through one or more data media interfaces. The system memory 28 may include at least one program product having a set (e.g., at least one) of program modules configured to perform the functions of the embodiments of the present invention.
A program/utility 40 having a set (at least one) of program modules 42 may be stored, for example, in the system memory 28. Such program modules 42 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination of them, may include an implementation of a network environment. The program modules 42 generally carry out the functions and/or methods of the embodiments described in the present invention.
The image segmentation device 12 may also communicate with one or more external devices 14 (e.g., a keyboard, a pointing device, a display 24, etc.), with one or more devices that enable a user to interact with the image segmentation device 12, and/or with any device (e.g., a network card, a modem, etc.) that enables the image segmentation device 12 to communicate with one or more other computing devices. Such communication may take place through input/output (I/O) interfaces 22. Moreover, the image segmentation device 12 may also communicate with one or more networks (e.g., a local area network (LAN), a wide area network (WAN), and/or a public network such as the Internet) through a network adapter 20. As shown in Fig. 7, the network adapter 20 communicates with the other modules of the image segmentation device 12 through the bus 18. It should be understood that, although not shown in Fig. 7, other hardware and/or software modules may be used in conjunction with the image segmentation device 12, including but not limited to microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems.
The processing unit 16 executes various functional applications and data processing by running programs stored in the system memory 28, for example implementing the image segmentation method provided by the embodiments of the present invention.
Embodiment 5
Embodiment 5 of the present invention further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform an image segmentation method, the method including:
acquiring at least one to-be-segmented image; and inputting the to-be-segmented image into a pre-trained image segmentation model to obtain the target segmentation image corresponding to the to-be-segmented image, where the image segmentation model is built from an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the to-be-segmented image and all pixels in the image.
The computer storage medium of the embodiments of the present invention may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by, or in conjunction with, an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by, or in conjunction with, an instruction execution system, apparatus, or device.
Program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to wireless, wire, optical cable, RF, and the like, or any suitable combination of the above.
Computer program code for carrying out the operations of the embodiments of the present invention may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions can be made without departing from the scope of protection of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to the above embodiments and may include many other equivalent embodiments without departing from the concept of the present invention, the scope of which is determined by the appended claims.

Claims (14)

  1. An image segmentation method, characterized by comprising:
    acquiring at least one to-be-segmented image; and
    inputting the to-be-segmented image into a pre-trained image segmentation model to obtain a target segmentation image corresponding to the to-be-segmented image;
    wherein the image segmentation model is built from an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the to-be-segmented image and all pixels in the image.
  2. The method according to claim 1, characterized in that inputting the to-be-segmented image into the pre-trained image segmentation model to obtain the target segmentation image corresponding to the to-be-segmented image comprises:
    inputting the to-be-segmented image into a pre-trained encoder to obtain a target encoded image corresponding to the to-be-segmented image;
    inputting the target encoded image into at least one pre-trained self-attention model to obtain a self-attention segmentation image corresponding to the target encoded image; and
    inputting the self-attention segmentation image into a pre-trained decoder to obtain the target segmentation image corresponding to the to-be-segmented image.
  3. The method according to claim 2, characterized in that, if the target encoded image is a planar image, the image segmentation model comprises a first transform layer and a second transform layer;
    after the target encoded image corresponding to the to-be-segmented image is obtained and before the target encoded image is input into the at least one pre-trained self-attention model, the method further comprises:
    inputting the target encoded image into the first transform layer to convert the target encoded image from two-dimensional image features into one-dimensional image features;
    and before the self-attention segmentation image is input into the pre-trained decoder, the method further comprises:
    inputting the self-attention segmentation image into the second transform layer to convert the self-attention segmentation image from one-dimensional image features into two-dimensional image features.
  4. The method according to claim 1, characterized by further comprising:
    training a pre-established initial network model based on multiple sets of training sample data to generate the image segmentation model, wherein the training sample data comprise sample image data and sample target segmentation images corresponding to the sample to-be-segmented images.
  5. The method according to claim 4, characterized in that training the pre-established initial network model based on the multiple sets of training sample data comprises:
    inputting the sample image data into a pre-established encoder to obtain a sample encoded image corresponding to the to-be-segmented image;
    inputting the sample encoded image into at least one pre-established self-attention model to obtain a sample self-attention image corresponding to the target encoded image; and
    inputting the sample self-attention image into a pre-established decoder to obtain the target segmentation image corresponding to the to-be-segmented image.
  6. The method according to claim 5, characterized in that inputting the sample encoded image into the at least one pre-established self-attention model to obtain the sample self-attention image corresponding to the target encoded image comprises:
    inputting the sample encoded image into a pre-established self-attention model;
    performing a linear transformation on the sample encoded image to obtain a first to-be-adjusted parameter matrix, a second to-be-adjusted parameter matrix, and a third to-be-adjusted parameter matrix of the self-attention model;
    determining a similarity matrix corresponding to the sample encoded image based on the first to-be-adjusted parameter matrix and the second to-be-adjusted parameter matrix;
    weighting the similarity matrix based on the third to-be-adjusted parameter matrix to obtain a weighted feature image; and
    determining the sample self-attention image corresponding to the target encoded image based on at least two weighted feature images and the sample encoded image.
  7. The method according to claim 6, characterized in that determining the similarity matrix corresponding to the sample encoded image based on the first to-be-adjusted parameter matrix and the second to-be-adjusted parameter matrix comprises:
    taking each pixel of the sample encoded image in turn as a target pixel;
    for each target pixel, computing, based on the first to-be-adjusted parameter matrix and the second to-be-adjusted parameter matrix, the pixel similarity between the target pixel and all pixels of the sample encoded image; and
    constructing the similarity matrix corresponding to the sample encoded image based on the position of each target pixel in the sample encoded image and the individual pixel similarities.
  8. The method according to claim 7, characterized in that computing, based on the first to-be-adjusted parameter matrix and the second to-be-adjusted parameter matrix, the pixel similarity between the target pixel and all pixels of the sample encoded image comprises:
    $$\Omega_{(i,j)} = \frac{\sum_{n} q_{(i,n)}\, t_{(n,j)}}{\sqrt{d/c}}$$
    wherein (i, j) denotes the position in row i and column j of the sample encoded image, Ω_(i,j) denotes the similarity at the position in row i and column j of the similarity matrix, q denotes the first to-be-adjusted parameter matrix, k denotes the second to-be-adjusted parameter matrix, q_(i,n) denotes the element in row i and column n of the first to-be-adjusted parameter matrix q, and t_(n,j) denotes the element in row n and column j of the matrix t, the matrix t being the transpose of the second to-be-adjusted parameter matrix k; d denotes the dimension of the second to-be-adjusted parameter matrix k, and c denotes the number of channels of the input image.
  9. The method according to claim 8, characterized in that weighting the similarity matrix based on the third to-be-adjusted parameter matrix to obtain the weighted feature image comprises:
    normalizing the similarity matrix; and
    weighting the normalized similarity matrix based on the third to-be-adjusted parameter matrix to obtain the weighted feature image, implemented concretely by the following formula:
    $$A(q,k,v)_{(i,j)} = \sum_{n=1}^{H_0 \times W_0} \Omega'_{(i,n)}\, v_{(n,j)}$$
    wherein A(q,k,v)_(i,j) denotes the weighted feature value in row i and column j of the weighted feature image A obtained from the matrices q, k, and v; v denotes the third to-be-adjusted parameter matrix; H_0 denotes the target output length of the sample feature map and W_0 its target output width; and Ω′ denotes the normalized similarity matrix, Ω′_(i,n) the element in row i and column n of the normalized similarity matrix Ω′, and v_(n,j) the element in row n and column j of the third to-be-adjusted parameter matrix v.
  10. The method according to claim 6, characterized in that determining the sample self-attention image corresponding to the target encoded image based on the at least two weighted feature images and the sample encoded image comprises:
    fusing the at least two weighted feature images to obtain a fused feature image;
    adjusting the feature dimension of the fused feature image to a target feature dimension and adding the fused feature image adjusted to the target feature dimension to the sample encoded image to obtain a target-dimension image;
    inputting the target-dimension image into at least one fully connected layer of the self-attention model to obtain an output-dimension image; and
    adjusting the output-dimension image to the feature dimension of the fused feature image to obtain the sample self-attention image corresponding to the target encoded image.
  11. The method according to claim 4, characterized by further comprising:
    cropping the acquired original sample image data into at least two image slices and stitching the at least two image slices together to obtain new sample image data.
  12. An image segmentation apparatus, characterized by comprising:
    an image acquisition module configured to acquire at least one to-be-segmented image; and
    an image segmentation module configured to input the to-be-segmented image into a pre-trained image segmentation model to obtain a target segmentation image corresponding to the to-be-segmented image;
    wherein the image segmentation model is built from an encoder, a decoder, and at least one self-attention model, and the self-attention model is used to determine the dependency between each pixel in the to-be-segmented image and all pixels in the image.
  13. An image segmentation device, characterized in that the image segmentation device comprises:
    one or more processors; and
    a storage means for storing one or more programs;
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the image segmentation method according to any one of claims 1 to 11.
  14. A computer-readable storage medium on which a computer program is stored, characterized in that, when executed by a processor, the computer program implements the image segmentation method according to any one of claims 1 to 11.
PCT/CN2021/138027 2021-05-21 2021-12-14 Image segmentation method, apparatus, device, and storage medium WO2022242131A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110558675.8 2021-05-21
CN202110558675.8A CN113159056B (zh) 2021-05-21 2021-05-21 Image segmentation method, apparatus, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022242131A1 true WO2022242131A1 (zh) 2022-11-24

Family

ID=76877160

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/138027 WO2022242131A1 (zh) 2021-05-21 2021-12-14 Image segmentation method, apparatus, device, and storage medium

Country Status (2)

Country Link
CN (1) CN113159056B (zh)
WO (1) WO2022242131A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116342888A (zh) * 2023-05-25 2023-06-27 之江实验室 Method and apparatus for training a segmentation model based on sparse annotations
CN116543147A (zh) * 2023-03-10 2023-08-04 武汉库柏特科技有限公司 Carotid artery ultrasound image segmentation method, apparatus, device, and storage medium
CN117408997A (zh) * 2023-12-13 2024-01-16 安徽省立医院（中国科学技术大学附属第一医院） Auxiliary detection system for EGFR gene mutations in histological images of non-small-cell lung cancer

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113326851B (zh) * 2021-05-21 2023-10-27 中国科学院深圳先进技术研究院 Image feature extraction method, apparatus, electronic device, and storage medium
CN113159056B (zh) * 2021-05-21 2023-11-21 中国科学院深圳先进技术研究院 Image segmentation method, apparatus, device, and storage medium
CN114185100B (zh) * 2021-12-10 2024-05-24 湖南五维地质科技有限公司 Method for extracting fine target bodies from transient electromagnetic data
CN114092817B (zh) * 2021-12-14 2022-04-01 深圳致星科技有限公司 Object detection method, storage medium, electronic device, and object detection apparatus
CN115880309A (zh) * 2023-02-27 2023-03-31 耕宇牧星（北京）空间科技有限公司 Forest tree image segmentation method based on a multi-layer recurrent encoder-decoder network

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109872306A (zh) * 2019-01-28 2019-06-11 腾讯科技（深圳）有限公司 Medical image segmentation method, apparatus, and storage medium
CN111429464A (zh) * 2020-03-11 2020-07-17 深圳先进技术研究院 Medical image segmentation method, medical image segmentation apparatus, and terminal device
CN111612790A (zh) * 2020-04-29 2020-09-01 杭州电子科技大学 Medical image segmentation method based on a T-shaped attention structure
CN111951281A (zh) * 2020-08-10 2020-11-17 中国科学院深圳先进技术研究院 Image segmentation method, apparatus, device, and storage medium
CN111951280A (zh) * 2020-08-10 2020-11-17 中国科学院深圳先进技术研究院 Image segmentation method, apparatus, device, and storage medium
CN112233135A (zh) * 2020-11-11 2021-01-15 清华大学深圳国际研究生院 Retinal vessel segmentation method in fundus images and computer-readable storage medium
CN113159056A (zh) * 2021-05-21 2021-07-23 中国科学院深圳先进技术研究院 Image segmentation method, apparatus, device, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11152013B2 * 2018-08-02 2021-10-19 Arizona Board Of Regents On Behalf Of Arizona State University Systems and methods for a triplet network with attention for speaker diarization
CN115885289A (zh) * 2020-09-16 2023-03-31 谷歌有限责任公司 Modeling dependencies with global self-attention neural networks


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116543147A (zh) * 2023-03-10 2023-08-04 武汉库柏特科技有限公司 Carotid artery ultrasound image segmentation method, apparatus, device, and storage medium
CN116342888A (zh) * 2023-05-25 2023-06-27 之江实验室 Method and apparatus for training a segmentation model based on sparse annotations
CN116342888B (zh) * 2023-05-25 2023-08-11 之江实验室 Method and apparatus for training a segmentation model based on sparse annotations
CN117408997A (zh) * 2023-12-13 2024-01-16 安徽省立医院（中国科学技术大学附属第一医院） Auxiliary detection system for EGFR gene mutations in histological images of non-small-cell lung cancer
CN117408997B (zh) * 2023-12-13 2024-03-08 安徽省立医院（中国科学技术大学附属第一医院） Auxiliary detection system for EGFR gene mutations in histological images of non-small-cell lung cancer

Also Published As

Publication number Publication date
CN113159056B (zh) 2023-11-21
CN113159056A (zh) 2021-07-23

Similar Documents

Publication Publication Date Title
WO2022242131A1 (zh) Image segmentation method, apparatus, device, and storage medium
EP3511942B1 (en) Cross-domain image analysis using deep image-to-image networks and adversarial networks
JP7373554B2 (ja) クロスドメイン画像変換
US11594006B2 (en) Self-supervised hierarchical motion learning for video action recognition
AU2019268184B2 (en) Precise and robust camera calibration
WO2022242127A1 (zh) Image feature extraction method, apparatus, electronic device, and storage medium
CN114365156A (zh) Transfer learning for neural networks
WO2024021194A1 (zh) LiDAR point cloud segmentation method, apparatus, device, and storage medium
CN111242952B (zh) Image segmentation model training method, image segmentation method, apparatus, and computing device
CN111091010A (zh) Similarity determination, network training, and retrieval methods and apparatus, and storage medium
KR20220038996A (ko) Feature embedding method and apparatus
CN112396605B (zh) Network training method and apparatus, image recognition method, and electronic device
CN113807361A (zh) Neural network, object detection method, neural network training method, and related products
CN116129141A (zh) Medical data processing method, apparatus, device, medium, and computer program product
US11961266B2 (en) Multiview neural human prediction using implicit differentiable renderer for facial expression, body pose shape and clothes performance capture
CN116597260A (zh) Image processing method, electronic device, storage medium, and computer program product
WO2022208440A1 (en) Multiview neural human prediction using implicit differentiable renderer for facial expression, body pose shape and clothes performance capture
CN115994558A (zh) Pre-training method, apparatus, device, and storage medium for a medical image encoding network
Wang et al. Swimmer’s posture recognition and correction method based on embedded depth image skeleton tracking
CN111598904B (zh) Image segmentation method, apparatus, device, and storage medium
CN114549992A (zh) Cross-resolution building image extraction method and apparatus
Molnár et al. Variational autoencoders for 3D data processing
US20240203052A1 (en) Replicating physical environments and generating 3d assets for synthetic scene generation
Ma et al. Depth Estimation from Monocular Images Using Dilated Convolution and Uncertainty Learning
CN118279488A (zh) XR virtual positioning method, medium, and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21940574

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21940574

Country of ref document: EP

Kind code of ref document: A1