CN111583282B - Image segmentation method, device, equipment and storage medium - Google Patents

Image segmentation method, device, equipment and storage medium

Info

Publication number
CN111583282B
CN111583282B (Application CN202010419140.8A)
Authority
CN
China
Prior art keywords
image
auxiliary
frame
initial feature
feature map
Prior art date
Legal status
Active
Application number
CN202010419140.8A
Other languages
Chinese (zh)
Other versions
CN111583282A (en)
Inventor
Wei Yanan (魏亚男)
Tian Jiang (田疆)
Current Assignee
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date
Filing date
Publication date
Application filed by Lenovo Beijing Ltd
Priority to CN202010419140.8A
Publication of CN111583282A
Application granted
Publication of CN111583282B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10024: Color image
    • G06T2207/10072: Tomographic images
    • G06T2207/10081: Computed x-ray tomography [CT]
    • G06T2207/20: Special algorithmic details
    • G06T2207/20092: Interactive image processing based on input by user
    • G06T2207/20104: Interactive definition of region of interest [ROI]
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30004: Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the application disclose an image segmentation method, apparatus, device, and storage medium. A target initial feature map sequence of a target image and an auxiliary initial feature map sequence of at least one frame of auxiliary image are acquired; the auxiliary initial feature map sequence of each frame of auxiliary image is corrected at least according to the region segmentation prior knowledge of that frame, yielding a corrected auxiliary initial feature map sequence for each frame; the corrected auxiliary initial feature map sequence of each frame is fused with the target initial feature map sequence to obtain a fusion feature; and a region segmentation result of the target image is determined using the fusion feature. Image segmentation accuracy is thereby improved.

Description

Image segmentation method, device, equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image segmentation method, apparatus, device, and storage medium.
Background
Image segmentation refers to extracting a region of interest from an image. Current deep-learning-based image segmentation methods depend heavily on training samples, but in some fields training samples are difficult to collect, so their number is small, and this shortage of training samples leads to low segmentation accuracy.
Disclosure of Invention
The application aims to provide an image segmentation method, device, equipment, and storage medium, with the following technical scheme:
An image segmentation method, comprising:
acquiring a target initial feature map sequence of a target image and an auxiliary initial feature map sequence of at least one auxiliary image;
Correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
fusing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the target initial feature image sequence to obtain fusion features;
and determining a region segmentation result of the target image by utilizing the fusion characteristic.
In the above method, preferably, the correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the prior knowledge of the region segmentation of each frame of auxiliary image includes:
And for each frame of auxiliary image, respectively correcting the target area and the non-target area of the auxiliary initial characteristic image sequence of the frame of auxiliary image by using the prior knowledge of the area segmentation of the frame of auxiliary image.
In the above method, preferably, the correcting the target region and the non-target region of the auxiliary initial feature map sequence of the frame auxiliary image by using the prior knowledge of region segmentation of the frame auxiliary image includes:
For each auxiliary initial feature map in the auxiliary initial feature map sequence of the frame auxiliary image, determining a first weight of a target region of the auxiliary initial feature map and a second weight of a non-target region of the auxiliary initial feature map by using region segmentation priori knowledge of the frame auxiliary image;
And according to the first weight and the second weight, weighting and summing the target area and the non-target area of the auxiliary initial feature map to obtain the corrected auxiliary initial feature map of the auxiliary initial feature map.
In the above method, preferably, the correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the prior knowledge of the region segmentation of each frame of auxiliary image includes:
And correcting the auxiliary initial feature map sequence of each frame of auxiliary image according to the region segmentation priori knowledge of each frame of auxiliary image and the target initial feature map sequence.
In the above method, preferably, the correcting the auxiliary initial feature map sequence of each frame of auxiliary image according to the region segmentation priori knowledge of each frame of auxiliary image and the target initial feature map sequence includes:
For each frame of auxiliary image, correcting a target area and a non-target area of an auxiliary initial feature image sequence of the frame of auxiliary image by using the area segmentation priori knowledge of the frame of auxiliary image to obtain an initial correction result of the auxiliary initial feature image sequence of the frame of auxiliary image;
And correcting an initial correction result of the auxiliary initial feature map sequence of the frame auxiliary image by using the target initial feature map sequence to obtain a corrected auxiliary initial feature map sequence of the frame auxiliary image.
In the above method, preferably, the correcting the initial correction result of the auxiliary initial feature map sequence of the frame auxiliary image by using the target initial feature map sequence includes:
Determining a third weight of each auxiliary initial feature map in the auxiliary initial feature map sequence of the frame auxiliary image by using the target initial feature map sequence;
And multiplying the initial correction result of each auxiliary initial feature image in the auxiliary initial feature image sequence of the frame auxiliary image by a third weight corresponding to the auxiliary initial feature image to obtain a corrected auxiliary initial feature image sequence of the frame auxiliary image.
In the above method, preferably, the fusing the corrected initial auxiliary feature map sequence of each frame of auxiliary image with the target initial feature map sequence includes:
Splicing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the target initial feature image sequence to obtain the fusion feature;
Or alternatively
Determining a fourth weight of each target initial feature map in the target initial feature map sequence according to the target initial feature map sequence; multiplying each target initial feature map in the target initial feature map sequence by a corresponding fourth weight to obtain a corrected target initial feature map sequence; and splicing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the corrected target initial feature image sequence to obtain the fusion feature.
The above method, preferably, the image segmentation method is implemented by an image segmentation model, and the image segmentation model is obtained by training in the following manner:
acquiring a sample initial feature map sequence of a sample image and an auxiliary initial feature map sequence of at least one auxiliary image through the image segmentation model;
Correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image through the image segmentation model to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
Fusing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the sample initial feature image sequence through the image segmentation model to obtain a first fusion feature;
determining a region segmentation result of the sample image by using the first fusion feature through the image segmentation model;
and updating, through the image segmentation model, parameters of the image segmentation model with at least the goal of making the region segmentation result of the sample image approach the region segmentation prior knowledge of the sample image.
The above method, preferably, further comprises:
Correcting the sample initial feature map sequence through the image segmentation model at least according to the region segmentation result of the sample image to obtain a corrected sample initial feature map sequence;
for each frame of auxiliary image, obtaining a second fusion feature corresponding to the frame of auxiliary image through fusion of the corrected sample initial feature image sequence and the auxiliary initial feature image sequence of the frame of auxiliary image;
Determining a region segmentation result of each frame of auxiliary image by using a second fusion characteristic corresponding to each frame of auxiliary image through the image segmentation model;
The updating, by the image segmentation model, of the parameters of the image segmentation model with at least the goal of making the region segmentation result of the sample image approach the region segmentation prior knowledge of the sample image includes:
updating the parameters of the image segmentation model with the goals of making the region segmentation result of the sample image approach the region segmentation prior knowledge of the sample image and making the region segmentation result of each frame of auxiliary image approach the region segmentation prior knowledge of that frame.
An image segmentation apparatus comprising:
the feature extraction processing module is used for acquiring a target initial feature map sequence of a target image and an auxiliary initial feature map sequence of at least one frame of auxiliary image;
the correction processing module is used for correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
the fusion processing module is used for fusing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the target initial feature image sequence to obtain fusion features;
and the segmentation processing module is used for determining a region segmentation result of the target image by utilizing the fusion characteristic.
An electronic device, comprising:
a memory for storing at least one set of instructions;
a processor for calling and executing the instruction set in the memory, by executing the instruction set:
acquiring a target initial feature map sequence of a target image and an auxiliary initial feature map sequence of at least one auxiliary image;
Correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
fusing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the target initial feature image sequence to obtain fusion features;
and determining a region segmentation result of the target image by utilizing the fusion feature.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the image segmentation method as set forth in any one of the preceding claims.
According to the above scheme, the image segmentation method, device, equipment, and storage medium provided by the application acquire a target initial feature map sequence of a target image and an auxiliary initial feature map sequence of at least one frame of auxiliary image; correct the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation prior knowledge of that frame, obtaining a corrected auxiliary initial feature map sequence for each frame; fuse the corrected auxiliary initial feature map sequence of each frame with the target initial feature map sequence to obtain a fusion feature; and determine a region segmentation result of the target image using the fusion feature. Because the corrected auxiliary feature map sequences carry the region segmentation prior knowledge of the auxiliary images, the fusion feature incorporates that prior knowledge, and segmenting with it lets the prior knowledge of the auxiliary images assist the region segmentation of the target image, thereby improving image segmentation accuracy.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed for the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of an implementation of an image segmentation method according to an embodiment of the present application;
FIG. 2 is a flowchart of an implementation of correcting a target region and a non-target region of an auxiliary initial feature map sequence of the frame auxiliary image using region segmentation priori knowledge of the frame auxiliary image according to an embodiment of the present application;
FIG. 3 is a flowchart of an implementation of correcting an initial correction result of an auxiliary initial feature map sequence of an auxiliary image of a frame by using a target initial feature map sequence according to an embodiment of the present application;
FIG. 4 is a flowchart of an implementation of fusing a modified initial auxiliary feature map sequence of auxiliary images of each frame with a target initial feature map sequence according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an architecture of an image segmentation model according to an embodiment of the present application;
FIG. 6 is a flowchart of an implementation of training an image segmentation model according to an embodiment of the present application;
FIG. 7 is a flowchart of another implementation of training an image segmentation model according to an embodiment of the present application;
FIGS. 8a-8c are diagrams illustrating a framework for training an image segmentation model according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an image segmentation apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and in the above drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in other sequences than those illustrated herein.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without any inventive effort, are intended to be within the scope of the application.
An implementation flowchart of the image segmentation method provided by the application is shown in fig. 1, and may include:
step S11: a target initial feature map sequence of the target image and an auxiliary initial feature map sequence of at least one auxiliary image are acquired.
The target image is the image to be segmented. It may be a medical image, such as a CT image, in which case the segmentation is typically organ segmentation. It may also be an ordinary RGB image, such as one taken by a mobile phone or by another image capturing device, such as a digital camera or an industrial camera.
In the embodiment of the application, the auxiliary image is introduced when the target image is segmented. In addition to extracting features from the target image, feature extraction is performed on each frame of auxiliary image. And (3) carrying out feature extraction on the target image and the auxiliary image by adopting the same feature extraction method to obtain a feature image sequence, marking the feature image sequence extracted from the target image as a target initial feature image sequence for convenience of distinguishing, marking the feature image sequence extracted from each frame of auxiliary image as an auxiliary initial feature image sequence, namely extracting an auxiliary initial feature image sequence from each frame of auxiliary image.
The auxiliary image may be a single frame or multiple frames. Each auxiliary image belongs to the same field as the target image and contains the same kind of segmentation target, but the auxiliary images and the target image are different images.
For example, to segment a liver image from a CT image, i.e. to segment the target into a liver, the target image and the auxiliary image may both be abdominal CT images, but the auxiliary image and the target image are abdominal CT images acquired from different acquisition objects, or abdominal CT images acquired from the same acquisition object at different times.
For another example, to segment an eye image from a face image, i.e., segment the object into eyes, the object image and the auxiliary image may both be face images, but the auxiliary image and the object image are face images of different persons.
For another example, to segment a vehicle from an image with a vehicle, i.e., segment a target as a vehicle, both the target image and the auxiliary image are images with vehicles, but the auxiliary image and the target image are images with different vehicles.
Each frame of auxiliary image is associated with region segmentation priori knowledge, the region segmentation priori knowledge is a target region identification image in the auxiliary image, the values of a target region and a non-target region in the target region identification image are different, namely the target region identification image is a binary image. Taking an auxiliary image as an abdomen CT image as an example, assuming that the current objective is to segment a liver image from a target image, the region segmentation priori knowledge associated with each frame of auxiliary image is a liver region identification map in the frame of auxiliary image, and in the liver region identification map, a liver region and a non-liver region are characterized by different values. The prior knowledge of the region segmentation may be annotated by an expert in the field to which the auxiliary image belongs.
For another example, to perform ocular image segmentation, the region segmentation a priori knowledge associated with each frame of auxiliary image is an ocular region identification map in the frame of auxiliary image, the ocular region and non-ocular region in the ocular region identification map being characterized by different values.
For another example, to perform a vehicle segmentation, the region segmentation a priori knowledge associated with each frame of auxiliary image is a vehicle region identification map in the frame of auxiliary image, the vehicle region and non-vehicle regions in the vehicle region identification map being characterized by different values.
The at least one auxiliary image may be at least one auxiliary image randomly selected from a preset auxiliary image library, or may be at least one auxiliary image selected from a preset auxiliary image library according to a preset selection policy.
Step S12: and correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image to obtain the corrected auxiliary initial feature map sequence of each frame of auxiliary image.
Assuming that the number of the at least one frame of auxiliary images is K (K being a positive integer greater than or equal to 1), for the i-th frame of auxiliary image (i = 1, 2, …, K), the auxiliary initial feature map sequence of the i-th frame may be corrected at least using the region segmentation prior knowledge of the i-th frame, obtaining the corrected auxiliary initial feature map sequence of the i-th frame of auxiliary image.
Step S13: and fusing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the target initial feature image sequence to obtain fusion features.
Because the corrected initial auxiliary feature map sequence is obtained through the region segmentation priori knowledge of the auxiliary image, the fusion feature is a fusion feature which is integrated with the region segmentation priori knowledge of the auxiliary image.
Step S14: and determining a region segmentation result of the target image by using the fusion characteristic.
Because the fusion features are integrated with the prior knowledge of the region segmentation of the auxiliary image, the target image is segmented by the fusion features, the purpose of assisting the region segmentation of the target image by the prior knowledge of the region segmentation of the auxiliary image is achieved, and therefore the image segmentation precision is improved.
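As a concrete illustration of steps S11 to S14, the following is a minimal Python/NumPy sketch. The callables backbone, correct_fn, and seg_head, and the (c, h, w) layout of the feature map sequences, are illustrative assumptions, not elements specified by the patent.

```python
# A minimal end-to-end sketch of steps S11-S14; all names are hypothetical.
import numpy as np

def segment_target(target_image, aux_images, aux_priors,
                   backbone, correct_fn, seg_head):
    """aux_priors[i] is the binary target region identification map of the
    i-th auxiliary image (1 = target region, 0 = non-target region)."""
    f_que = backbone(target_image)       # S11: target initial feature map sequence
    parts = [f_que]
    for aux, prior in zip(aux_images, aux_priors):
        f_supp = backbone(aux)           # S11: auxiliary initial feature map sequence
        parts.append(correct_fn(f_supp, prior, f_que))  # S12: prior-guided correction
    fusion = np.concatenate(parts, axis=2)  # S13: splice along width -> (c, h, w*(K+1))
    return seg_head(fusion)              # S14: region segmentation result
```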
In an alternative embodiment, correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation prior knowledge of each frame may use only that prior knowledge, and may specifically include:
And for each frame of auxiliary image, respectively correcting the target area and the non-target area of the auxiliary initial characteristic image sequence of the frame of auxiliary image by using the prior knowledge of the area segmentation of the frame of auxiliary image.
That is, for the i-th frame auxiliary image, the target region and the non-target region of the auxiliary initial feature map sequence of the i-th frame auxiliary image are respectively corrected by using the region segmentation priori knowledge of the i-th frame auxiliary image.
The target area refers to an area to be divided, and the non-target area is an area other than the target area. The target region may also be referred to as a foreground region and the non-target region may also be referred to as a background region.
For example, to perform liver image segmentation, the target region may be a liver region, and the non-target region may be a non-liver region. For another example, the target region may be an eye region and the non-target region may be a non-eye region for eye image segmentation. For another example, if the vehicle is to be segmented, the target area may be a vehicle area and the non-target area may be a non-vehicle area.
In the embodiment of the application, for each frame of auxiliary image, different correction modes are adopted for correcting the target area and the non-target area of the auxiliary initial feature image sequence of the frame of auxiliary image.
Optionally, an implementation flowchart for correcting the target region and the non-target region of the auxiliary initial feature map sequence of the frame auxiliary image by using the prior knowledge of region segmentation of the frame auxiliary image is shown in fig. 2, which may include:
Step S21: for each auxiliary initial feature map in the sequence of auxiliary initial feature maps of the frame auxiliary image, determining a first weight of a target region of the auxiliary initial feature map and a second weight of a non-target region of the auxiliary initial feature map using prior knowledge of region segmentation of the frame auxiliary image.
The first weight represents the characteristic quality of the target area, and the larger the first weight is, the higher the characteristic quality of the target area is represented; the second weight characterizes the feature quality of the non-target region, and the larger the second weight is, the higher the feature quality of the non-target region is.
Optionally, for the auxiliary initial feature map of the j-th channel (j = 1, 2, …, c, where c is the number of channels of the auxiliary initial feature map sequence, i.e., the number of auxiliary initial feature maps in the sequence, each corresponding to one channel) in the auxiliary initial feature map sequence of the i-th frame auxiliary image, the first weight may be the distance between the target region of the auxiliary initial feature map of the j-th channel, delimited by the region segmentation prior knowledge of the i-th frame auxiliary image, and the auxiliary initial feature map of the j-th channel itself; the distance may be a cosine distance or another distance, such as a Euclidean distance.
Optionally, the second weight may likewise be the distance between the non-target region of the auxiliary initial feature map of the j-th channel and the auxiliary initial feature map of the j-th channel itself; again, the distance may be a cosine distance or another distance, such as a Euclidean distance.
The specific calculation of the first and second weights is described below taking the cosine distance as an example. For convenience, denote the first weight of the j-th channel in the auxiliary initial feature map sequence of the i-th frame auxiliary image as Q_fg[i][j], the second weight as Q_bg[i][j], the auxiliary initial feature map sequence of the i-th frame auxiliary image as F_i^supp, and the region segmentation prior knowledge of the i-th frame auxiliary image, i.e., its target region identification map, as M_i, in which the target region takes the value 1 and the non-target region takes the value 0. Then:
The first weight Q_fg[i][j] can be expressed as:
Q_fg[i][j] = cos(F_i^supp[j] ⊙ M_i, F_i^supp[j])
The second weight Q_bg[i][j] can be expressed as:
Q_bg[i][j] = cos(F_i^supp[j] ⊙ (1 − M_i), F_i^supp[j])
where cos(·, ·) denotes the cosine similarity and ⊙ denotes element-wise multiplication: multiplying the element at (x, y) of the auxiliary initial feature map of the j-th channel in F_i^supp by the element at (x, y) of the target region identification map M_i yields the target-region feature map of the j-th channel, because M_i takes the value 1 in the target region and 0 in the non-target region.
Step S22: and according to the first weight and the second weight, weighting and summing the target area and the non-target area of the auxiliary initial feature map to obtain the corrected auxiliary initial feature map of the auxiliary initial feature map.
After the first weight of the target region and the second weight of the non-target region of the auxiliary initial feature map of the j-th channel in the auxiliary initial feature map sequence of the i-th frame auxiliary image are obtained, the target region and the non-target region of the j-th channel are weighted and summed to obtain the corrected auxiliary initial feature map of the j-th channel. Denoting the corrected auxiliary initial feature map of the j-th channel as F̂_i^supp[j], it can be expressed as:
F̂_i^supp[j] = Q_fg[i][j] · (F_i^supp[j] ⊙ M_i) + Q_bg[i][j] · (F_i^supp[j] ⊙ (1 − M_i))
where F_i^supp[j] ⊙ (1 − M_i) multiplies the element at (x, y) of the auxiliary initial feature map of the j-th channel by the element at (x, y) of the complement of the target region identification map of the i-th frame auxiliary image, yielding the non-target-region feature map of the j-th channel.
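The two formulas above can be illustrated with a short NumPy sketch; the function names and the single-channel (h, w) layout are illustrative assumptions.

```python
import numpy as np

def cos_sim(a, b, eps=1e-8):
    # Cosine similarity between two feature maps, flattened to vectors.
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def correct_channel(f_supp_j, m):
    """f_supp_j: (h, w) auxiliary initial feature map of channel j;
    m: (h, w) binary target region identification map M_i."""
    fg = f_supp_j * m              # target-region feature map
    bg = f_supp_j * (1 - m)        # non-target-region feature map
    q_fg = cos_sim(fg, f_supp_j)   # first weight Q_fg[i][j]
    q_bg = cos_sim(bg, f_supp_j)   # second weight Q_bg[i][j]
    return q_fg * fg + q_bg * bg   # corrected auxiliary initial feature map
```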
In the above embodiment, the auxiliary initial feature map sequence of each frame of auxiliary image is corrected only according to the prior knowledge of the region segmentation of each frame of auxiliary image. In an alternative embodiment, the auxiliary initial feature map sequence of each frame of auxiliary image may be modified based on the region segmentation priori knowledge of each frame of auxiliary image, and the target initial feature map sequence.
Optionally, the auxiliary initial feature map sequence of each frame of auxiliary image can first be corrected according to the region segmentation prior knowledge of that frame to obtain an initial correction result; the initial correction result is then corrected using the target initial feature map sequence to obtain the corrected auxiliary initial feature map sequence. Specifically:
for each frame of auxiliary image, the target region and the non-target region of the auxiliary initial feature map sequence of that frame are respectively corrected using the region segmentation prior knowledge of that frame, obtaining the initial correction result of the auxiliary initial feature map sequence of that frame.
For an auxiliary initial feature map of a j-th channel in an auxiliary initial feature map sequence of an i-th frame auxiliary image, determining a first weight of a target area of the auxiliary initial feature map of the j-th channel and a second weight of a non-target area of the auxiliary initial feature map of the j-th channel by using area segmentation priori knowledge of the i-th frame auxiliary image; the specific implementation process may refer to the embodiment shown in fig. 2, and will not be described herein.
And according to the first weight and the second weight, weighting and summing the target area and the non-target area of the auxiliary initial feature map of the jth channel to obtain a corrected auxiliary initial feature map of the jth channel, and taking the corrected auxiliary initial feature map as an initial correction result of the auxiliary initial feature map of the jth channel. The specific implementation process may refer to the embodiment shown in fig. 2, and will not be described herein.
And correcting an initial correction result of the auxiliary initial feature map sequence of the frame auxiliary image by using the target initial feature map sequence to obtain a corrected auxiliary initial feature map sequence of the frame auxiliary image. Optionally, an implementation flowchart for correcting an initial correction result of an auxiliary initial feature map sequence of the frame auxiliary image by using the target initial feature map sequence provided in the embodiment of the present application is shown in fig. 3, and may include:
Step S31: a third weight of each auxiliary initial feature map in the auxiliary initial feature map sequence of the frame auxiliary image is determined using the target initial feature map sequence.
The third weight of the auxiliary initial feature map of the j-th channel in the auxiliary initial feature map sequence of the i-th frame auxiliary image measures the importance of that feature map to the target image: the larger the third weight, the more important the feature map of the j-th channel of the i-th frame auxiliary image is to the target image, and hence the higher its reference value.
Optionally, the third weight of the auxiliary initial feature map of the j-th channel may be the similarity between the target initial feature map sequence and the auxiliary initial feature map of the j-th channel of the i-th frame auxiliary image, where the similarity may be measured by the cosine distance or by another distance, such as the Euclidean distance. Taking the cosine distance as an example, a specific implementation of the third weight is:
r_smi[i][j] = max_k cos(F_i^supp[j], F^que[k])
where r_smi[i][j] denotes the third weight of the auxiliary initial feature map of the j-th channel of the i-th frame auxiliary image; F^que[k] denotes the target initial feature map of the k-th channel in the target initial feature map sequence F^que; and cos(F_i^supp[j], F^que[k]) is the cosine similarity between the auxiliary initial feature map of the j-th channel of the i-th frame auxiliary image and the target initial feature map of the k-th channel.
That is, in the embodiment of the application, the maximum cosine similarity between the auxiliary initial feature map F_i^supp[j] of the j-th channel of the i-th frame auxiliary image and the target initial feature maps of all channels in F^que is taken as the third weight of that auxiliary initial feature map.
Step S32: and multiplying the initial correction result of each auxiliary initial feature image in the auxiliary initial feature image sequence of the frame auxiliary image by a corresponding third weight to obtain a corrected auxiliary initial feature image sequence of the frame auxiliary image.
The initial correction result of the auxiliary initial feature map of the j-th channel in the auxiliary initial feature map sequence of the i-th frame auxiliary image is multiplied by the third weight of that map, obtaining the corrected auxiliary initial feature map of the j-th channel of the i-th frame auxiliary image. In this way, the higher the reference value of the i-th frame auxiliary image, the more strongly its feature maps are enhanced.
Based on this, the corrected auxiliary initial feature map of the j-th channel in the auxiliary initial feature map sequence of the i-th frame auxiliary image, denoted F̃_i^supp[j], can be expressed as:
F̃_i^supp[j] = r_smi[i][j] · F̂_i^supp[j]
where F̂_i^supp[j] is the initial correction result of the j-th channel obtained above.
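Building on the cos_sim and correct_channel helpers from the previous sketch, the third weight and the fully corrected channel might be computed as follows; names and layouts remain illustrative assumptions.

```python
def third_weight(f_supp_j, f_que):
    """f_supp_j: (h, w) auxiliary channel; f_que: (c, h, w) target initial
    feature map sequence. Returns the maximum cosine similarity r_smi[i][j]."""
    return max(cos_sim(f_supp_j, f_que[k]) for k in range(f_que.shape[0]))

def fully_corrected_channel(f_supp_j, m, f_que):
    # Initial correction with the prior (Q_fg / Q_bg), then scaling by r_smi.
    return third_weight(f_supp_j, f_que) * correct_channel(f_supp_j, m)
```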
in an alternative embodiment, an implementation manner of fusing the corrected initial auxiliary feature map sequence of each frame of auxiliary image with the target initial feature map sequence may be:
And directly splicing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the target initial feature image sequence to obtain fusion features.
Assuming that the number of frames of auxiliary images is K, and that the feature maps in the target initial feature map sequence and in the auxiliary initial feature map sequence of each frame all have width w, height h, and c channels, directly splicing the corrected auxiliary initial feature map sequences of all frames with the target initial feature map sequence yields a fusion feature of width w × (K + 1), height h, and c channels.
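A minimal sketch of this direct width-wise splicing, with arbitrary example shapes:

```python
import numpy as np

# Width-wise splicing of step S13: each sequence is (c, h, w) and the
# fusion feature is (c, h, w * (K + 1)); shapes here are example values.
f_que = np.random.rand(64, 32, 32)                       # target sequence
supps = [np.random.rand(64, 32, 32) for _ in range(3)]   # K = 3 corrected sequences
fusion = np.concatenate([f_que] + supps, axis=2)
assert fusion.shape == (64, 32, 32 * (3 + 1))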
In order to further improve the image segmentation quality, in an alternative embodiment, the target initial feature sequence may be first self-corrected to obtain a self-corrected target initial feature sequence; and splicing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the self-corrected target initial feature image sequence to obtain fusion features. Specifically, an implementation flowchart for fusing the corrected initial auxiliary feature map sequence of each frame of auxiliary image with the target initial feature map sequence provided in the embodiment of the present application is shown in fig. 4, and may include:
Step S41: and determining fourth weights of all the target initial feature graphs in the target initial feature graph sequence according to the target initial feature graph sequence. The fourth weight of the target initial feature map of the kth channel in the target initial feature map sequence may represent the quality of the target initial feature map of the kth channel, and the larger the fourth weight of the target initial feature map of the kth channel, the higher the quality of the target initial feature map of the kth channel. Alternatively, the fourth weight of the target initial feature map of the kth channel may be obtained by:
calculating the average value of all the target initial feature images in the target initial feature image sequence to obtain an average value feature image; and calculating the similarity between the target initial feature map and the mean feature map of the kth channel, and taking the similarity as a fourth weight of the target initial feature map of the kth channel.
Step S42: multiplying each target initial feature map in the target initial feature map sequence by a corresponding fourth weight to obtain a corrected target initial feature map sequence.
That is, the target initial feature map of the kth channel in the target initial feature map sequence is multiplied by the fourth weight of the target initial feature map of the kth channel, so as to obtain a corrected target initial feature map of the kth channel.
Step S43: and splicing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the corrected target initial feature image sequence to obtain fusion features.
Assuming that the number of frames of auxiliary images is K, and that the feature maps in the target initial feature map sequence and in the auxiliary initial feature map sequence of each frame all have width w, height h, and c channels, splicing the corrected auxiliary initial feature map sequences of all frames with the corrected target initial feature map sequence yields a fusion feature of width w × (K + 1), height h, and c channels.
In the embodiment of the application, besides the correction of the characteristic image of the auxiliary image, the characteristic image of the target image is also subjected to self-correction, so that the segmentation precision of the target image is further improved.
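A sketch of this self-correction (steps S41 and S42), assuming a (c, h, w) target initial feature map sequence and cosine similarity as the similarity measure:

```python
import numpy as np

def self_correct(f_que, eps=1e-8):
    """Steps S41-S42: scale each channel of the (c, h, w) target initial
    feature map sequence by its similarity to the mean feature map."""
    mean_map = f_que.mean(axis=0)                   # mean over all c channels
    flat = f_que.reshape(f_que.shape[0], -1)
    mflat = mean_map.ravel()
    q4 = flat @ mflat / (np.linalg.norm(flat, axis=1)
                         * np.linalg.norm(mflat) + eps)   # fourth weights
    return f_que * q4[:, None, None]                # corrected target sequence
```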
The image segmentation method provided by the embodiment of the application can be realized through a pre-trained image segmentation model, and optionally, an architecture schematic diagram of the image segmentation model provided by the embodiment of the application is shown in fig. 5, and may include:
an initial feature extraction module 51, a correction module 52, a fusion module 53 and a segmentation module 54; wherein,
The initial feature extraction module 51 is configured to obtain a target initial feature map sequence of the target image and an auxiliary initial feature map sequence of at least one auxiliary image.
The correction module 52 is configured to correct the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the prior knowledge of region segmentation of each frame of auxiliary image, so as to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image. The specific implementation can be referred to the foregoing embodiments, and will not be repeated here.
The fusion module 53 is configured to fuse the corrected initial auxiliary feature map sequence of each frame of auxiliary image with the target initial feature map sequence to obtain a fusion feature. The specific implementation can be referred to the foregoing embodiments, and will not be repeated here.
The segmentation module 54 is configured to determine a region segmentation result of the target image using the fusion feature. Alternatively, the segmentation module 54 may perform a dimension transformation on the fusion feature, and determine a region segmentation result of the target image using the dimension transformed fusion feature. The specific dimension transformation mode can refer to the existing scheme, and is not described in detail here.
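The four modules of Fig. 5 can be pictured as a composition of callables; the sketch below is purely structural, and the module internals (e.g., trained sub-networks) are assumptions, not elements taken from the patent.

```python
# A structural sketch of the Fig. 5 pipeline; all names are hypothetical.
class ImageSegmentationModel:
    def __init__(self, extract, correct, fuse, segment):
        self.extract = extract    # initial feature extraction module 51
        self.correct = correct    # correction module 52
        self.fuse = fuse          # fusion module 53
        self.segment = segment    # segmentation module 54

    def __call__(self, target_image, aux_images, aux_priors):
        f_que = self.extract(target_image)
        corrected = [self.correct(self.extract(a), p, f_que)
                     for a, p in zip(aux_images, aux_priors)]
        return self.segment(self.fuse(f_que, corrected))
```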
An implementation flowchart for training an image segmentation model provided by an embodiment of the present application is shown in fig. 6, and may include:
step S61: and acquiring a sample initial feature map sequence of the sample image and an auxiliary initial feature map sequence of at least one auxiliary image through the image segmentation model.
The sample image and the auxiliary image are images belonging to the same field and having the same segmentation target, but the auxiliary image and the target image are different, for example, are both abdomen CT images, are both images having vehicles, are both images having faces, and the like.
Step S62: and correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image through the image segmentation model to obtain the corrected auxiliary initial feature map sequence of each frame of auxiliary image.
The specific implementation manner of correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image can refer to the foregoing embodiment, and will not be repeated here.
Step S63: and fusing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the sample initial feature image sequence through an image segmentation model to obtain a first fusion feature.
Step S64: and determining a region segmentation result of the sample image by using the first fusion characteristic through the image segmentation model.
Step S65: and updating parameters of the image segmentation model by taking the image segmentation model at least with the region segmentation priori knowledge of which the region segmentation result of the sample image approaches to that of the sample image as a target.
In the embodiment of the application, the parameters of the image segmentation model can be updated only by taking the prior knowledge of the region segmentation of the sample image, which is close to the region segmentation of the sample image, as a target. Specifically, the difference between the region segmentation result of the sample image and the region segmentation priori knowledge of the sample image can be calculated through a preset loss function, and the parameters of the image segmentation model are updated with the aim of minimizing the difference. Specific updating modes can refer to existing schemes, and are not described in detail herein.
It should be noted that models are generally trained in batches, that is, multiple frames of sample images are input to the image segmentation model at a time. For each frame of sample image in a batch, region segmentation is performed as in steps S61 to S64, and accordingly a specific implementation of step S65 may be: updating the parameters of the image segmentation model with at least the goal of making the region segmentation result of each frame of sample image approach the region segmentation prior knowledge corresponding to that frame. For specific update procedures, reference may be made to existing schemes, which are not detailed here.
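A hedged sketch of one training iteration (steps S61 to S65), assuming PyTorch, a model that outputs per-pixel probabilities, and binary cross-entropy as the preset loss function; the patent does not fix a particular loss or optimizer.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, samples, aux_images, aux_priors, sample_gts):
    optimizer.zero_grad()
    loss = 0.0
    for sample, gt in zip(samples, sample_gts):        # batch of sample images
        pred = model(sample, aux_images, aux_priors)   # steps S61-S64
        loss = loss + F.binary_cross_entropy(pred, gt) # result vs. prior knowledge
    loss.backward()                                    # minimize the difference
    optimizer.step()
    return float(loss)
```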
In order to further improve the generalization capability of the image segmentation model, another implementation flowchart for training the image segmentation model provided by the embodiment of the present application is shown in fig. 7, and may include:
Step S71: and acquiring a sample initial feature map sequence of the sample image and an auxiliary initial feature map sequence of at least one auxiliary image through the image segmentation model.
Step S72: and correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image through the image segmentation model to obtain the corrected auxiliary initial feature map sequence of each frame of auxiliary image.
Step S73: and fusing the corrected initial auxiliary feature image sequence of each frame of auxiliary image with the sample initial feature image sequence through an image segmentation model to obtain a first fusion feature.
Step S74: and determining a region segmentation result of the sample image by using the first fusion characteristic through the image segmentation model.
The specific implementation process of step S71 to step S74 is the same as step S61 to step S64, and will not be repeated here.
Step S75: and correcting the sample initial feature map sequence through the image segmentation model at least according to the region segmentation result of the sample image to obtain a corrected sample initial feature map sequence.
Alternatively, the target region and the non-target region of the sample initial feature map sequence of the sample image may be respectively corrected using the region segmentation result of the sample image. The implementation principle is the same as that described above for correcting, for each frame of auxiliary image, the target region and the non-target region of its auxiliary initial feature map sequence using the region segmentation prior knowledge of that frame, and is not detailed here.
Or alternatively
And correcting the sample initial feature map sequence of the sample image according to the region segmentation result of the sample image and the auxiliary initial feature sequence of the i-th frame auxiliary image to obtain a corrected sample initial feature map sequence of the sample image corresponding to the i-th frame auxiliary image. The specific implementation principle can refer to the specific implementation principle that the auxiliary initial feature map sequence of each frame of auxiliary image is corrected according to the prior knowledge of the region segmentation of each frame of auxiliary image and the target initial feature map sequence, and will not be described in detail herein.
In a preferred embodiment, step S72 and step S75 adopt symmetric algorithms. That is, if in step S72 the image segmentation model corrects the auxiliary initial feature map sequence of each frame of auxiliary image only according to the region segmentation prior knowledge of that frame, then in step S75 the sample initial feature map sequence is corrected only according to the region segmentation result of the sample image; if in step S72 the image segmentation model corrects the auxiliary initial feature map sequence of each frame according to both the target initial feature map sequence and the region segmentation prior knowledge of that frame, then in step S75 the sample initial feature map sequence is corrected according to both the auxiliary initial feature map sequence of the i-th frame auxiliary image and the region segmentation result of the sample image.
Step S76: and fusing the corrected sample initial feature image sequence with the auxiliary initial feature image sequence of the frame auxiliary image for each frame auxiliary image through the image segmentation model to obtain a second fusion feature corresponding to the frame auxiliary image.
And for the ith frame auxiliary image, obtaining a second fusion feature corresponding to the ith frame auxiliary image by fusing the corrected sample initial feature map sequence corresponding to the ith frame auxiliary image with the auxiliary initial feature map sequence of the ith frame auxiliary image.
Step S77: and determining the region segmentation result of each frame of auxiliary image by using the second fusion characteristic corresponding to each frame of auxiliary image through the image segmentation model.
And for the ith frame auxiliary image, determining a region segmentation result of the ith frame auxiliary image by using a second fusion characteristic corresponding to the ith frame auxiliary image.
Step S78: and updating parameters of the image segmentation model by taking the region segmentation result of the sample image approaching to the region segmentation priori knowledge of the sample image and the region segmentation result of each frame of auxiliary image approaching to the segmentation priori knowledge of the frame of auxiliary image as targets through the image segmentation model.
Specifically, a first difference between the region segmentation result of the sample image and the region segmentation prior knowledge of the sample image can be calculated by a preset first loss function, and a second difference between the region segmentation result of each frame of auxiliary image and the region segmentation prior knowledge of that frame can be calculated by a preset second loss function; the parameters of the image segmentation model are then updated with the aim of minimizing the first difference and each second difference. For specific update procedures, reference may be made to existing schemes, which are not detailed here.
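The two-term objective of step S78 might look as follows; the loss choice and the balancing factor lam are assumptions, since the patent only requires that the first difference and each second difference be minimized.

```python
import torch.nn.functional as F

def bidirectional_loss(sample_pred, sample_prior, aux_preds, aux_priors, lam=1.0):
    first = F.binary_cross_entropy(sample_pred, sample_prior)  # first difference
    second = sum(F.binary_cross_entropy(p, g)                  # second differences
                 for p, g in zip(aux_preds, aux_priors))
    return first + lam * second   # minimized jointly during training
```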
In the image segmentation model training process, the auxiliary images support the segmentation of the sample image, and the sample image in turn supports the segmentation of the auxiliary images, so that information flows back from the target image to the auxiliary images, further improving the generalization capability of the model.
Optionally, when multiple frames of sample images are input to the image segmentation model, steps S71 to S77 are executed for each frame, and accordingly step S78 may specifically be: updating the parameters of the image segmentation model with the goal of making the region segmentation result of each frame of sample image approach the region segmentation prior knowledge corresponding to that frame. For specific update procedures, reference may be made to existing schemes, which are not detailed here.
FIGS. 8a-8c show an example framework for training an image segmentation model according to an embodiment of the present application, in which the target image is a CT image. The main functional modules include:
The initial feature extraction module is used for carrying out feature extraction on the sample image and the auxiliary images to obtain a sample initial feature map sequence of the sample image and auxiliary initial feature map sequences of the auxiliary images.
The AKSU module, configured to correct the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation prior knowledge of that frame, obtaining a corrected auxiliary initial feature map sequence for each frame, and to fuse the corrected auxiliary initial feature map sequence of each frame with the sample initial feature map sequence to obtain the first fusion feature.
The first segmentation module is used for determining the region segmentation result of the sample image by using the first fusion feature.
The SGAU module is used for correcting the sample initial feature map sequence at least according to the region segmentation result of the sample image to obtain a corrected sample initial feature map sequence, and, for each frame of auxiliary image, fusing the corrected sample initial feature map sequence with the auxiliary initial feature map sequence of that frame of auxiliary image to obtain a second fusion feature corresponding to that frame of auxiliary image.
The second segmentation module is used for determining the region segmentation result of each frame of auxiliary image by using the second fusion feature corresponding to each frame of auxiliary image.
The internal logic of the AKSU module is shown in fig. 8b. f_self denotes self-correcting the sample initial feature map sequence according to the sample initial feature map sequence itself, so as to determine the weight of each sample initial feature map in the sample initial feature map sequence (denoted as Q_self). f_query denotes multiplying each sample initial feature map in the sample initial feature map sequence by its corresponding weight to obtain a self-corrected sample initial feature map sequence. f_cos denotes determining the weight of each auxiliary initial feature map in the auxiliary initial feature map sequence of each frame of auxiliary image by using the sample initial feature map sequence (denoted as r_smi). f_bgcos denotes determining the weight of the non-target region of each auxiliary initial feature map by using the region segmentation priori knowledge of each frame of auxiliary image (denoted as Q_bg). f_fgcos denotes determining the weight of the target region of each auxiliary initial feature map by using the region segmentation priori knowledge of each frame of auxiliary image (denoted as Q_fg). f_supp denotes performing a weighted sum of the target region and the non-target region of each auxiliary initial feature map according to Q_fg and Q_bg to obtain an initial correction result of the auxiliary initial feature map sequence, and multiplying the initial correction result by r_smi to obtain the corrected auxiliary initial feature map sequence. For the specific implementation logic of this module, reference may be made to the foregoing description, which is not repeated here.
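To make the data flow of fig. 8b concrete, here is a minimal runnable sketch assuming a PyTorch implementation; the (C, H, W) tensor layout, the sigmoid pooling used to produce the weights, and cosine similarity as the similarity measure are all our assumptions, since the patent does not fix these choices.

```python
import torch
import torch.nn.functional as F

def aksu(sample_feats, aux_feats, aux_prior):
    """sample_feats, aux_feats: (C, H, W); aux_prior: (H, W) mask in [0, 1]."""
    # f_self / f_query: self-correct the sample sequence with per-map
    # weights Q_self derived from the sequence itself.
    q_self = torch.sigmoid(sample_feats.mean(dim=(1, 2)))          # (C,)
    sample_corrected = sample_feats * q_self[:, None, None]

    # f_cos: per-map weights r_smi for the auxiliary sequence, here the
    # cosine similarity between corresponding sample and auxiliary maps.
    r_smi = F.cosine_similarity(sample_feats.flatten(1),
                                aux_feats.flatten(1), dim=1)       # (C,)

    # f_fgcos / f_bgcos: weights Q_fg and Q_bg for the target and
    # non-target regions, from the frame's priori-knowledge mask.
    fg, bg = aux_prior, 1.0 - aux_prior                            # (H, W)
    q_fg = torch.sigmoid((aux_feats * fg).mean(dim=(1, 2)))        # (C,)
    q_bg = torch.sigmoid((aux_feats * bg).mean(dim=(1, 2)))        # (C,)

    # f_supp: weighted sum of target and non-target regions gives the
    # initial correction result, which is then scaled by r_smi.
    initial = (aux_feats * fg * q_fg[:, None, None]
               + aux_feats * bg * q_bg[:, None, None])
    aux_corrected = initial * r_smi[:, None, None]
    return sample_corrected, aux_corrected
```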
The internal logic of the SGAU module is shown in fig. 8c, and mirrors that of the AKSU module with the roles of the two image types exchanged. f_self denotes that, for each frame of auxiliary image, the auxiliary initial feature map sequence of that frame of auxiliary image is self-corrected according to itself to determine the weight of each auxiliary initial feature map in the sequence (denoted as Q_self). f_query denotes multiplying each auxiliary initial feature map in the auxiliary initial feature map sequence of that frame of auxiliary image by its corresponding weight to obtain a self-corrected auxiliary initial feature map sequence. f_cos denotes determining the weight of each sample initial feature map in the sample initial feature map sequence by using the auxiliary initial feature map sequence of that frame of auxiliary image (denoted as r_smi). f_bgcos denotes determining the weight of the non-target region of each sample initial feature map by using the region segmentation result of the sample image (denoted as Q_bg). f_fgcos denotes determining the weight of the target region of each sample initial feature map by using the region segmentation result of the sample image (denoted as Q_fg). f_supp denotes performing a weighted sum of the target region and the non-target region of each sample initial feature map according to Q_fg and Q_bg to obtain an initial correction result of the sample initial feature map sequence, and multiplying the initial correction result by r_smi to obtain the corrected sample initial feature map sequence. For the specific implementation logic of this module, reference may be made to the foregoing description, which is not repeated here.
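Under the same assumptions as the AKSU sketch above (and reusing its imports), the role swap of fig. 8c can be expressed as follows; the shapes, the sigmoid pooling, and the similarity measure remain illustrative, and sample_seg is the (H, W) soft segmentation result of the sample image.

```python
def sgau(aux_feats, sample_feats, sample_seg):
    """aux_feats, sample_feats: (C, H, W); sample_seg: (H, W) in [0, 1]."""
    # f_self / f_query: self-correct the auxiliary sequence.
    q_self = torch.sigmoid(aux_feats.mean(dim=(1, 2)))
    aux_corrected = aux_feats * q_self[:, None, None]

    # f_cos: weight each sample map by its similarity to the auxiliary maps.
    r_smi = F.cosine_similarity(aux_feats.flatten(1),
                                sample_feats.flatten(1), dim=1)

    # f_fgcos / f_bgcos: region weights from the sample's segmentation result.
    fg, bg = sample_seg, 1.0 - sample_seg
    q_fg = torch.sigmoid((sample_feats * fg).mean(dim=(1, 2)))
    q_bg = torch.sigmoid((sample_feats * bg).mean(dim=(1, 2)))

    # f_supp: weighted regional sum, then scale by r_smi.
    initial = (sample_feats * fg * q_fg[:, None, None]
               + sample_feats * bg * q_bg[:, None, None])
    sample_corrected = initial * r_smi[:, None, None]
    return aux_corrected, sample_corrected
```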
Corresponding to the method embodiment, the embodiment of the present application further provides an image segmentation apparatus, and a schematic structural diagram of the image segmentation apparatus provided in the embodiment of the present application is shown in fig. 9, which may include:
the feature extraction processing module 91, the correction processing module 92, the fusion processing module 93 and the segmentation processing module 94; wherein,
The feature extraction processing module 91 is configured to obtain a target initial feature map sequence of the target image and an auxiliary initial feature map sequence of at least one auxiliary image;
The correction processing module 92 is configured to correct the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image, so as to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
the fusion processing module 93 is configured to fuse the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the target initial feature map sequence to obtain a fusion feature;
the segmentation processing module 94 is configured to determine a region segmentation result of the target image using the fusion feature.
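To show how these four modules could chain together at inference time, here is a minimal orchestration sketch; the callables and their signatures are our assumptions, with the module internals corresponding to the sketches given above.

```python
def segment_target(target_image, aux_images, aux_priors,
                   extract, correct, fuse, segment):
    """extract/correct/fuse/segment play the roles of modules 91-94."""
    target_feats = extract(target_image)                  # module 91
    aux_feats = [extract(a) for a in aux_images]          # module 91
    corrected = [correct(f, p, target_feats)              # module 92
                 for f, p in zip(aux_feats, aux_priors)]
    fused = fuse(corrected, target_feats)                 # module 93
    return segment(fused)                                 # module 94
```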
According to the image segmentation device provided by the embodiment of the application, the auxiliary initial feature map sequence of each frame of auxiliary image is corrected according to the region segmentation priori knowledge of that frame of auxiliary image, and the corrected auxiliary initial feature map sequence of each frame of auxiliary image is fused with the target initial feature map sequence to obtain a fusion feature that incorporates the region segmentation priori knowledge of the auxiliary images. Image segmentation is then performed using the fusion feature, so that the region segmentation priori knowledge of the auxiliary images assists the region segmentation of the target image, thereby improving image segmentation precision.
In an alternative embodiment, the correction processing module 92 may include:
A first correction processing unit, used for, for each frame of auxiliary image, respectively correcting the target region and the non-target region of the auxiliary initial feature map sequence of that frame of auxiliary image by using the region segmentation priori knowledge of that frame of auxiliary image.
In an alternative embodiment, the first correction processing unit may specifically be configured to:
For each auxiliary initial feature map in the auxiliary initial feature map sequence of the frame auxiliary image, determining a first weight of a target region of the auxiliary initial feature map and a second weight of a non-target region of the auxiliary initial feature map by using region segmentation priori knowledge of the frame auxiliary image;
And according to the first weight and the second weight, performing weighted summation on the target region and the non-target region of the auxiliary initial feature map to obtain a corrected auxiliary initial feature map.
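As a sketch of this weighted correction in notation of our own choosing (the patent fixes none): let $M$ be the region segmentation priori-knowledge mask of the frame of auxiliary image, $F_k$ the $k$-th auxiliary initial feature map, and $Q_{fg}$, $Q_{bg}$ the first and second weights; then one plausible form of the corrected map is

$$\tilde{F}_k = Q_{fg}\,(M \odot F_k) + Q_{bg}\,\big((1-M) \odot F_k\big),$$

where $\odot$ denotes element-wise multiplication, so the target region ($M = 1$) and the non-target region ($M = 0$) are re-weighted separately and then summed.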
In an alternative embodiment, the correction processing module 92 may include:
And the second correction processing unit is used for correcting the auxiliary initial characteristic image sequence of each frame of auxiliary image according to the region segmentation priori knowledge of each frame of auxiliary image and the target initial characteristic image sequence.
In an alternative embodiment, the second correction processing unit may specifically be configured to:
For each frame of auxiliary image, correcting the target region and the non-target region of the auxiliary initial feature map sequence of the frame of auxiliary image by using the region segmentation priori knowledge of the frame of auxiliary image to obtain an initial correction result of the auxiliary initial feature map sequence of the frame of auxiliary image;
And correcting an initial correction result of the auxiliary initial feature map sequence of the frame auxiliary image by using the target initial feature map sequence to obtain a corrected auxiliary initial feature map sequence of the frame auxiliary image.
In an alternative embodiment, when correcting the initial correction result of the auxiliary initial feature map sequence of the frame auxiliary image by using the target initial feature map sequence, the second correction processing unit may specifically be configured to:
Determining a third weight of each auxiliary initial feature map in the auxiliary initial feature map sequence of the frame auxiliary image by using the target initial feature map sequence;
And multiplying the initial correction result of each auxiliary initial feature map in the auxiliary initial feature map sequence of the frame auxiliary image by the third weight corresponding to that auxiliary initial feature map to obtain a corrected auxiliary initial feature map sequence of the frame auxiliary image.
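In the same assumed notation as above: if $\tilde{F}_k$ is the initial correction result of the $k$-th auxiliary initial feature map and $r_k$ its third weight (which, per the description of fig. 8b, may be a similarity score between the target and auxiliary initial feature map sequences), the corrected map is

$$\hat{F}_k = r_k \, \tilde{F}_k .$$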
In an alternative embodiment, the fusion processing module 93 may specifically be configured to:
Splicing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the target initial feature map sequence to obtain the fusion feature;
Or alternatively
Determining a fourth weight of each target initial feature map in the target initial feature map sequence according to the target initial feature map sequence; multiplying each target initial feature map in the target initial feature map sequence by a corresponding fourth weight to obtain a corrected target initial feature map sequence; and splicing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the corrected target initial feature map sequence to obtain the fusion feature.
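A minimal sketch of the two splicing variants, reusing the tensor assumptions above; torch.cat along the map dimension stands in for "splicing", which is our reading of the fusion step, and the sigmoid pooling for the fourth weights is again an assumed realization.

```python
import torch

def fuse_simple(aux_corrected_list, target_feats):
    # Variant 1: splice every corrected auxiliary sequence with the target
    # initial feature map sequence along the map dimension.
    return torch.cat(aux_corrected_list + [target_feats], dim=0)

def fuse_with_self_correction(aux_corrected_list, target_feats):
    # Variant 2: first self-correct the target sequence with a fourth
    # weight per map, then splice as in variant 1.
    q4 = torch.sigmoid(target_feats.mean(dim=(1, 2)))   # fourth weights, (C,)
    target_corrected = target_feats * q4[:, None, None]
    return torch.cat(aux_corrected_list + [target_corrected], dim=0)
```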
In an alternative embodiment, the image segmentation apparatus is implemented by an image segmentation model, and the image segmentation apparatus may further include a training module, specifically configured to:
acquiring a sample initial feature map sequence of a sample image and an auxiliary initial feature map sequence of at least one auxiliary image through the image segmentation model;
Correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image through the image segmentation model to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
Fusing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the sample initial feature map sequence through the image segmentation model to obtain a first fusion feature;
determining a region segmentation result of the sample image by using the first fusion feature through the image segmentation model;
And updating, through the image segmentation model, parameters of the image segmentation model at least with the target that the region segmentation result of the sample image approaches the region segmentation priori knowledge of the sample image.
In an alternative embodiment, the training module may be further configured to:
Correcting the sample initial feature map sequence through the image segmentation model at least according to the region segmentation result of the sample image to obtain a corrected sample initial feature map sequence;
for each frame of auxiliary image, obtaining a second fusion feature corresponding to the frame of auxiliary image by fusing the corrected sample initial feature map sequence with the auxiliary initial feature map sequence of the frame of auxiliary image;
Determining a region segmentation result of each frame of auxiliary image by using the second fusion feature corresponding to each frame of auxiliary image through the image segmentation model;
The updating, through the image segmentation model, of parameters of the image segmentation model at least with the target that the region segmentation result of the sample image approaches the region segmentation priori knowledge of the sample image includes:
and updating, through the image segmentation model, parameters of the image segmentation model with the targets that the region segmentation result of the sample image approaches the region segmentation priori knowledge of the sample image and that the region segmentation result of each frame of auxiliary image approaches the region segmentation priori knowledge of that frame of auxiliary image.
Corresponding to the method embodiment, the application further provides an electronic device, and a schematic structural diagram of the electronic device is shown in fig. 10, which may include:
a memory 101 for storing at least one set of instructions;
a processor 102, configured to call the instruction set in the memory and, by executing the instruction set, to perform the following:
acquiring a target initial feature map sequence of a target image and an auxiliary initial feature map sequence of at least one auxiliary image;
Correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
fusing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the target initial feature map sequence to obtain a fusion feature;
and determining a region segmentation result of the target image by using the fusion feature.
Optionally, for refinements and extended functions of the instruction set, reference may be made to the corresponding description above.
The embodiment of the application also provides a readable storage medium, on which a computer program is stored, which when being executed by a processor, realizes the following steps:
acquiring a target initial feature map sequence of a target image and an auxiliary initial feature map sequence of at least one auxiliary image;
Correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
fusing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the target initial feature map sequence to obtain a fusion feature;
and determining a region segmentation result of the target image by using the fusion feature.
Optionally, for refinements and extended functions of the computer program, reference may be made to the corresponding description above.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections via some interfaces, devices or units, and may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
It should be understood that in the embodiments of the present application, the claims, the various embodiments, and the features may be combined with each other, so as to solve the foregoing technical problems.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present application, essentially or the part contributing to the prior art or a part of the technical solutions, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An image segmentation method, comprising:
acquiring a target initial feature map sequence of a target image and an auxiliary initial feature map sequence of at least one auxiliary image;
Correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
Fusing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the target initial feature map sequence to obtain a fusion feature, so that the region segmentation priori knowledge of each frame of auxiliary image is fused into the fusion feature;
and determining a region segmentation result of the target image by using the fusion feature.
2. The method of claim 1, wherein the correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image comprises:
And for each frame of auxiliary image, respectively correcting the target region and the non-target region of the auxiliary initial feature map sequence of the frame of auxiliary image by using the region segmentation priori knowledge of the frame of auxiliary image.
3. The method of claim 2, wherein the correcting the target region and the non-target region of the auxiliary initial feature map sequence of the frame auxiliary image using the region segmentation priori knowledge of the frame auxiliary image comprises:
For each auxiliary initial feature map in the auxiliary initial feature map sequence of the frame auxiliary image, determining a first weight of a target region of the auxiliary initial feature map and a second weight of a non-target region of the auxiliary initial feature map by using region segmentation priori knowledge of the frame auxiliary image;
And according to the first weight and the second weight, performing weighted summation on the target region and the non-target region of the auxiliary initial feature map to obtain a corrected auxiliary initial feature map.
4. The method of claim 1, wherein the correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image comprises:
And correcting the auxiliary initial feature map sequence of each frame of auxiliary image according to the region segmentation priori knowledge of each frame of auxiliary image and the target initial feature map sequence.
5. The method of claim 4, wherein the modifying the auxiliary initial feature map sequence for each frame of auxiliary image based on the prior knowledge of the region segmentation of each frame of auxiliary image and the target initial feature map sequence comprises:
For each frame of auxiliary image, correcting the target region and the non-target region of the auxiliary initial feature map sequence of the frame of auxiliary image by using the region segmentation priori knowledge of the frame of auxiliary image to obtain an initial correction result of the auxiliary initial feature map sequence of the frame of auxiliary image;
And correcting an initial correction result of the auxiliary initial feature map sequence of the frame auxiliary image by using the target initial feature map sequence to obtain a corrected auxiliary initial feature map sequence of the frame auxiliary image.
6. The method of claim 5, wherein correcting the initial correction result of the auxiliary initial feature map sequence of the frame auxiliary image using the target initial feature map sequence comprises:
Determining a third weight of each auxiliary initial feature map in the auxiliary initial feature map sequence of the frame auxiliary image by using the target initial feature map sequence;
And multiplying the initial correction result of each auxiliary initial feature map in the auxiliary initial feature map sequence of the frame auxiliary image by a third weight corresponding to the auxiliary initial feature map to obtain a corrected auxiliary initial feature map sequence of the frame auxiliary image.
7. The method of any one of claims 1-6, wherein the fusing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the target initial feature map sequence comprises:
Splicing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the target initial feature map sequence to obtain the fusion feature;
Or alternatively
Determining a fourth weight of each target initial feature map in the target initial feature map sequence according to the target initial feature map sequence; multiplying each target initial feature map in the target initial feature map sequence by a corresponding fourth weight to obtain a corrected target initial feature map sequence; and splicing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the corrected target initial feature map sequence to obtain the fusion feature.
8. The method according to any one of claims 1-6, wherein the image segmentation method is implemented by an image segmentation model, and the image segmentation model is trained by:
acquiring a sample initial feature map sequence of a sample image and an auxiliary initial feature map sequence of at least one auxiliary image through the image segmentation model;
Correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image through the image segmentation model to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
Fusing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the sample initial feature map sequence through the image segmentation model to obtain a first fusion feature;
determining a region segmentation result of the sample image by using the first fusion feature through the image segmentation model;
And updating, through the image segmentation model, parameters of the image segmentation model at least with the target that the region segmentation result of the sample image approaches the region segmentation priori knowledge of the sample image.
9. The method of claim 8, further comprising:
Correcting the sample initial feature map sequence through the image segmentation model at least according to the region segmentation result of the sample image to obtain a corrected sample initial feature map sequence;
for each frame of auxiliary image, obtaining a second fusion feature corresponding to the frame of auxiliary image by fusing the corrected sample initial feature map sequence with the auxiliary initial feature map sequence of the frame of auxiliary image;
Determining a region segmentation result of each frame of auxiliary image by using the second fusion feature corresponding to each frame of auxiliary image through the image segmentation model;
The updating, through the image segmentation model, of parameters of the image segmentation model at least with the target that the region segmentation result of the sample image approaches the region segmentation priori knowledge of the sample image includes:
and updating, through the image segmentation model, parameters of the image segmentation model with the targets that the region segmentation result of the sample image approaches the region segmentation priori knowledge of the sample image and that the region segmentation result of each frame of auxiliary image approaches the region segmentation priori knowledge of that frame of auxiliary image.
10. An image segmentation apparatus comprising:
the feature extraction processing module is used for acquiring a target initial feature map sequence of a target image and an auxiliary initial feature map sequence of at least one frame of auxiliary image;
the correction processing module is used for correcting the auxiliary initial feature map sequence of each frame of auxiliary image at least according to the region segmentation priori knowledge of each frame of auxiliary image to obtain a corrected auxiliary initial feature map sequence of each frame of auxiliary image;
The fusion processing module is used for fusing the corrected auxiliary initial feature map sequence of each frame of auxiliary image with the target initial feature map sequence to obtain a fusion feature, so that the region segmentation priori knowledge of each frame of auxiliary image is fused into the fusion feature;
and the segmentation processing module is used for determining a region segmentation result of the target image by using the fusion feature.