CN117809092A - Medical image processing method and device, electronic equipment and storage medium - Google Patents

Medical image processing method and device, electronic equipment and storage medium

Info

Publication number
CN117809092A
CN117809092A CN202311825448.2A
Authority
CN
China
Prior art keywords
image
correction
initial
feature map
medical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311825448.2A
Other languages
Chinese (zh)
Inventor
沈柄志
常璐璠
刘浩
丁佳
吕晨翀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yizhun Medical Technology Co ltd
Original Assignee
Beijing Yizhun Medical Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yizhun Medical Technology Co ltd
Priority to CN202311825448.2A
Publication of CN117809092A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The disclosure provides a medical image processing method, a medical image processing device, electronic equipment and a storage medium. The method comprises the following steps: acquiring an initial medical image; preprocessing the initial medical image to obtain a target medical image; determining a region of interest image based on a user's frame selection operation on the target medical image; inputting the region of interest image into a U²-Net model to obtain an initial annotation image; acquiring correction information for correcting the initial annotation image, wherein the correction information comprises a correction position and a correction type; determining a correction influence radius according to the correction position; encoding the initial annotation image according to the correction position, the correction type and the correction influence radius to obtain a correction feature map; and correcting the initial annotation image according to the correction feature map and an image segmentation model to obtain a target annotation image. By applying the method, the accuracy and efficiency of labeling medical images are improved, and a better labeling effect is achieved.

Description

Medical image processing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of image processing, and in particular relates to a medical image processing method, a medical image processing device, electronic equipment and a storage medium.
Background
At present, deep learning and image processing technologies are widely used. During image processing, a user guides a computer to perform target extraction by clicking different areas of an image, thereby realizing an image labeling function. However, existing image processing technologies are usually applied to labeling natural images, where they perform well. Medical images, in contrast, are characterized by low resolution, varying imaging protocols, lack of color, and blurred boundaries, so methods designed for natural images perform poorly on them. Moreover, the preset correction influence range is fixed during processing, which imposes certain limitations: the precision and efficiency of medical image processing are low, and complex segmentation scenes are difficult to handle.
Disclosure of Invention
The present disclosure provides a medical image processing method, apparatus, electronic device, and storage medium, so as to at least solve the above technical problems in the prior art.
According to a first aspect of the present disclosure, there is provided a method of processing a medical image, the method comprising: acquiring an initial medical image; preprocessing the initial medical image to obtain a target medical image; determining a region of interest image based on a box selection operation for the target medical image; inputting the region of interest image into a U²-Net model to obtain an initial annotation image; acquiring correction information for correcting the initial annotation image, wherein the correction information includes a correction position and a correction type; determining a correction influence radius according to the correction position; encoding the initial annotation image according to the correction position, the correction type and the correction influence radius to obtain a correction feature map; and correcting the initial annotation image according to the correction feature map and the image segmentation model to obtain a target annotation image.
In an embodiment, the preprocessing the initial medical image to obtain a target medical image includes: converting the data format of the initial medical image into a preset data format; and adjusting the window width and the window level of the initial medical image according to the target object to be segmented to obtain a target medical image.
In an embodiment, the obtaining the correction information for correcting the initial labeling image includes: determining a first mask and a second mask according to the initial annotation image and a standard annotation image, wherein the first mask represents an area which is under-segmented into the target object in the initial annotation image, and the second mask represents an area which is over-segmented into the target object in the initial annotation image; performing distance transformation on the first mask and the second mask to obtain a first distance between each pixel point and the first mask and a second distance between each pixel point and the second mask, wherein the first distance is the nearest distance between the pixel point and the first mask, and the second distance is the nearest distance between the pixel point and the second mask; sorting a plurality of first distances and a plurality of second distances obtained for a plurality of pixel points, and determining a maximum distance; and determining the correction position and the correction type according to the maximum distance.
In an embodiment, the determining the correction impact radius according to the correction position includes: performing edge detection on the initial annotation image, and determining an edge area binary image of the initial annotation image; labeling and performing distance transformation on the edge region binary image to obtain a distance transformation image; determining the edge distance corresponding to the correction position according to the distance conversion image; and determining the correction influence radius corresponding to the correction position according to the mapping relation between the edge distance and the correction influence radius.
In an embodiment, the encoding the initial labeling image according to the correction position, the correction type and the correction influence radius to obtain a correction feature map includes: determining a distance matrix corresponding to the correction position and the correction type according to the initial annotation image; thresholding is carried out on the distance matrix according to the correction influence radius, so that an initial feature map is obtained; performing convolution processing on the initial feature map through a convolution neural network; coding the initial feature map after convolution processing through an attention module to obtain a first weight matrix; and carrying out feature enhancement processing on the initial feature map after convolution processing according to the first weight matrix to obtain a corrected feature map.
In an embodiment, the correcting the initial labeling image according to the correction feature map and the image segmentation model to obtain the target labeling image includes: inputting the initial annotation image into the image segmentation model, and carrying out first feature extraction on the initial annotation image through a first feature extraction unit of the image segmentation model to obtain a first feature map; fusing the first feature map and the correction feature map to obtain a fused feature map; encoding the fused feature map through an attention module to obtain a second weight matrix; performing feature enhancement processing on the first feature map according to the second weight matrix to obtain a second feature map; and processing the second feature map through other units of the image segmentation model to obtain a target labeling image.
In an embodiment, the method further comprises: activating the target annotation image through an activation function, and determining a probability value corresponding to each pixel point in the target annotation image; and comparing the probability value with a preset probability threshold value, and determining the pixel point as a foreground or a background according to a comparison result.
According to a second aspect of the present disclosure, there is provided a medical image processing apparatus, the apparatus comprising: a first acquisition module, used for acquiring an initial medical image; a processing module, for preprocessing the initial medical image to obtain a target medical image; a first determining module, configured to determine a region of interest image based on a frame selection operation of the target medical image by a user; a first segmentation module, for inputting the region of interest image into a U²-Net model to obtain an initial annotation image; a second acquisition module, used for acquiring correction information for correcting the initial annotation image, wherein the correction information comprises a correction position and a correction type; a second determining module, used for determining a correction influence radius according to the correction position; the second determining module is further configured to encode the initial annotation image according to the correction position, the correction type and the correction influence radius, so as to obtain a correction feature map; and a second segmentation module, used for correcting the initial annotation image according to the correction feature map and the image segmentation model to obtain a target annotation image.
In one embodiment, the processing module includes: the conversion sub-module is used for converting the data format of the initial medical image into a preset data format; and the adjusting sub-module is used for adjusting the window width and the window level of the initial medical image according to the target object to be segmented to obtain a target medical image.
In an embodiment, the second acquisition module includes: a first determining submodule, used for determining a first mask and a second mask according to the initial annotation image and the standard annotation image, wherein the first mask represents a region which is under-segmented into a target object in the initial annotation image, and the second mask represents a region which is over-segmented into the target object in the initial annotation image; a first transformation submodule, used for carrying out distance transformation on the first mask and the second mask to obtain a first distance between each pixel point and the first mask and a second distance between each pixel point and the second mask, wherein the first distance is the nearest distance between the pixel point and the first mask, and the second distance is the nearest distance between the pixel point and the second mask; a second determining submodule, used for sorting a plurality of first distances and a plurality of second distances obtained for a plurality of pixel points and determining the maximum distance; the second determining submodule is also used for determining a correction position and a correction type according to the maximum distance.
In an embodiment, the second determining module includes: the detection sub-module is used for carrying out edge detection on the initial annotation image and determining an edge area binary image of the initial annotation image; the second transformation submodule is used for carrying out distance transformation on the edge area binary image to obtain a distance transformation image; a third determining submodule, configured to determine an edge distance corresponding to the correction position according to the distance conversion image; and the third determination submodule is used for determining the correction influence radius corresponding to the correction position according to the mapping relation between the edge distance and the correction influence radius.
In an embodiment, the second determining module further includes: the third determining submodule is used for determining a distance matrix corresponding to the correction position and the correction type according to the initial annotation image; the first processing submodule is used for carrying out thresholding processing on the distance matrix according to the correction influence radius to obtain an initial feature map; the second processing sub-module is used for carrying out convolution processing on the initial feature map through a convolution neural network; the second processing sub-module is used for encoding the initial feature map after convolution processing through the attention module to obtain a first weight matrix; and the second processing sub-module is used for carrying out feature enhancement processing on the initial feature map after convolution processing according to the first weight matrix to obtain a corrected feature map.
In an embodiment, the second segmentation module includes: an extraction sub-module, used for inputting the initial annotation image into the image segmentation model, and carrying out first feature extraction on the initial annotation image through a first feature extraction unit of the image segmentation model to obtain a first feature map; a fusion sub-module, used for fusing the first feature map and the correction feature map to obtain a fused feature map; a third processing sub-module, used for encoding the fused feature map through the attention module to obtain a second weight matrix; the third processing sub-module is used for performing feature enhancement processing on the first feature map according to the second weight matrix to obtain a second feature map; and the third processing sub-module is used for processing the second feature map through other units of the image segmentation model to obtain a target labeling image.
In an embodiment, the device further comprises: the activation module is used for carrying out activation processing on the target annotation image through an activation function and determining a probability value corresponding to each pixel point in the target annotation image; and the comparison module is used for comparing the probability value with a preset probability threshold value and determining that the pixel point is foreground or background according to a comparison result.
According to a third aspect of the present disclosure, there is provided an electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods described in the present disclosure.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of the present disclosure.
After an initial medical image is acquired, the initial medical image is preprocessed to obtain a target medical image; a region-of-interest image is determined based on a frame selection operation of a user on the target medical image; the region-of-interest image is input into a U²-Net model to obtain an initial annotation image; correction information for correcting the initial annotation image is acquired, wherein the correction information comprises a correction position and a correction type; a correction influence radius is determined according to the correction position; the initial annotation image is encoded according to the correction position, the correction type and the correction influence radius to obtain a correction feature map; and the initial annotation image is corrected through the correction feature map and the image segmentation model to obtain a target annotation image.
By applying the above method, a user's frame selection operation is acquired to determine the region of interest image, yielding a more accurate region of interest image; the U²-Net model then processes the region of interest image to obtain an initial labeling image, and the initial labeling image is corrected by means of the image segmentation model according to the correction information to obtain a target labeling image. That is, the U²-Net model first performs first-stage labeling on the target medical image to obtain a corresponding initial labeling image; when the labeling precision of the first stage is not high, correction information for correcting the initial labeling image can be acquired, and second-stage labeling is performed on the initial labeling image according to the correction information, so that a labeling image with higher precision is obtained. The accuracy and efficiency of labeling the medical image are thus improved, the initial labeling image can be quickly corrected, the number of corrections can be reduced, and the medical image processing effect is better.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The above, as well as additional purposes, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which:
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Fig. 1 shows a first implementation flow diagram of a medical image processing method according to an embodiment of the disclosure;
fig. 2 shows a second implementation flow chart of a medical image processing method according to an embodiment of the disclosure;
FIG. 3 shows a third implementation flow diagram of a medical image processing method according to an embodiment of the disclosure;
Fig. 4 shows a fourth implementation flow diagram of a medical image processing method according to an embodiment of the disclosure;
Fig. 5 shows a schematic diagram of correcting the initial annotation image through the U²-Net model according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of a medical image processing apparatus according to an embodiment of the present disclosure;
fig. 7 shows a schematic diagram of a composition structure of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, features and advantages of the present disclosure more comprehensible, the technical solutions in the embodiments of the present disclosure will be clearly described in conjunction with the accompanying drawings in the embodiments of the present disclosure, and it is apparent that the described embodiments are only some embodiments of the present disclosure, but not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person skilled in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
Fig. 1 shows a schematic implementation flow diagram of a medical image processing method according to an embodiment of the disclosure, including:
step 101, an initial medical image is acquired.
An initial medical image, which may be a CT (computed tomography) image, is first acquired; the initial medical image comprises the target object to be segmented. The method finally obtains the region of the target object to be segmented from the initial medical image.
Step 102, preprocessing the initial medical image to obtain a target medical image.
Preprocessing the initial medical image makes the contour of the target object to be segmented clearer and the target object more visible, which reduces the difficulty for the model of labeling the target object in the initial medical image, facilitates labeling of the target object, and improves both labeling precision and labeling efficiency.
Step 103, determining a region of interest image based on a frame selection operation of the target medical image by the user.
A frame selection operation of a user on the target medical image is acquired, and the region selected by the user's frame is cut out of the target medical image according to the frame selection operation; this region serves as the region of interest, yielding a region of interest image that includes the target object to be segmented. Because the region of interest image is obtained through the user's frame selection operation, it necessarily includes the target object to be segmented, which avoids problems such as unclear boundaries caused by an incompletely captured region of interest that would otherwise affect labeling precision.
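As an illustrative sketch only (the disclosure does not give code), cutting the region selected by the user's frame out of the target medical image is a simple array slice; the function and variable names below are hypothetical:

```python
import numpy as np

def crop_roi(target_image: np.ndarray, box: tuple) -> np.ndarray:
    """Cut the user's frame selection (x0, y0, x1, y1) out of the target medical image."""
    x0, y0, x1, y1 = box
    return target_image[y0:y1, x0:x1]  # rows index y, columns index x
```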
Step 104, inputting the region of interest image into the U²-Net model to obtain an initial annotation image.
After the region of interest image is obtained, it is input into the U²-Net model, and the U²-Net model segments the region of interest image to obtain an initial labeling image; the initial labeling image is the initial contour image of the target object to be segmented, obtained by the U²-Net model through segmentation.
The U²-Net model used at this stage is pre-trained on large-scale natural image data sets and medical image data sets; training the U²-Net model on multiple types of large-scale data sets enables it to learn a strong feature extraction capability, and thus to perform better when labeling images.
Step 105, obtaining correction information for correcting the initial marked image, wherein the correction information comprises a correction position and a correction type.
Since the initial labeling image obtained by segmenting the target medical image with the U²-Net model may deviate from the actual region image of the target object to be segmented, the initial labeling image can be corrected so that the finally obtained labeling image is identical to the actual region image of the target object, or the error is within an allowable range. Correction information for correcting the initial labeling image is therefore acquired; the correction information characterizes the correction intention for the initial labeling image, and the correction of the initial labeling image can be realized according to it. The correction information includes a correction position, i.e. a position where the initial annotation image needs to be corrected; the correction position may be a position in a region over-segmented into the target object to be segmented in the initial annotation image, or a position in a region under-segmented into the target object to be segmented. The correction type comprises positive correction and negative correction, and characterizes whether the region corresponding to the correction position should be segmented into the region of the target object to be segmented: a positive correction indicates that the region corresponding to the correction position should be segmented into the region of the target object to be segmented, and a negative correction indicates that it should not.
The correction information can be obtained from interaction with the user; for example, the user performs a clicking or frame selection operation in the initial annotation image, and acquiring this operation amounts to acquiring the correction information for correcting the initial annotation image. Taking a click operation of the user on the initial annotation image as an example, the click operation comprises a click coordinate and a click type, wherein the click coordinate is the correction position and the click type is the correction type; the click type comprises positive click and negative click.
Step 106, determining a correction influence radius according to the correction position.
In order to make the correction of the initial labeled image more accurate and reduce the number of corrections, the size of the correction area can be determined according to the correction position. In general, the correction area can be represented by a circle, so its size can be determined by the correction influence radius. Determining the correction influence radius according to the correction position allows the radius to change with the distance between the correction position and the edge of the target object segmented in the initial labeling image: when the correction position is close to the edge of the target object to be segmented, the correction influence radius automatically becomes smaller, preventing more areas outside the edge from being affected; when the correction position is far away from the edge of the target object to be segmented, the correction influence radius automatically becomes larger, so that the correction position can influence a larger area. Determining the correction influence radius according to the correction position can improve the flexibility and robustness of image processing.
Step 107, encoding the initial labeled image according to the correction position, the correction type and the correction influence radius to obtain a correction feature map.
The initial labeled image is encoded according to the correction position, the correction type and the correction influence radius so as to convert this information into a form the model can understand, obtaining a correction feature map; the correction feature map can be used for correcting the initial labeled image.
Step 108, correcting the initial annotation image according to the correction feature map and the image segmentation model to obtain the target annotation image.
The correction feature map and the initial labeling image are input into the image segmentation model; the correction feature map serves as the basis for the image segmentation model to correct the initial labeling image, so that the model corrects the initial labeling image according to the information provided by the correction feature map, and the result finally output by the image segmentation model is the target labeling image. The target labeling image is the image corresponding to the region of the target object to be segmented.
According to the medical image processing method provided by the embodiment of the disclosure, after the acquired medical image is preprocessed to obtain the target medical image, the region-of-interest image comprising the target object to be segmented is first determined from the target medical image according to the frame selection operation of a user; the U²-Net model then processes the region of interest image to obtain an initial annotation image; the correction influence radius is then determined through the correction information, and the initial annotation image is corrected according to the correction information and the correction influence radius to obtain a target annotation image, which is the region image of the target object to be segmented. By this method, the accuracy and efficiency of medical image processing can be improved, and the medical image can be quickly corrected.
In one embodiment, preprocessing the initial medical image to obtain the target medical image includes:
converting the data format of the initial medical image into a preset data format;
and adjusting the window width and the window level of the initial medical image according to the target object to be segmented to obtain a target medical image.
The initial medical image is typically a CT image. When analyzing the medical image, the data format of the initial medical image may be converted into a format that facilitates model processing. For example, MRI (magnetic resonance imaging) and CT image data are typically stored in formats such as DICOM (Digital Imaging and Communications in Medicine) or NIfTI (Neuroimaging Informatics Technology Initiative), whereas image processing or machine learning algorithms typically operate on general image data such as PNG (Portable Network Graphics) or JPEG (Joint Photographic Experts Group). Therefore, after the initial medical image is obtained, its data format may be converted into a preset data format that the model can process.
Different window width and window level parameters emphasize the characteristics of different tissues in the initial medical image. Therefore, the window width and window level parameters corresponding to the target object to be segmented are determined according to that target object, and the window width and window level of the initial medical image are adjusted to those parameters to obtain the target medical image. Adjusting the window width and window level highlights the characteristics of the target object to be segmented so that it is clearly represented; the resulting target medical image gives higher visibility to the target object, reduces the difficulty for the model to label it, yields higher labeling precision, and improves labeling quality.
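As a minimal sketch of the window width and window level adjustment described above (not taken from the disclosure; the function name and example window values are illustrative, and the image is assumed to be in Hounsfield units):

```python
import numpy as np

def apply_window(ct_hu: np.ndarray, window_level: float, window_width: float) -> np.ndarray:
    """Clip a CT image to the given window and rescale to an 8-bit display range."""
    low = window_level - window_width / 2.0
    high = window_level + window_width / 2.0
    windowed = np.clip(ct_hu, low, high)
    return ((windowed - low) / (high - low) * 255.0).astype(np.uint8)

# Illustrative soft-tissue window; real values depend on the target object to be segmented.
ct_slice = np.random.randint(-1000, 1000, size=(512, 512)).astype(np.float32)
target_image = apply_window(ct_slice, window_level=40, window_width=400)
```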
Furthermore, the preprocessing of the initial medical image may further comprise: the method comprises the steps of carrying out standardization processing on an initial medical image, adjusting the size of the initial medical image, denoising the initial medical image and the like.
In one embodiment, obtaining correction information for correcting the initial annotation image includes:
determining a first mask and a second mask according to the initial annotation image and the standard annotation image, wherein the first mask represents an area which is undersegmented into target objects in the initial annotation image, and the second mask represents an area which is oversegmented into target objects in the initial annotation image;
Performing distance transformation on the first mask and the second mask to obtain a first distance between each pixel point and the first mask and a second distance between each pixel point and the second mask, wherein the first distance is the nearest distance between each pixel point and the first mask, and the second distance is the nearest distance between each pixel point and the second mask;
sorting a plurality of first distances and a plurality of second distances obtained for a plurality of pixel points, and determining a maximum distance;
and determining the correction position and the correction type according to the maximum distance.
The process included in this embodiment is generally used in the model training stage: the model is trained by means of standard labeling images, so the obtained correction position and correction type serve as simulated correction information. The initial labeling image is obtained by the U²-Net model segmenting the region of interest image, and represents the initial region image of the target object to be segmented; this initial region image may deviate from the actual region image of the target object. The first mask and the second mask are therefore determined from the initial annotation image and the standard annotation image. The standard annotation image is the actual region image of the target object to be segmented; the first mask is the region that is under-segmented into the target object in the initial annotation image, i.e. the region included in the standard annotation image but not in the initial annotation image, and the second mask is the region that is over-segmented into the target object in the initial annotation image, i.e. the region not included in the standard annotation image but included in the initial annotation image. Specifically, the first mask and the second mask are calculated by the following formulas:
False Negative Mask = GtMask AND (NOT PredMask) (1)
False Positive Mask = PredMask AND (NOT GtMask) (2)
wherein False Negative Mask in formula (1) is the first mask, False Positive Mask in formula (2) is the second mask, GtMask is the standard annotation image, and PredMask is the initial annotation image.
Distance transformation is performed on the first mask and the second mask respectively using the distance transformation function in the Open Source Computer Vision Library (OpenCV). The nearest distance between each pixel point and the first mask is determined as the first distance of that pixel point, and the nearest distance between each pixel point and the second mask as its second distance. The first distances and the second distances of all the pixel points are sorted, the maximum distance is determined from among them, and the pixel point corresponding to the maximum distance is determined as the correction position; when the maximum distance is a first distance, the correction type is determined as positive correction, and when the maximum distance is a second distance, the correction type is determined as negative correction.
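A minimal sketch of this training-time simulation, assuming binary uint8 masks and reading the distance transform in the usual interactive-segmentation sense (for each error pixel, the distance to the boundary of its error region, whose maximum marks the most central error pixel); the function name simulate_correction is illustrative:

```python
import cv2
import numpy as np

def simulate_correction(gt_mask: np.ndarray, pred_mask: np.ndarray):
    """Derive a simulated correction position and correction type from the
    deviation between the standard annotation (GtMask) and the initial
    annotation (PredMask). Both masks are binary uint8 arrays."""
    fn_mask = ((gt_mask == 1) & (pred_mask == 0)).astype(np.uint8)  # first mask: under-segmented
    fp_mask = ((pred_mask == 1) & (gt_mask == 0)).astype(np.uint8)  # second mask: over-segmented
    fn_dist = cv2.distanceTransform(fn_mask, cv2.DIST_L2, 5)
    fp_dist = cv2.distanceTransform(fp_mask, cv2.DIST_L2, 5)
    if fn_dist.max() >= fp_dist.max():
        pos = np.unravel_index(int(np.argmax(fn_dist)), fn_dist.shape)
        return pos, "positive"  # the region should be added to the target object
    pos = np.unravel_index(int(np.argmax(fp_dist)), fp_dist.shape)
    return pos, "negative"      # the region should be removed from the target object
```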
In one embodiment, as shown in FIG. 2, determining a correction impact radius based on the correction location includes:
step 201, performing edge detection on the initial annotation image, and determining an edge area binary image of the initial annotation image;
Step 202, performing distance transformation on the edge area binary image to obtain a distance transformation image;
step 203, determining an edge distance corresponding to the correction position according to the distance conversion image;
and 204, determining the correction influence radius corresponding to the correction position according to the mapping relation between the edge distance and the correction influence radius.
An edge detection algorithm is used to perform edge detection on the initial annotation image, and the edge region obtained by segmentation of the initial annotation image is determined; this edge region is the edge of the target object to be segmented in the initial annotation image, yielding an edge region binary image corresponding to the initial annotation image. The distance transformation function in OpenCV is then used to perform distance transformation on the binary image to obtain the corresponding distance transformation image; in the distance transformation image, the pixel value of each pixel point represents the nearest distance between that pixel point and the edge of the target object to be segmented. For each pixel point, the distances to the pixel points contained in the edge region are determined, and the minimum of these distances is selected as the nearest distance between that pixel point and the edge region. The pixel value of the pixel point at the correction position is read from the distance transformation image; this value is the distance from the correction position to the edge region of the target object to be segmented, i.e. the edge distance.
The click influence radius of the correction position is determined according to the mapping relation between the edge distance and the correction influence radius and the edge distance corresponding to the correction position. The mapping relation between the edge distance and the correction influence radius is: R = a × e^(−b×d), wherein R is the correction influence radius, d is the edge distance corresponding to the correction position, a is a first correction parameter, and b is a second correction parameter; the click influence radius of the correction position can be controlled by adjusting the first correction parameter and the second correction parameter. The smaller the edge distance d corresponding to the correction position, the closer the correction position is to the edge region and the smaller the correction influence radius R; the larger the edge distance d, the farther the correction position is from the edge region and the larger the correction influence radius R. In addition, the first correction parameter and the second correction parameter can be adjusted according to the requirements of a specific application scene. For example, if the correction influence radius is desired to change faster, a larger second correction parameter may be set; if it is desired to change more slowly, a smaller second correction parameter may be set. Likewise, the value of the first correction parameter determines the maximum value of the correction influence radius, and it can be determined from the long and short sides of the target object to be segmented, for example by setting it to the mean or the geometric mean of the long and short sides of the target object. In this way, the correction influence radius adapts to the edge distance of the correction position, so that the labeling precision is higher and the labeling effect is better.
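The adaptive radius can be sketched as follows; the disclosure only names "an edge detection algorithm", so the Canny detector and its thresholds here are illustrative assumptions, as is the function name:

```python
import cv2
import numpy as np

def correction_radius(initial_annotation: np.ndarray, click_rc: tuple,
                      a: float, b: float) -> float:
    """Adaptive correction influence radius R = a * exp(-b * d), where d is the
    edge distance of the correction position click_rc = (row, col)."""
    # Edge-region binary image of the (binary 0/1) initial annotation.
    edges = cv2.Canny(initial_annotation.astype(np.uint8) * 255, 100, 200)
    # Distance of every pixel to the nearest edge pixel: transform the non-edge
    # region so that edge pixels are the zeros the transform measures to.
    dist_img = cv2.distanceTransform((edges == 0).astype(np.uint8), cv2.DIST_L2, 5)
    d = float(dist_img[click_rc])     # edge distance of the correction position
    return float(a * np.exp(-b * d))  # near the edge: small R; far from it: R approaches a
```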
In one embodiment, as shown in fig. 3, the method for encoding the initial annotation image according to the correction position, the correction type and the correction influence radius to obtain the correction feature map includes:
step 301, determining a distance matrix corresponding to the correction position and the correction type according to the initial annotation image;
step 302, thresholding the distance matrix according to the corrected influence radius to obtain an initial feature map;
step 303, performing convolution processing on the initial feature map through a convolution neural network;
step 304, coding the initial feature map after convolution processing through an attention module to obtain a first weight matrix;
and 305, performing feature enhancement processing on the initial feature map after the convolution processing according to the first weight matrix to obtain a corrected feature map.
A two-dimensional array is generated according to the size of the initial annotation image, and a coordinate tensor is constructed from it; the tensor is initialized as a stack of row and column coordinates, so that each element contains position information relative to the correction position and reflects the relative position of each pixel point in space. The coordinate tensor is evaluated according to the correction type to determine the distance of each pixel point relative to the correction position, generating a distance matrix in which the distance is expressed as the square of the Euclidean distance. The distance matrix is then thresholded with the correction influence radius as the threshold: grid coordinate points whose distance is smaller than the correction influence radius are set to 1, and those whose distance is larger are set to 0, yielding the initial feature map.
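A minimal sketch of this click encoding, assuming the squared Euclidean distance is compared against the squared radius so that both sides of the threshold are in the same units; encode_click is an illustrative name:

```python
import numpy as np

def encode_click(shape: tuple, click_rc: tuple, radius: float) -> np.ndarray:
    """Initial feature map for one correction click: 1 inside the influence
    circle around the click position, 0 outside."""
    rows, cols = np.meshgrid(np.arange(shape[0]), np.arange(shape[1]), indexing="ij")
    sq_dist = (rows - click_rc[0]) ** 2 + (cols - click_rc[1]) ** 2  # distance matrix
    return (sq_dist < radius ** 2).astype(np.float32)

# Positive and negative correction types would typically occupy separate
# channels of the encoded input (an assumption, not stated verbatim above).
```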
The initial feature map is then reduced in dimension and abstracted by a convolutional neural network. A first convolutional layer with a kernel size of 1 takes the initial feature map as input and outputs a third feature map with 16 channels; the third feature map is then taken as input to a second convolutional layer, which outputs a fourth feature map with 64 channels and downsamples its input while preserving the spatial information of the image. After the first convolutional layer produces the third feature map, an activation function may further process it to increase its nonlinearity; in this embodiment, a ReLU activation function may be used. After the fourth feature map is obtained, the attention module encodes it to obtain the first weight matrix; specifically, a spatial-channel attention module may be used to calculate the first weight matrix. The first weight matrix is multiplied by the fourth feature map to realize feature enhancement of the fourth feature map, increasing the weight of its important information and yielding the correction feature map.
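Sketched in PyTorch under stated assumptions: the layer widths (a 1×1 convolution to 16 channels with ReLU, then a downsampling convolution to 64 channels) follow the description, while the two-channel input, the stride and kernel of the second layer, and the exact attention form are illustrative; the spatial-channel attention module named above is simplified here to a purely spatial gate:

```python
import torch
import torch.nn as nn

class CorrectionEncoder(nn.Module):
    """Correction-feature branch: 1x1 conv (16 channels, ReLU), downsampling
    conv (64 channels), then attention-based feature enhancement."""
    def __init__(self, in_channels: int = 2):  # e.g. positive/negative click maps
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, 16, kernel_size=1)
        self.conv2 = nn.Conv2d(16, 64, kernel_size=3, stride=2, padding=1)
        self.attention = nn.Sequential(            # simplified spatial attention
            nn.Conv2d(64, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f3 = torch.relu(self.conv1(x))   # third feature map, 16 channels
        f4 = self.conv2(f3)              # fourth feature map, 64 channels, downsampled
        w = self.attention(f4)           # first weight matrix
        return f4 * w                    # feature-enhanced correction feature map
```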
In one embodiment, as shown in fig. 4, the correcting the initial labeling image according to the corrected feature map and the image segmentation model to obtain the target labeling image includes:
Step 401, inputting an initial labeling image into an image segmentation model, and performing first feature extraction on the initial labeling image through a first feature extraction unit of the image segmentation model to obtain a first feature map;
step 402, fusing the first feature map and the corrected feature map to obtain a fused feature map;
step 403, coding the fusion feature map through the attention module to obtain a second weight matrix;
step 404, performing feature enhancement processing on the first feature map according to the second weight matrix to obtain a second feature map;
and step 405, processing the second feature map through other units of the image processing model to obtain a target labeling image.
First, the initial labeling image is input into the image segmentation model for feature extraction; in this application, the image segmentation model is again a U²-Net model so that the image processing effect is better, and Fig. 5 shows the process of correcting the initial labeling image through the U²-Net model. The first feature extraction unit in the U²-Net model first performs feature extraction on the initial labeling image to obtain a first feature map, which is added to and fused with the correction feature map to obtain a fused feature map; this additive fusion enhances the information density of the first feature map, introduces the user's intention, and provides rich information for subsequent image processing. The fused feature map is then input into the attention module for encoding to obtain a second weight matrix; specifically, a spatial-channel attention mechanism module may be used. The second weight matrix is multiplied by the first feature map, and the weights of the first feature map are redistributed through the second weight matrix to optimize the extracted features; specifically, this is realized by the following formula: F_fuse = F × X, where F_fuse represents the first feature map with re-assigned weights, F is the first feature map, and X is the second weight matrix. The first feature map with re-assigned weights is input into the second feature extraction unit of the U²-Net model, and the remaining units of the U²-Net model process the second feature map to obtain the target labeling image. Because the second weight matrix is calculated from the correction feature map and the first feature map, the attention weights are adjusted dynamically, enabling accurate labeling of the key position.
To alleviate information loss during network deepening and weight distribution, the first feature map with re-assigned weights can also be added at the second feature extraction unit of the U²-Net model in a skip-connection manner, so that its features are integrated into the model for processing. The specific formula is: F_stage2 = f_stage2(F_fuse; θ) + F_fuse, wherein F_stage2 represents the features extracted by the second feature extraction unit, f_stage2 represents the second-stage U²-Net network, θ represents the parameters of the U²-Net model, and F_fuse represents the first feature map with re-assigned weights. Thereafter, the other units of the U²-Net model process the features extracted by the second feature extraction unit to finally obtain the target labeling image.
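Assuming the attention module is any nn.Module that produces a weight map broadcastable over the first feature map, and that the correction feature map has already been brought to the same shape, the fusion, reweighting, and skip connection above can be sketched as:

```python
import torch
import torch.nn as nn

def fuse_and_reweight(first_fm: torch.Tensor, correction_fm: torch.Tensor,
                      attention: nn.Module) -> torch.Tensor:
    """Fuse the first feature map with the correction feature map, then apply
    the second weight matrix: F_fuse = F * X."""
    fused = first_fm + correction_fm  # additive fusion of the two maps
    x = attention(fused)              # second weight matrix
    return first_fm * x               # F_fuse, the reweighted first feature map

def second_stage(f_stage2_unit: nn.Module, f_fuse: torch.Tensor) -> torch.Tensor:
    """Skip connection around the second feature extraction unit:
    F_stage2 = f_stage2(F_fuse; theta) + F_fuse."""
    return f_stage2_unit(f_fuse) + f_fuse
```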
In addition, if the obtained target annotation image has deviation from the standard annotation image of the target object to be segmented, the obtained target annotation image can be used as a new initial annotation image, correction information is acquired again according to the new initial annotation image, and the new initial annotation image is corrected based on the correction information until the finally obtained target annotation image is consistent with the standard annotation image of the target object to be segmented or the error is within a preset range.
Each unit of the U²-Net model is a special neural network structure that can further extract and learn the features of the image during processing; this learning process enables the U²-Net model to gradually adapt to and understand the various complex features of the input image, thereby achieving high labeling accuracy in interactive medical image processing.
In an embodiment, the method further comprises:
Activating the target annotation image through an activation function, and determining a probability value corresponding to each pixel point in the target annotation image;
and comparing the probability value with a preset probability threshold value, and determining the pixel point as a foreground or a background according to the comparison result.
After the target annotation image is obtained, it is activated through a sigmoid activation function, which converts the pixel value of each pixel point in the target annotation image into a probability value between 0 and 1, giving the probability that each pixel in the target annotation image is foreground or background. The probability value is then thresholded: it is compared with a preset probability threshold, and the pixel point is determined to be foreground or background according to the comparison result, so that the target annotation image can be displayed visually.
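A minimal sketch of this post-processing; the 0.5 threshold is an illustrative default rather than a value fixed by the disclosure:

```python
import torch

def binarize(target_logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Sigmoid activation followed by thresholding: True marks foreground pixels."""
    probs = torch.sigmoid(target_logits)  # per-pixel probability in (0, 1)
    return probs > threshold              # compare against the preset probability threshold
```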
Fig. 6 shows a block diagram of a medical image processing apparatus according to an embodiment of the present disclosure.
Referring to fig. 6, according to a second aspect of an embodiment of the present disclosure, there is provided a medical image processing apparatus, the apparatus including: a first acquisition module 601, configured to acquire an initial medical image; a processing module 602, configured to preprocess the initial medical image to obtain a target medical image; a first determining module 603, configured to determine a region of interest image based on a frame selection operation of the target medical image by the user; a first segmentation module 604, configured to input the region of interest image into the U²-Net model to obtain an initial annotation image; a second acquisition module 605, configured to obtain correction information for correcting the initial labeling image, where the correction information includes a correction position and a correction type; a second determining module 606, configured to determine a correction influence radius based on the correction position; the second determining module 606 is further configured to encode the initial labeling image according to the correction position, the correction type and the correction influence radius, so as to obtain a correction feature map; and a second segmentation module 607, configured to correct the initial labeling image according to the correction feature map and the image segmentation model, so as to obtain the target labeling image.
In one embodiment, the processing module 602 includes: a transformation submodule 6021 for transforming the data format of the initial medical image into a preset data format; an adjustment submodule 6022 is used for adjusting the window width and the window level of the initial medical image according to the target object to be segmented to obtain a target medical image.
In one embodiment, the second acquisition module 605 includes: a first determining submodule 6051, configured to determine a first mask and a second mask according to the initial annotation image and the standard annotation image, where the first mask represents an area under-segmented into the target object in the initial annotation image, and the second mask represents an area over-segmented into the target object in the initial annotation image; a first transformation submodule 6052, configured to perform distance transformation on the first mask and the second mask to obtain a first distance between each pixel point and the first mask and a second distance between each pixel point and the second mask, where the first distance is the closest distance between each pixel point and the first mask, and the second distance is the closest distance between each pixel point and the second mask; a second determining submodule 6053, configured to sort a plurality of first distances and a plurality of second distances obtained for a plurality of pixel points, and determine a maximum distance; the second determining submodule 6053 is further configured to determine a correction position and a correction type based on the maximum distance.
In an embodiment, the second determining module 606 includes: a detection submodule 6061, configured to perform edge detection on the initial annotation image, and determine an edge region binary image of the initial annotation image; a second transformation submodule 6062, configured to perform distance transformation on the edge area binary image to obtain a distance transformed image; a third determining submodule 6063 for determining an edge distance corresponding to the correction position according to the distance conversion image; a third determining submodule 6063 is configured to determine a correction impact radius corresponding to the correction position according to the mapping relationship between the edge distance and the correction impact radius.
In an embodiment, the second determining module 606 further includes: a third determining submodule 6063, configured to determine a distance matrix corresponding to the correction position and the correction type according to the initial annotation image; a first processing sub-module 6064, configured to perform thresholding on the distance matrix according to the corrected influence radius, to obtain an initial feature map; a second processing sub-module 6065, configured to perform convolution processing on the initial feature map through a convolutional neural network; a second processing sub-module 6065, configured to encode the initial feature map after convolution processing by using the attention module, to obtain a first weight matrix; the second processing sub-module 6065 is configured to perform feature enhancement processing on the convolved initial feature map according to the first weight matrix, so as to obtain a corrected feature map.
In one embodiment, the second segmentation module 607 includes: an extraction submodule 6071, configured to input the initial annotation image into the image segmentation model, and perform first feature extraction on the initial annotation image through a first feature extraction unit of the image segmentation model to obtain a first feature map; a fusion submodule 6072, configured to fuse the first feature map and the correction feature map to obtain a fused feature map; a third processing sub-module 6073, configured to encode the fused feature map through the attention module to obtain a second weight matrix; the third processing sub-module 6073 is configured to perform feature enhancement processing on the first feature map according to the second weight matrix to obtain a second feature map; and the third processing sub-module 6073 is configured to process the second feature map through other units of the image segmentation model to obtain a target labeling image.
In an embodiment, the apparatus further comprises: the activation module 608 is configured to perform activation processing on the target annotation image through an activation function, and determine a probability value corresponding to each pixel point in the target annotation image; and the comparison module is used for comparing the probability value with a preset probability threshold value and determining the pixel point as a foreground or a background according to the comparison result.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device and a readable storage medium.
Fig. 7 illustrates a schematic block diagram of an example electronic device 700 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the apparatus 700 includes a computing unit 701 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 may also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in device 700 are connected to I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, etc.; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, an optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 701 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 701 performs the respective methods and processes described above, for example, a medical image processing method. For example, in some embodiments, a medical image processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the medical image processing method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform a medical image processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor that can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present disclosure, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
The foregoing is merely a specific embodiment of the disclosure, but the protection scope of the disclosure is not limited thereto. Any person skilled in the art can easily conceive of changes or substitutions within the technical scope of the disclosure, and these shall be covered by the protection scope of the disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (10)

1. A method of processing a medical image, the method comprising:
acquiring an initial medical image;
preprocessing the initial medical image to obtain a target medical image;
determining a region of interest image based on a user's frame selection operation of the target medical image;
inputting the region of interest image into a U²-Net model to obtain an initial annotation image;
acquiring correction information for correcting the initial annotation image, wherein the correction information comprises a correction position and a correction type;
determining a correction influence radius according to the correction position;
encoding the initial annotation image according to the correction position, the correction type and the correction influence radius to obtain a correction feature map;
and correcting the initial annotation image according to the correction feature map and the image segmentation model to obtain a target annotation image.
2. The method of claim 1, wherein the preprocessing the initial medical image to obtain a target medical image comprises:
converting the data format of the initial medical image into a preset data format;
and adjusting the window width and the window level of the initial medical image according to the target object to be segmented to obtain a target medical image.
3. The method of claim 1, wherein the obtaining correction information for correcting the initial annotation image comprises:
determining a first mask and a second mask according to the initial annotation image and the standard annotation image, wherein the first mask represents an area which is undersegmented into a target object in the initial annotation image, and the second mask represents an area which is oversegmented into the target object in the initial annotation image;
performing distance transformation on the first mask and the second mask to obtain a first distance between each pixel point and the first mask and a second distance between each pixel point and the second mask, wherein the first distance is the nearest distance between the pixel point and the first mask, and the second distance is the nearest distance between the pixel point and the second mask;
sorting the plurality of first distances and the plurality of second distances obtained for the plurality of pixel points, and determining a maximum distance;
and determining the correction position and the correction type according to the maximum distance.
4. The method of claim 1, wherein the determining a correction influence radius according to the correction position comprises:
performing edge detection on the initial annotation image, and determining an edge region binary image of the initial annotation image;
performing distance transformation on the edge region binary image to obtain a distance transformation image;
determining the edge distance corresponding to the correction position according to the distance conversion image;
and determining the correction influence radius corresponding to the correction position according to the mapping relation between the edge distance and the correction influence radius.
5. The method of claim 1, wherein the encoding the initial annotation image according to the correction position, the correction type, and the correction influence radius to obtain a correction feature map comprises:
determining a distance matrix corresponding to the correction position and the correction type according to the initial annotation image;
performing thresholding on the distance matrix according to the correction influence radius to obtain an initial feature map;
performing convolution processing on the initial feature map through a convolution neural network;
encoding the initial feature map after convolution processing through an attention module to obtain a first weight matrix;
and performing feature enhancement processing on the initial feature map after convolution processing according to the first weight matrix to obtain a correction feature map.
6. The method according to claim 1, wherein the correcting the initial annotation image according to the correction feature map and the image segmentation model to obtain the target annotation image comprises:
inputting the initial annotation image into the image segmentation model, and performing first feature extraction on the initial annotation image through a first feature extraction unit of the image segmentation model to obtain a first feature map;
fusing the first feature map and the correction feature map to obtain a fused feature map;
encoding the fusion feature map through an attention module to obtain a second weight matrix;
performing feature enhancement processing on the first feature map according to the second weight matrix to obtain a second feature map;
and processing the second feature map through the remaining units of the image segmentation model to obtain a target annotation image.
7. The method according to claim 1, wherein the method further comprises:
activating the target annotation image through an activation function, and determining a probability value corresponding to each pixel point in the target annotation image;
and comparing the probability value with a preset probability threshold value, and determining the pixel point as a foreground or a background according to a comparison result.
8. A medical image processing apparatus, the apparatus comprising:
the first acquisition module is used for acquiring an initial medical image;
the processing module is used for preprocessing the initial medical image to obtain a target medical image;
a first determining module, configured to determine a region of interest image based on a frame selection operation of the target medical image by a user;
a first segmentation module for inputting the region of interest image into a U²-Net model to obtain an initial annotation image;
the second acquisition module is used for acquiring correction information for correcting the initial annotation image, wherein the correction information comprises a correction position and a correction type;
the second determining module is used for determining a correction influence radius according to the correction position;
the second determining module is further configured to encode the initial annotation image according to the correction position, the correction type and the correction influence radius, so as to obtain a correction feature map;
and the second segmentation module is used for correcting the initial annotation image according to the correction feature map and the image segmentation model to obtain a target annotation image.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-7.
CN202311825448.2A 2023-12-27 2023-12-27 Medical image processing method and device, electronic equipment and storage medium Pending CN117809092A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311825448.2A CN117809092A (en) 2023-12-27 2023-12-27 Medical image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311825448.2A CN117809092A (en) 2023-12-27 2023-12-27 Medical image processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117809092A true CN117809092A (en) 2024-04-02

Family

ID=90419484

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311825448.2A Pending CN117809092A (en) 2023-12-27 2023-12-27 Medical image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117809092A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination