CN111080655A - Image segmentation and model training method, device, medium and electronic equipment


Info

Publication number
CN111080655A
CN111080655A (application CN201911226263.3A)
Authority
CN
China
Prior art keywords
feature
feature map
level
image
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911226263.3A
Other languages
Chinese (zh)
Inventor
赵若涵
伍健荣
朱艳春
李仁
曹世磊
马锴
郑冶枫
陈景亮
杨昊臻
常佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911226263.3A priority Critical patent/CN111080655A/en
Publication of CN111080655A publication Critical patent/CN111080655A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20016Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides an image segmentation method, an image segmentation device, a model training method and device, a medium, and electronic equipment. The image segmentation method comprises the following steps: performing multi-scale down-sampling on an image to be segmented to obtain an initial feature map for each feature level; based on the initial feature maps of each pair of adjacent feature levels, sequentially generating derived feature maps of the lower of the two levels through an up-sampling attention mechanism; and finally fusing the initial feature map and the derived feature maps of the lowest feature level to obtain the feature map of the image to be segmented. Because the feature maps of adjacent feature levels are processed with the up-sampling attention mechanism and the features of all levels are fused, the semantic information of the image to be segmented is extracted effectively, a more accurate feature map of the image to be segmented is guaranteed, and the accuracy of image segmentation is improved.

Description

Image segmentation and model training method, device, medium and electronic equipment
Technical Field
The application relates to the technical field of computers and communication, in particular to an image segmentation and model training method, device, medium and electronic equipment.
Background
Image segmentation is a technique and process that divides an image into several regions with distinctive properties and extracts objects of interest. Conventional image segmentation methods are typically threshold-based or region-based. During processing, such methods often fail to retain the features of the original image; in particular, when the features of the target region differ little from those of other regions, inaccurate segmentation easily results.
Disclosure of Invention
Embodiments of the present application provide an image segmentation method and device, an image segmentation model training method and device, a medium, and an electronic device, so that the semantic information of an image to be segmented can, at least to a certain extent, be effectively extracted, improving the accuracy of image segmentation.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned by practice of the application.
According to an aspect of an embodiment of the present application, there is provided an image segmentation method, including: performing multi-scale down-sampling processing on an image to be segmented to obtain a plurality of initial feature maps, wherein each initial feature map corresponds to a feature level; sequentially generating, based on the initial feature maps of two adjacent feature levels, a derived feature map of the lower of the two feature levels through an up-sampling attention mechanism; fusing the initial feature map and the derived feature maps of the lowest feature level among all the feature levels to obtain the feature map of the image to be segmented; and performing segmentation processing on the image to be segmented according to the feature map of the image to be segmented.
According to an aspect of an embodiment of the present application, there is provided an image segmentation apparatus, including: a first sampling unit configured to perform multi-scale down-sampling processing on an image to be segmented to obtain a plurality of initial feature maps, each initial feature map corresponding to a feature level; a second sampling unit configured to sequentially generate, based on the initial feature maps of two adjacent feature levels, a derived feature map of the lower of the two feature levels through an up-sampling attention mechanism; a first fusion unit configured to fuse the initial feature map and the derived feature maps of the lowest feature level among all the feature levels to obtain the feature map of the image to be segmented; and a segmentation unit configured to perform segmentation processing on the image to be segmented according to the feature map of the image to be segmented.
In some embodiments of the present application, based on the foregoing scheme, the second sampling unit includes: a first mixing unit configured to generate a mixed feature map of an ith feature map of a first feature level and an ith feature map of a second feature level based on an upsampling attention mechanism for an adjacent first feature level and the second feature level higher than the first feature level, where i is a natural number greater than 1; and the second fusion unit is used for performing fusion processing on all feature maps between the initial feature map of the first feature level and the ith feature map and the mixed feature map to obtain an (i + 1) th feature map corresponding to the first feature level.
In some embodiments of the present application, based on the foregoing solution, the first mixing unit includes: a first updating unit, configured to update an ith feature map of the first feature level based on an ith feature map of the second feature level, to obtain an updated feature map of the ith feature map of the first feature level; and the second mixing unit is used for generating the mixed feature map according to the updated feature map and the ith feature map of the second feature level.
In some embodiments of the present application, based on the foregoing solution, the second mixing unit includes: a third sampling unit configured to up-sample the ith feature map of the second feature level to obtain a high-level sampling feature, or to perform bilinear interpolation on the ith feature map of the second feature level to obtain the high-level sampling feature, or to perform both up-sampling and bilinear interpolation on the ith feature map of the second feature level to obtain the high-level sampling feature; and a third mixing unit configured to fuse the updated feature map and the high-level sampling feature to obtain the mixed feature map.
In some embodiments of the present application, based on the foregoing scheme, the third mixing unit includes: a fourth mixing unit, configured to add the feature element value corresponding to the updated feature map and the feature element value corresponding to the high-level sampling feature to obtain a feature element value corresponding to the mixed feature map; and the fifth mixing unit is used for generating the mixed feature map according to the feature element values corresponding to the mixed feature map.
In some embodiments of the present application, based on the foregoing scheme, the first updating unit includes: the first vector unit is used for carrying out linear mapping processing on the ith feature map of the second feature level to obtain a vector operator; and the second updating unit is used for updating the features in the ith feature map of the first feature level based on the vector operator to obtain the updated feature map.
In some embodiments of the present application, based on the foregoing scheme, the second updating unit is configured to: multiply the vector operator by the feature element values contained in the ith feature map of the first feature level to obtain updated feature element values; and generate the updated feature map according to the updated feature element values.
In some embodiments of the present application, based on the foregoing, the first vector unit is configured to: and carrying out nonlinear activation processing on the result of the linear mapping processing to obtain the vector operator.
In some embodiments of the present application, based on the foregoing, the second fusion unit is configured to: concatenate all feature maps between the initial feature map of the first feature level and the ith feature map based on a residual network to obtain a series feature map; and perform convolution fusion processing on the series feature map and the mixed feature map to obtain the (i+1)-th feature map corresponding to the first feature level.
In some embodiments of the present application, based on the foregoing scheme, the first sampling unit is configured to: preprocessing the image to be segmented to obtain a preprocessed image; extracting target features corresponding to preset color channels in the preprocessed image; and carrying out multi-scale down-sampling processing on the target features to obtain an initial feature map corresponding to each feature level.
In some embodiments of the present application, based on the foregoing solution, the image segmentation apparatus further includes: an image acquisition unit for acquiring the medical image; and the image processing unit is used for carrying out segmentation processing on the medical image to obtain an organ segmentation image.
According to an aspect of an embodiment of the present application, there is provided a method for training an image segmentation model, including: constructing a segmentation network model; inputting a sample image into the segmentation network model to obtain a segmentation result output by the segmentation network model, wherein the segmentation network model performs multi-scale down-sampling processing on the sample image to obtain an initial feature map corresponding to each feature level, and generates a derivative feature map of a lower feature level in two adjacent feature levels through an up-sampling attention mechanism to generate the segmentation result based on a fusion feature map between the initial feature map and the derivative feature map of the lowest feature level in all the feature levels; determining loss information of the segmentation network model according to the segmentation result and a segmentation label corresponding to the sample image; and updating parameters of the segmentation network model according to the loss information to obtain an image segmentation model.
According to an aspect of an embodiment of the present application, there is provided an apparatus for training an image segmentation model, including: the construction unit is used for constructing a segmentation network model; the segmentation unit is used for inputting a sample image into the segmentation network model to obtain a segmentation result output by the segmentation network model, wherein the segmentation network model performs multi-scale down-sampling processing on the sample image to obtain an initial feature map corresponding to each feature level, and generates a derivative feature map of a lower feature level in two adjacent feature levels through an up-sampling attention mechanism to generate the segmentation result based on a fusion feature map between the initial feature map and the derivative feature map of the lowest feature level in all the feature levels; the loss unit is used for determining loss information of the segmentation network model according to the segmentation result and the segmentation label corresponding to the sample image; and the model unit is used for updating the parameters of the segmentation network model according to the loss information to obtain an image segmentation model.
According to an aspect of embodiments of the present application, there is provided a computer readable medium, on which a computer program is stored, which, when being executed by a processor, implements an image segmentation method as described in the above embodiments, or implements a training method of an image segmentation model as described in the above embodiments.
According to an aspect of an embodiment of the present application, there is provided an electronic device including: one or more processors; a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement an image segmentation method as described in the above embodiments, or a training method of an image segmentation model as described in the above embodiments.
In the technical solutions provided by some embodiments of the present application, multi-scale down-sampling is performed on an image to be segmented to obtain an initial feature map for each feature level. Based on the initial feature maps of each pair of adjacent feature levels, derived feature maps of the lower of the two levels are generated in turn through an up-sampling attention mechanism, and finally the initial feature map and the derived feature maps of the lowest feature level are fused to obtain the feature map of the image to be segmented. In this way, the feature maps of adjacent feature levels are processed with the up-sampling attention mechanism to obtain the feature maps of each level, and the several feature maps of the lowest level are then fused into the feature map of the image to be segmented. Because this processing fuses features from multiple feature levels, the semantic information of the image to be segmented is extracted more effectively, a more accurate feature map of the image to be segmented is guaranteed, and the accuracy of image segmentation is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application. It is obvious that the drawings in the following description are only some embodiments of the application, and that for a person skilled in the art, other drawings can be derived from them without inventive effort. In the drawings:
FIG. 1 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present application may be applied;
FIG. 2 shows a schematic diagram of an exemplary system architecture to which aspects of embodiments of the present application may be applied;
FIG. 3 schematically shows an exemplary system architecture according to an embodiment of the present application;
FIG. 4 schematically shows a flow chart of an image segmentation method according to an embodiment of the present application;
FIG. 5 schematically illustrates multi-scale down-sampling of an image to be segmented according to an embodiment of the application;
FIG. 6 schematically illustrates a flow diagram of an upsampling attention mechanism generating a derivative feature map according to one embodiment of the present application;
FIG. 7 schematically illustrates a structural diagram of a feature map corresponding to each feature level according to an embodiment of the present application;
FIG. 8 schematically illustrates a flow diagram for generating a hybrid signature graph according to an embodiment of the present application;
FIG. 9 schematically illustrates a flow diagram for updating an ith feature map of a first feature hierarchy according to one embodiment of the present application;
FIG. 10 schematically illustrates a diagram for updating a feature map based on a multi-scale upsampling mechanism according to an embodiment of the present application;
FIG. 11 schematically illustrates a flow diagram for generating a hybrid signature graph according to an embodiment of the present application;
FIG. 12 schematically illustrates a schematic diagram of upsampling a high-level feature map according to one embodiment of the present application;
FIG. 13 schematically illustrates a schematic diagram of a series of feature maps according to one embodiment of the present application;
FIG. 14 schematically illustrates a diagram of dense feature fusion, according to an embodiment of the present application;
FIG. 15 schematically shows a flow diagram of a method of training an image segmentation model according to an embodiment of the present application;
FIG. 16 schematically shows a segmentation network model according to an embodiment of the present application;
FIG. 17 schematically shows a block diagram of an image segmentation apparatus according to an embodiment of the present application;
FIG. 18 schematically shows a block diagram of a training apparatus for an image segmentation model according to an embodiment of the present application;
FIG. 19 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the application. One skilled in the relevant art will recognize, however, that the subject matter of the present application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations, or operations have not been shown or described in detail to avoid obscuring aspects of the application.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. I.e. these functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor means and/or microcontroller means.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 1, the system architecture may include a terminal device (e.g., one or more of a smartphone 101, a tablet computer 102, and a portable computer 103 shown in fig. 1, but may also be a desktop computer, etc.), a network 104, and a server 105. The network 104 serves as a medium for providing communication links between terminal devices and the server 105. Network 104 may include various connection types, such as wired communication links, wireless communication links, and so forth.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. For example, server 105 may be a server cluster comprised of multiple servers, or the like.
A user may use a terminal device to interact with the server 105 over the network 104, for example to receive or send messages. The server 105 may be a server that provides various services. For example, a user uploads an image to be segmented to the server 105 using the terminal device 103 (or terminal device 101 or 102). The server 105 performs multi-scale down-sampling on the image to be segmented to obtain a plurality of initial feature maps, each corresponding to a feature level; based on the initial feature maps of each pair of adjacent feature levels, it sequentially generates derived feature maps of the lower of the two levels through an up-sampling attention mechanism; it then fuses the initial feature map and the derived feature maps of the lowest feature level to obtain the feature map of the image to be segmented, and segments the image accordingly. Because the feature maps of all feature levels are sampled and fused, the semantic information of the image to be segmented is extracted more effectively and the accuracy of image segmentation is improved.
Fig. 2 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 2, the system architecture may include a server 201 and display devices, wherein the display devices may include, but are not limited to, a display 202, a printer 203, and the like.
In the system architecture shown in fig. 2, the image to be segmented is stored in the server 201, the server 201 may perform a single segmentation process on one image to be segmented to obtain a segmented image, and may perform a batch process on a plurality of images to be segmented stored therein to obtain a segmentation result of a batch of images composed of the plurality of images to be segmented.
For example, in the medical field, in order to determine a lesion of a patient, a plurality of image pictures are required to be taken continuously, so as to combine the segmentation results of the plurality of image pictures to obtain a more accurate lesion position.
Specifically, based on the system architecture in this embodiment, when processing an image to be segmented, the server 201 performs multi-scale down-sampling on the image to obtain a plurality of initial feature maps, each corresponding to a feature level; based on the initial feature maps of each pair of adjacent feature levels, it sequentially generates derived feature maps of the lower of the two levels through an up-sampling attention mechanism; it then fuses the initial feature map and the derived feature maps of the lowest feature level to obtain the feature map of the image to be segmented, and segments the image accordingly. After the server 201 obtains the segmentation result, it sends the result to the display 202 for display, or to the printer 203 for printing, for the user to view. By sampling and fusing the feature maps of all feature levels, the semantic information of the image to be segmented is extracted more effectively and the accuracy of image segmentation is improved; sending the final segmentation result to the display device also improves processing efficiency and makes the result easier to inspect.
It should be noted that the image segmentation method provided in the embodiment of the present application is generally executed by the server 201, and accordingly, the image segmentation apparatus is generally disposed in the server 201. However, in other embodiments of the present application, the terminal device may also have a similar function as the server, so as to execute the scheme of image segmentation provided by the embodiments of the present application.
Fig. 3 shows a schematic diagram of an exemplary system architecture to which the technical solution of the embodiments of the present application can be applied.
As shown in fig. 3, the system architecture may include an image capturing device 301, a server 302, and a terminal device 303, where the terminal device 303 may include, but is not limited to, one or more of a smart phone, a tablet computer and a portable computer, and of course a desktop computer and the like. In the system architecture of this embodiment, the image to be segmented is captured by the image capturing device 301 and sent to the terminal device 303, or the server 302 sends an image to be segmented stored on it to the terminal device 303. After receiving the image to be segmented, the terminal device 303 performs multi-scale down-sampling on it to obtain a plurality of initial feature maps, each corresponding to a feature level; based on the initial feature maps of each pair of adjacent feature levels, it sequentially generates derived feature maps of the lower of the two levels through an up-sampling attention mechanism; it fuses the initial feature map and the derived feature maps of the lowest feature level to obtain the feature map of the image to be segmented, segments the image accordingly, and finally displays the resulting segmented image directly on the display interface for the user to view.
The implementation details of the technical solution of the embodiment of the present application are set forth in detail below:
FIG. 4 illustrates a flow diagram of an image segmentation method according to an embodiment of the present application, which may be performed by a server, which may be the server shown in FIG. 1 or FIG. 2; the image segmentation method may also be performed by a terminal device, which may be the terminal device in fig. 3. Referring to fig. 4, the image segmentation method at least includes steps S410 to S440, which are described in detail as follows:
in step S410, a multi-scale down-sampling process is performed on the image to be segmented, so as to obtain a plurality of initial feature maps, where each initial feature map corresponds to a feature level.
In an embodiment of the application, after the image to be segmented is obtained, downsampling processing is performed on the image to be segmented first to obtain a feature map of the image to be segmented. The down-sampling processing in this embodiment is multi-scale down-sampling processing performed on an image to be segmented to obtain initial feature maps with different feature depths, so that each initial feature map corresponds to one feature level, and multi-level and more comprehensive semantic information is extracted from the image to be segmented based on the feature maps of multiple levels.
It should be noted that "a plurality" in this embodiment means at least two. In this embodiment, at least two initial feature maps are obtained by performing down-sampling at least twice at different scales, and the number of initial feature maps equals the number of feature levels. Each initial feature map is the first feature map of its corresponding feature level and is used to derive the remaining feature maps of that level.
Specifically, in this embodiment, the multi-scale down-sampling may also be performed by max-pooling the image to be segmented, convolving the pooled result, and applying activation processing and random deactivation (dropout) processing to the convolution output, yielding down-sampling results of different scales.
For example, during multi-scale down-sampling, an image to be segmented of size M × N is down-sampled by different factors to obtain results of different scales; down-sampling by a factor of s yields an image of resolution (M/s) × (N/s).
Referring to fig. 5, fig. 5 is a schematic diagram of the multi-scale down-sampling of the image to be segmented in this embodiment, where P (501) denotes the image to be segmented. Down-sampling P (501) once yields the initial feature map $P_1$; further multi-scale down-sampling of P (501) yields the initial feature maps $P_2$ (502), $P_i$ (503), ..., $P_n$ (504). The initial feature maps $P_1, P_2, \ldots, P_i, \ldots, P_n$ each correspond to a different feature level and represent the first feature map of that level.
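As an illustration of this stage, here is a minimal PyTorch sketch of a multi-scale down-sampling encoder producing one initial feature map $P_d$ per level (the channel widths, the pooling factor of 2, and the dropout rate are illustrative assumptions, not taken from the patent):

```python
import torch
import torch.nn as nn

class MultiScaleDownsampler(nn.Module):
    """One stage per feature level: max-pooling, convolution, activation
    and random deactivation (dropout), as described above."""
    def __init__(self, in_channels=1, widths=(16, 32, 64, 128, 256), p_drop=0.1):
        super().__init__()
        stages, prev = [], in_channels
        for w in widths:
            stages.append(nn.Sequential(
                nn.MaxPool2d(2),                   # down-sample by a factor of 2
                nn.Conv2d(prev, w, 3, padding=1),  # convolve the pooled result
                nn.ReLU(inplace=True),             # activation processing
                nn.Dropout2d(p_drop),              # random deactivation
            ))
            prev = w
        self.stages = nn.ModuleList(stages)

    def forward(self, x):
        feats = []                 # initial feature maps P_1 ... P_n
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats

# An M x N input yields maps of (M/2) x (N/2), (M/4) x (N/4), and so on:
maps = MultiScaleDownsampler()(torch.randn(1, 1, 256, 256))
print([tuple(m.shape) for m in maps])
```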
In an embodiment of the present application, a process of performing multi-scale down-sampling on an image to be segmented to obtain a plurality of initial feature maps, where each initial feature map corresponds to a feature level specifically includes the following steps:
preprocessing the image to be segmented to obtain a preprocessed image;
extracting target features corresponding to preset color channels in the preprocessed image;
and carrying out multi-scale down-sampling processing on the target features to obtain an initial feature map corresponding to each feature level.
Specifically, ways of preprocessing the image to be segmented in this embodiment include, but are not limited to: histogram equalization, image denoising, image enhancement, image sharpening, and the like, which are not detailed here. After the image to be segmented is preprocessed to obtain a preprocessed image, the features corresponding to the preset color channel in the preprocessed image are extracted as target features, and multi-scale down-sampling is performed based on the target features to obtain the initial feature map corresponding to each feature level.
In one embodiment of the present application, the preset color channel includes, but is not limited to, the Green, Red or Blue channel of a Red-Green-Blue (RGB) image. The purpose of using a preset channel in this embodiment is, on the one hand, to reduce the amount of computation in image processing, and on the other hand, to provide better contrast and information content, so that a more accurate segmentation result is finally obtained.
Illustratively, the preprocessed image is obtained by histogram equalization of the image to be segmented; the green channel is then extracted from the preprocessed image, and the image to be segmented is processed based on its green channel.
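A sketch of this preprocessing path, assuming an OpenCV/NumPy pipeline and an 8-bit BGR input (the specific function choices are illustrative):

```python
import cv2
import numpy as np

def green_channel_target(image_bgr: np.ndarray) -> np.ndarray:
    """Histogram-equalize and extract the green-channel target feature
    from an 8-bit BGR image (equalizing the green channel directly is,
    for this channel, equivalent to equalizing before extraction)."""
    green = image_bgr[:, :, 1]                    # OpenCV channel order is B, G, R
    equalized = cv2.equalizeHist(green)           # histogram equalization
    return equalized.astype(np.float32) / 255.0   # normalized target feature
```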
In step S420, based on the initial feature maps of two adjacent feature levels, a derived feature map of a lower feature level of the two feature levels is sequentially generated through an upsampling attention mechanism.
In one embodiment of the application, each initial feature map corresponds to a feature level, the initial feature map in each feature level serves as a first feature map in the feature level, and each feature level further comprises a derived feature map generated based on the initial feature maps in addition to the initial feature maps, so that comprehensive and accurate features are extracted based on the initial feature maps and the derived feature maps in all feature levels.
In the embodiment, when the derived feature maps except the initial feature map in the feature levels are generated, based on the initial feature maps in two adjacent feature levels, the derived feature maps of lower feature levels in the two feature levels are sequentially generated through an up-sampling attention mechanism, so as to obtain the derived feature maps except the initial feature map corresponding to each feature level.
In an embodiment of the present application, as shown in fig. 6, a process of sequentially generating a derived feature map of a lower feature level of two adjacent feature levels through an upsampling attention mechanism based on initial feature maps of the two feature levels specifically includes the following steps S610 to S620, which are described in detail as follows:
in step S610, for an adjacent first feature level and a second feature level higher than the first feature level, a mixed feature map of an ith feature map of the first feature level and an ith feature map of the second feature level is generated based on an up-sampling attention mechanism, where i is a natural number greater than 1.
In one embodiment of the present application, among the feature levels corresponding to the plurality of initial feature maps, for an adjacent first feature level and a second feature level higher than the first feature level, a mixed feature map of the $i$-th feature map $F_i^d$ of the first feature level and the $i$-th feature map $F_i^{d+1}$ of the second feature level is generated based on an up-sampling attention mechanism. Here $F_i^d$ denotes the $i$-th feature map of feature level $d$ ($F_1^d$ being the initial feature map of that level); $d$ represents the layer number of the feature level, and the larger the value of $d$, the higher the level; $i$ is a natural number greater than 1 indexing the feature maps within each feature level.
Illustratively, as shown in FIG. 7, the mixed feature map of the 2nd feature map $F_2^1$ of the first feature level and the 2nd feature map $F_2^2$ of the second feature level is generated based on the up-sampling attention mechanism.
Specifically, for the initial feature map of the first feature level and the initial feature map of the second feature level, a mixed feature map of the initial feature map of the first feature level and the initial feature map of the second feature level is generated based on an up-sampling attention mechanism, and the mixed feature map and the initial feature map of the first feature level are subjected to fusion processing to obtain a second feature map of the first feature level.
As shown in fig. 7, fig. 7 is a schematic structural diagram of the feature maps corresponding to each feature level in this embodiment. In this embodiment, the initial feature map of the higher of two adjacent feature levels corresponds to the second feature map of the lower level, the subsequent feature maps follow in turn, and each feature level holds one feature map fewer than the level below it. In FIG. 7, the initial feature maps $F_1^1, F_1^2, \ldots, F_1^5$ each correspond to their own feature level, forming the multi-scale hierarchical structure of feature maps.
Illustratively, the feature level of the first layer includes the feature maps $F_1^1, F_2^1, F_3^1, F_4^1, F_5^1$; the feature level of the second layer includes the feature maps $F_1^2, F_2^2, F_3^2, F_4^2$; and so on, until the feature level of the fifth, highest layer includes only $F_1^5$, i.e., the initial feature map corresponding to the feature level of the fifth layer.
Note that FIG. 7 shows 5 feature levels, with 5 feature maps in the first feature level, merely as an example of this embodiment. In other embodiments of the present application, neither the number of feature levels nor the number of feature maps per feature level is limited.
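The bookkeeping behind FIG. 7 can be stated in a short snippet (purely illustrative) that reproduces the feature-map counts per level:

```python
# Feature-map counts per level in the FIG. 7 example (5 feature levels):
n_levels = 5
counts = {d: n_levels - d + 1 for d in range(1, n_levels + 1)}
print(counts)  # {1: 5, 2: 4, 3: 3, 4: 2, 5: 1} -- one fewer map per higher level
```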
In an embodiment of the present application, as shown in fig. 8, a process of generating a mixed feature map of an ith feature map of the first feature level and an ith feature map of the second feature level based on an upsampling attention mechanism for an adjacent first feature level and a second feature level higher than the first feature level specifically includes the following steps S810 to S820, which are described in detail as follows:
in step S810, the ith feature map of the first feature level is updated based on the ith feature map of the second feature level, so as to obtain an updated feature map of the ith feature map of the first feature level.
In an embodiment of the application, in the process of obtaining the mixed feature map from the feature maps of the first and second feature levels, the ith feature map of the first feature level is updated based on the ith feature map of the second feature level to obtain the updated feature map of the ith feature map of the first feature level. Based on the multi-scale hierarchical structure in fig. 7, the high-level features, i.e., the features of the second feature level, guide the low-level features to learn and update more effectively.
Specifically, the process of updating the ith feature map of the first feature level based on the ith feature map of the second feature level includes steps S910 to S920, which are described in detail as follows:
in step S910, a linear mapping process is performed on the ith feature map of the second feature level to obtain a vector operator.
Referring to fig. 10, fig. 10 is a schematic diagram illustrating an updating feature map based on a multi-scale upsampling mechanism according to an embodiment of the present application.
In one embodiment of the present application, assume the $i$-th feature map of the second feature level is $F_i^{d+1} \in \mathbb{R}^{C_2 \times H \times W}$ and the $i$-th feature map of the first feature level is $F_i^d \in \mathbb{R}^{C_1 \times H' \times W'}$, where $H$ and $W$ denote the height and width of a feature map, $C_1$ and $C_2$ the numbers of channels, $\mathbb{R}$ the real feature space, and $\odot$ the product of corresponding feature element values in the feature maps.

In order to exploit the interaction of feature information between different levels as much as possible, this embodiment describes the spatial dependency between the high-level features and the first feature map with an attention operator. Linear mapping is applied to the $i$-th feature map of the second feature level based on the attention mechanism, and the vector operator $q \in \mathbb{R}^{C_1}$ is obtained by the following formula:

$$q = W_q \left( \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} F^{d+1}(i, j) \right),$$

where $W_q$ denotes the linear mapping, $H$ and $W$ denote the height and width of the $i$-th feature map of the second feature level, i.e., the lower feature map in FIG. 10, and $i, j$ represent the coordinates of the pixel points.
Besides the way of obtaining the vector operator in step S910, linear mapping followed by non-linear activation may be applied to the ith feature map of the second feature level, i.e., the operator vector model is refined by a non-linear activation function to obtain the vector operator. For example, in this embodiment a Linear rectification function (ReLU) may be applied to the ith feature map of the second feature level to overcome vanishing gradients and refine the operator vector model, yielding the vector operator.
In step S920, updating features in the ith feature map of the first feature level based on the vector operator to obtain the updated feature map.
After the vector operator is obtained, it is used as a weight operator for updating the $i$-th feature map of the first feature level, guiding the update of the low-level features to obtain the updated feature map $\tilde{F}_i^d$, with the specific formula:

$$\tilde{F}_i^d = q \odot F_i^d,$$

where $\odot$ denotes element-wise multiplication, i.e., multiplying the feature element values in the feature map. In the concrete computation, the vector operator is broadcast along the spatial dimensions to obtain the updated feature map $\tilde{F}_i^d$ of the refined low-level features.
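Read together, the two formulas above describe a channel-attention update of the low-level map. A minimal PyTorch sketch under this reading (the module name, the use of nn.Linear for the linear mapping $W_q$, and global average pooling over $(i, j)$ are our assumptions):

```python
import torch
import torch.nn as nn

class VectorOperatorUpdate(nn.Module):
    """Sketch of the update step: derive the vector operator q from the
    i-th map of the second (higher) feature level, then broadcast-multiply
    it onto the i-th map of the first (lower) feature level."""
    def __init__(self, c_high: int, c_low: int):
        super().__init__()
        self.linear = nn.Linear(c_high, c_low)  # linear mapping (our W_q)
        self.act = nn.ReLU(inplace=True)        # optional non-linear activation

    def forward(self, f_high: torch.Tensor, f_low: torch.Tensor) -> torch.Tensor:
        # average over the spatial coordinates (i, j) of the high-level map
        pooled = f_high.mean(dim=(2, 3))         # shape (B, C_high)
        q = self.act(self.linear(pooled))        # vector operator, (B, C_low)
        # broadcast q along the spatial dimensions of the low-level map
        return f_low * q[:, :, None, None]       # updated feature map

# usage: update = VectorOperatorUpdate(c_high=128, c_low=64)
#        f_tilde = update(torch.randn(1, 128, 16, 16), torch.randn(1, 64, 32, 32))
```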
It should be noted that, because the feature maps of the high and low feature levels carry different feature information, and in order to use the feature information of the image to be segmented more comprehensively during segmentation, the two kinds of features are combined so as to restore the spatial information of the image to be segmented as much as possible and let the segmentation network model learn the features better.
In step S820, the mixed feature map is generated according to the updated feature map and the ith feature map of the second feature level.
Referring again to FIG. 10, after the updated feature map $\tilde{F}_i^d$ is obtained, the $i$-th feature map of the second feature level is convolved and up-sampled, and the resulting features are fused with the updated feature map to obtain the mixed feature map.
In an embodiment of the application, as shown in fig. 11, the process of generating the mixed feature map according to the updated feature map and the ith feature map of the second feature level specifically includes the following steps S1110 to S1120, which are described in detail as follows:
in step S1110, the ith feature map of the second feature level is upsampled to obtain a high-level sampling feature, or the ith feature map of the second feature level is subjected to bilinear difference processing to obtain a high-level sampling feature, or the ith feature map of the second feature level is subjected to upsampling and bilinear difference processing to obtain the high-level sampling feature.
In one embodiment of the present application, the two necessary elements for obtaining the mixed feature map are the updated feature map from the lower level and the corresponding sampling feature from the higher level. When obtaining the high-level sampling feature, this embodiment may use either up-sampling or bilinear interpolation alone, or a combination of the two.
Referring to fig. 12, fig. 12 is a schematic diagram of up-sampling a high-level feature map according to an embodiment of the present application. When the ith feature map of the second feature level is up-sampled, the feature value of each feature in the map (which may be a pixel value) is determined, the values are convolved, and each value of the convolution result is mapped into the corresponding region of the up-sampled output feature map, with the whole region filled with that same value.
For example, if the feature values of the convolution result are "7, 6, 5, 2" as in the left image of FIG. 12, each value is mapped, according to its position in the feature map, into the corresponding region of the up-sampled output and fills that region uniformly, yielding the right image of FIG. 12.
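The fill-and-copy behaviour in FIG. 12 coincides with nearest-neighbour up-sampling; the toy example below reproduces the "7, 6, 5, 2" case (a sketch, not necessarily the patent's exact operator):

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[7., 6.], [5., 2.]]).view(1, 1, 2, 2)  # convolution result
y = F.interpolate(x, scale_factor=2, mode="nearest")     # fill-and-copy up-sampling
print(y.squeeze())
# tensor([[7., 7., 6., 6.],
#         [7., 7., 6., 6.],
#         [5., 5., 2., 2.],
#         [5., 5., 2., 2.]])
```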
In one embodiment of the present application, the bilinear interpolation processing on the ith feature map of the second feature level is an extension of linear interpolation on a two-dimensional rectangular grid, that is, linear interpolation is performed once in each of two directions.
In an embodiment of the present application, if up-sampling and bilinear interpolation are combined, some high-level feature maps may be up-sampled while others are bilinearly interpolated; this is not limited here.
It should be noted that the up-sampling attention mechanism and bilinear interpolation serve the same purpose in this embodiment, but experiments show that introducing the attention mechanism improves the recall rate to some extent. A natural number k smaller than the number of feature maps is therefore introduced to adjust how many processing steps use attention-based up-sampling rather than bilinear interpolation. In this embodiment it is not necessary to replace all bilinear interpolation with attention-based up-sampling; by introducing k, the computational cost of image processing can be tuned and the processing efficiency improved.
In step S1120, the updated feature map and the high-level sampling feature are fused to obtain the mixed feature map.
After obtaining the updated feature map corresponding to the ith feature map of the first feature level and the high-level sampling feature corresponding to the ith feature map of the second feature level, carrying out fusion processing on the updated feature map and the high-level sampling feature to obtain a mixed feature map.
Referring again to FIG. 10, after the updated feature map $\tilde{F}_i^d$ and the high-level sampling feature are obtained, the feature element values of the updated feature map are added to the corresponding feature element values of the high-level sampling feature to obtain the feature element values of the mixed feature map, and the mixed feature map is generated from these values.
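Putting steps S1110 and S1120 together, a hedged sketch of how the mixed feature map could be formed (the interpolation mode and the 1×1 convolution are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mixed_feature_map(updated_low: torch.Tensor,
                      f_high: torch.Tensor,
                      conv1x1: nn.Conv2d) -> torch.Tensor:
    """Convolve and up-sample the high-level map, then add it element-wise
    to the updated low-level map to obtain the mixed feature map."""
    sampled = F.interpolate(conv1x1(f_high), size=updated_low.shape[2:],
                            mode="bilinear", align_corners=False)
    return updated_low + sampled  # feature element values are added

# conv1x1 would be e.g. nn.Conv2d(c_high, c_low, kernel_size=1)
```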
In step S620, all feature maps between the initial feature map and the ith feature map of the first feature level are fused with the mixed feature map to obtain the (i+1)-th feature map corresponding to the first feature level.
In an embodiment of the application, after the feature maps corresponding to the feature levels are obtained as above, all feature maps between the initial feature map and the ith feature map of the first feature level are fused with the mixed feature map to obtain the (i+1)-th feature map of the first feature level. That is, within the feature maps of one feature level, each feature map is obtained by fusing the corresponding mixed feature map with all feature maps preceding it.
In an embodiment of the present application, the process of obtaining the (i+1)-th feature map corresponding to the first feature level, by fusing all feature maps between the initial feature map and the i-th feature map of the first feature level with the mixed feature map, specifically includes the following steps S6201 to S6202, described in detail as follows:
in step S6201, all feature maps between the initial feature map of the first feature level and the ith feature map are concatenated based on a residual error network to obtain a concatenated feature map.
In one embodiment of the application, accuracy can be improved by increasing the effective depth, and the internal residual blocks use skip connections, which alleviates the vanishing-gradient problem caused by increasing depth in a deep neural network. Specifically, all feature maps between the initial feature map and the $i$-th feature map of the first feature level are concatenated based on the residual network to obtain the series feature map, i.e., each feature map serves as an input to the subsequent feature maps. The series feature map of this embodiment, denoted here $S_n^d$, is formulated as

$$S_n^d = \left[ F_1^d, F_2^d, \ldots, F_{n-1}^d \right],$$

where $d$ denotes the number of the feature level to which the feature maps belong, $n$ denotes the number of each feature map counted from the initial feature map within one feature level, i.e., the column index of the feature map, and $[\cdot]$ denotes concatenation.
As shown in fig. 13, fig. 13 is a schematic diagram of feature-map concatenation provided in the embodiment of the present application, where $d = 1$, i.e., the first feature level. Illustratively, when $n = 2$, $F_1^1$ alone is taken as the series feature map; when $n > 2$, the maps $F_1^1, \ldots, F_{n-1}^1$ are fused in series to obtain the series feature map. For example, when $n = 3$, $F_1^1$ and $F_2^1$ are fused in series to obtain the series feature map 1301; when $n = 5$, $F_1^1$, $F_2^1$, $F_3^1$ and $F_4^1$ are fused in series to obtain the series feature map 1302.
In step S6202, convolution fusion processing is performed on the series feature map and the mixed feature map to obtain the (i+1)-th feature map corresponding to the first feature level.
In an embodiment of the application, after the series feature map is obtained, convolution fusion processing is performed between the series feature map and the mixed feature map to obtain the (i+1)-th feature map corresponding to the first feature level.
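A sketch of steps S6201 to S6202 under the reading above (the fusing convolution conv_fuse and its kernel size are assumptions):

```python
import torch
import torch.nn as nn

def dense_fuse(level_maps: list, mixed: torch.Tensor,
               conv_fuse: nn.Conv2d) -> torch.Tensor:
    """Concatenate all earlier maps of the level (residual-style skip
    connections), then fuse with the mixed map by convolution to obtain
    the (i+1)-th feature map of the level."""
    series = torch.cat(level_maps, dim=1)               # series feature map
    return conv_fuse(torch.cat([series, mixed], dim=1))

# conv_fuse would be e.g. nn.Conv2d(total_in_channels, out_channels, 3, padding=1)
```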
As shown in fig. 14, fig. 14 is a schematic diagram of dense feature fusion provided by the embodiment of the present application. The $d$-th feature level includes the feature maps $F_1^d$, $F_2^d$, $F_3^d$ and $F_4^d$; the arrowed connecting lines between feature maps represent fusion processing of the feature maps, and the arrows below the feature maps indicate that the attention-based up-sampling mechanism yields the mixed features.

Illustratively, $F_3^1$ is obtained as follows: first, according to the 2nd feature map $F_2^1$ of the first feature level and the 2nd feature map $F_2^2$ of the second feature level, a mixed feature between the two is obtained; the mixed feature is then fused with $F_1^1$ and $F_2^1$ to obtain $F_3^1$. In the same manner, $F_4^1$ and $F_5^1$ can be obtained.
in step S430, fusing the initial feature map and the derived feature map in the lowest feature level of all feature levels to obtain the feature map of the image to be segmented.
In one embodiment of the present application, after the feature maps of the higher feature levels are processed by attention-based up-sampling and fused with the feature maps of the same layer, the derived feature maps of the lowest feature level are obtained. The lowest feature level includes the initial feature map and at least one derived feature map besides it. The feature map of the image to be segmented is obtained by fusing the initial feature map and the derived feature maps of the lowest feature level among all feature levels.
Illustratively, the top layer in fig. 7 is the lowest feature level; the feature maps $F_1^1, F_2^1, F_3^1, F_4^1$ and $F_5^1$ of this level are fused to obtain the feature map of the image to be segmented.
In step S440, the image to be segmented is segmented according to the feature map of the image to be segmented.
After the feature map of the image to be segmented is obtained, the feature map contains a number of feature values. According to a preset feature threshold and the feature values in the feature map, the regions whose feature values are above (or below) the threshold are identified, and the image to be segmented is segmented according to these regions to obtain the segmentation result.
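The decision rule reduces to a comparison against the preset threshold; a minimal sketch (the threshold value and comparison direction are illustrative):

```python
import numpy as np

def segment(feature_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Mark pixels whose feature value exceeds the preset threshold as the
    target region (the opposite comparison works analogously)."""
    return (feature_map > threshold).astype(np.uint8)
```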
In an embodiment of the present application, the image to be segmented may be a medical image, which is segmented according to its feature map and the target organ to be segmented, yielding a segmented image of the target organ. For example, for a Computed Tomography (CT) image, the CT image is compressed and transmitted to the image segmentation device for organ segmentation; the segmentation result is displayed at the front end, and a corresponding judgment is made on its basis, so that CT image segmentation of human organs can be realized efficiently.
According to this scheme, multi-scale down-sampling is performed on the image to be segmented to obtain a plurality of initial feature maps, each corresponding to one feature level; based on the initial feature maps of two adjacent feature levels, the derived feature maps of the lower of the two levels are generated in turn through the up-sampling attention mechanism; finally, the initial feature map and the derived feature maps of the lowest feature level among all feature levels are fused to obtain the feature map of the image to be segmented, and the image is segmented according to this feature map. In the technical solution of the embodiment of the application, the feature maps of all feature levels are up-sampled based on an attention mechanism, and the up-sampled results are progressively aggregated with the feature maps of the same feature level to obtain fused features, so that the semantic information of the image to be segmented is extracted more effectively and the accuracy of image segmentation is improved.
Fig. 15 illustrates a training method of an image segmentation model according to an embodiment of the present application, which may be performed by a server or a terminal device. Referring to fig. 15, the training method of the image segmentation model at least includes steps S1510 to S1530, which are described in detail as follows:
In step S1510, a sample image is input into the segmentation network model to obtain the segmentation result output by the model. The segmentation network model is constructed in advance; it performs multi-scale down-sampling on the sample image to obtain an initial feature map corresponding to each feature level, generates the derived feature maps of the lower feature level of each pair of adjacent feature levels through an up-sampling attention mechanism, and generates the segmentation result based on the fused feature map of the initial feature map and the derived feature maps of the lowest feature level among all feature levels.
Referring to fig. 16, fig. 16 is a schematic diagram of a segmentation network model according to an embodiment of the present disclosure. In one embodiment of the present application, the input of the segmentation network model is a sample image 1601, and the output is a segmentation result 1602 obtained after segmenting the sample image 1601, where the sample image 1601 and the segmentation result 1602 have the same width and height, such as the H × W shown in the figure.
In one embodiment of the present application, the image segmentation model is constructed based on an up-sampling attention mechanism. In this embodiment, the up-sampling attention mechanism replaces plain interpolation up-sampling and combines spatial and channel attention. In the multi-scale hierarchy formed by the feature levels, high-level features can guide lower-level features to learn and update more efficiently; at the same time, low-level features can guide the up-sampling 1605 of high-level features, so that the semantic information of the image sample is extracted more effectively.
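A minimal PyTorch sketch of such an up-sampling attention block, consistent with the vector-operator description later in this document; the 1x1 linear mapping, the global average pooling and the sigmoid activation are assumptions rather than the patent's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionUpsample(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.linear = nn.Conv2d(channels, channels, kernel_size=1)  # linear mapping

    def forward(self, low, high):
        # low:  i-th map of the lower level,  shape (B, C, 2H, 2W)
        # high: i-th map of the higher level, shape (B, C, H, W)
        vec = self.linear(high).mean(dim=(2, 3), keepdim=True)  # vector operator
        vec = torch.sigmoid(vec)                 # non-linear activation
        updated = low * vec                      # high level guides the low level
        up = F.interpolate(high, size=low.shape[2:], mode='bilinear',
                           align_corners=False)  # high-level sampling feature
        return updated + up                      # mixed feature map
```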
The sample image 1601 is input into the segmentation network model for processing to obtain the segmentation result output by the model. Specifically, the sample image 1601 is first subjected to convolution processing 1603 to obtain a convolved image; the convolved image then undergoes multi-scale down-sampling 1604 to obtain the initial feature maps, represented by the solid squares 1611, 1622, 1633, 1644 and 1655 in fig. 16.
It should be noted that each initial feature map in this embodiment corresponds to a different feature level, and feature maps at different feature levels contain image features of different depths. "Low-level" and "high-level" are relative terms, and in fig. 16 the higher feature levels appear lower in the figure. For example, feature maps at low levels mainly contain features such as lines, points and textures, while feature maps at high levels contain semantic features used for detecting objects.
It should be noted that fig. 16 provided in this embodiment is only one example of a segmentation network model, in which the feature hierarchy has 5 levels; in other embodiments the feature hierarchy may have another number of levels, such as 3 or 9. Different numbers of feature levels trade off the precision of the resulting image against the compute, latency and so on required during calculation, and the number of feature levels obtained by down-sampling is not limited here.
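For illustration, a sketch of the multi-scale down-sampling that produces one initial feature map per feature level; the five levels, the stride-2 convolutions and the channel width are assumptions:

```python
import torch.nn as nn

class MultiScaleEncoder(nn.Module):
    def __init__(self, in_ch: int = 1, width: int = 32, levels: int = 5):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, width, kernel_size=3, padding=1)  # conv 1603
        self.downs = nn.ModuleList(
            nn.Conv2d(width, width, kernel_size=3, stride=2, padding=1)
            for _ in range(levels - 1)
        )

    def forward(self, x):
        feats = [self.stem(x)]             # initial map of the lowest level (1611)
        for down in self.downs:
            feats.append(down(feats[-1]))  # one initial map per deeper level
        return feats                       # e.g. 1611, 1622, 1633, 1644, 1655
```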
After the initial feature maps are obtained, the derived feature maps of the lower feature level of each pair of adjacent feature levels are generated by the up-sampling attention mechanism 1605; the derived feature maps in fig. 16 are the feature maps other than the initial ones, such as 1612, 1613, 1614 and 1623. For details of how the derived feature map of the lower of two adjacent feature levels is generated, please refer to the description of step S420 in the previous embodiment, which is not repeated here.
It should be noted that, in this embodiment, besides generating the derived feature map of the lower of two adjacent feature levels through the up-sampling attention mechanism 1605, the up-sampling may also take another form, for example bilinear interpolation 1606, which is not limited here.
After generating the derived feature map of the lower feature level of the two adjacent feature levels by the up-sampling attention mechanism, the segmentation result 1602 is generated based on the fused feature map of the plurality of feature maps of the lowest feature level of all the feature levels.
Specifically, in this embodiment, the feature maps within one feature level are fused by constructing a dense block 1607 in the segmentation network model: each feature map in the dense block serves as an input to the subsequent feature maps, and finally the feature maps of the lowest feature level are fused to obtain the segmentation result 1602.
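Reusing the AttentionUpsample and DenseFusionStep sketches above, the dense-block wiring could look as follows; this loop structure is an assumption that follows the description that every map feeds all subsequent maps of its level:

```python
def dense_level_forward(low_maps, high_maps, att_up, fusion_steps):
    # low_maps:  [x_low^1] plus the maps appended below
    # high_maps: the already-computed maps of the level above
    # fusion_steps[i-1] is a DenseFusionStep built for i previous maps
    for i, step in enumerate(fusion_steps, start=1):
        mixed = att_up(low_maps[i - 1], high_maps[i - 1])  # mixed feature map
        low_maps.append(step(low_maps[:i], mixed))         # x^{i+1} from x^1..x^i
    return low_maps
```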
In step S1520, loss information of the segmentation network model is determined according to the segmentation result and the segmentation label corresponding to the sample image.
In an embodiment of the present application, the loss information may optionally be determined using binary cross entropy. Referring to fig. 16, after the derived feature maps of the first feature hierarchy (1612, 1613, 1614 and 1615, i.e. all feature maps of that level other than the initial feature map 1611) are obtained, the loss corresponding to each derived feature map is calculated from its feature information, and the loss information of the segmentation network model is obtained by summing them (1609, Σ loss).
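A sketch of this deeply supervised loss; projecting each derived map to a single-channel logit map (an assumed 1x1 head) and resizing the label to each map are assumptions:

```python
import torch.nn.functional as F

def deep_supervision_loss(pred_maps, label):
    # pred_maps: single-channel logit maps derived from 1612-1615
    # label:     (B, 1, H, W) binary segmentation label
    total = 0.0
    for p in pred_maps:
        target = F.interpolate(label, size=p.shape[2:], mode='nearest')
        total = total + F.binary_cross_entropy_with_logits(p, target)
    return total  # the summed Σ loss (1609)
```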
In step S1530, the parameters of the segmentation network model are updated according to the loss information, so as to obtain an image segmentation model.
In an embodiment of the present application, after the loss information is computed, the parameters of the segmentation network model are updated according to the loss information to obtain the image segmentation model. The image segmentation model may be used to segment features in an image, yielding a segmented image that contains only the target feature region. For example, the lung CT image 1601 is segmented by the image segmentation model to obtain a segmentation result 1602 containing only the lung regions.
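A minimal training-step sketch for steps S1510 to S1530; the Adam optimiser, the learning rate and the data loader are illustrative assumptions:

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # `model` as above
model.train()
for sample, label in loader:                        # `loader` yields (image, label)
    pred_maps = model(sample)                       # S1510: segmentation outputs
    loss = deep_supervision_loss(pred_maps, label)  # S1520: loss information
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                # S1530: update parameters
```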
According to this scheme, a segmentation network model is constructed; a sample image is input into the segmentation network model to obtain the segmentation result it outputs, where the model performs multi-scale down-sampling on the sample image to obtain an initial feature map corresponding to each feature level, generates the derived feature maps of the lower feature level of each pair of adjacent feature levels through an up-sampling attention mechanism, and generates the segmentation result based on the fused feature map of the initial feature map and the derived feature maps of the lowest feature level among all feature levels; loss information of the segmentation network model is then determined according to the segmentation result and the segmentation label corresponding to the sample image; finally, the parameters of the segmentation network model are updated according to the loss information to obtain the image segmentation model. In the technical solution of the embodiment of the application, the feature maps of all feature levels are up-sampled based on an attention mechanism, and the up-sampled results are progressively aggregated with the feature maps of the same feature level to obtain fused features, so that the semantic information of the sample image is extracted more effectively and the accuracy of image segmentation model training is improved.
The following describes embodiments of the apparatus of the present application, which may be used to perform the image segmentation method and the training method of the image segmentation model in the above embodiments of the present application. For details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the embodiments of the image segmentation method and the training method of the image segmentation model described above.
Fig. 17 shows a block diagram of an image segmentation apparatus according to an embodiment of the present application.
Referring to fig. 17, an image segmentation apparatus 1700 according to an embodiment of the present application includes: a first sampling unit 1710, a second sampling unit 1720, a first fusing unit 1730, and a segmentation unit 1740.
The first sampling unit 1710 is configured to perform multi-scale down-sampling on an image to be segmented to obtain a plurality of initial feature maps, where each initial feature map corresponds to a feature level; a second sampling unit 1720, configured to sequentially generate a derived feature map of a lower feature level of two adjacent feature levels through an upsampling attention mechanism based on the initial feature maps of the two feature levels; a first fusing unit 1730, configured to fuse the initial feature map and the derived feature map in the lowest feature level in all feature levels to obtain a feature map of the image to be segmented; a segmentation unit 1740, configured to perform segmentation processing on the image to be segmented according to the feature map of the image to be segmented.
In one embodiment of the present application, the second sampling unit 1720 includes:
a first mixing unit configured to generate a mixed feature map of an ith feature map of a first feature level and an ith feature map of a second feature level based on an upsampling attention mechanism for an adjacent first feature level and the second feature level higher than the first feature level, where i is a natural number greater than 1;
and the second fusion unit is used for performing fusion processing on all feature maps between the initial feature map of the first feature level and the ith feature map and the mixed feature map, to obtain an (i+1)-th feature map corresponding to the first feature level.
In one embodiment of the present application, the first mixing unit includes:
a first updating unit, configured to update an ith feature map of the first feature level based on an ith feature map of the second feature level, to obtain an updated feature map of the ith feature map of the first feature level;
and the second mixing unit is used for generating the mixed feature map according to the updated feature map and the ith feature map of the second feature level.
In one embodiment of the present application, the second mixing unit includes:
the third sampling unit is used for up-sampling the ith feature map of the second feature level to obtain the high-level sampling feature, or performing bilinear interpolation on the ith feature map of the second feature level to obtain the high-level sampling feature, or performing both up-sampling and bilinear interpolation on the ith feature map of the second feature level to obtain the high-level sampling feature;
and the third mixing unit is used for fusing the updated feature map and the high-level sampling feature to obtain the mixed feature map.
In one embodiment of the present application, the third mixing unit includes:
a fourth mixing unit, configured to add the feature element value corresponding to the updated feature map and the feature element value corresponding to the high-level sampling feature to obtain a feature element value corresponding to the mixed feature map;
and the fifth mixing unit is used for generating the mixed feature map according to the feature element values corresponding to the mixed feature map.
In one embodiment of the present application, the first updating unit includes:
the first vector unit is used for carrying out linear mapping processing on the ith feature map of the second feature level to obtain a vector operator;
and the second updating unit is used for updating the features in the ith feature map of the first feature level based on the vector operator to obtain the updated feature map.
In one embodiment of the present application, the second sampling unit 1720 is configured to: multiplying the vector operator by a characteristic element value contained in the ith characteristic diagram of the first characteristic level to obtain an updated characteristic element value; and generating the updated feature map according to the updated feature element value.
In one embodiment of the present application, the first vector unit is configured to: and carrying out nonlinear activation processing on the result of the linear mapping processing to obtain the vector operator.
In one embodiment of the present application, the second fusion unit is configured to: concatenate, based on a residual network, all feature maps between the initial feature map of the first feature level and the ith feature map to obtain a series feature map; and perform convolution fusion processing on the series feature map and the mixed feature map to obtain the (i+1)-th feature map corresponding to the first feature level.
In one embodiment of the present application, the first sampling unit is configured to:
preprocessing the image to be segmented to obtain a preprocessed image;
extracting target features corresponding to preset color channels in the preprocessed image;
and carrying out multi-scale down-sampling processing on the target features to obtain an initial feature map corresponding to each feature level.
In an embodiment of the present application, the image segmentation apparatus 1700 further includes:
an image acquisition unit for acquiring the medical image;
and the image processing unit is used for carrying out segmentation processing on the medical image to obtain an organ segmentation image.
Fig. 18 shows a block diagram of an apparatus for training an image segmentation model according to an embodiment of the present application.
Referring to fig. 18, an apparatus 1800 for training an image segmentation model according to an embodiment of the present application includes: segmentation unit 1810, loss unit 1820, and model unit 1830.
The segmentation unit 1810 is configured to input a sample image into the segmentation network model to obtain the segmentation result output by the model, where the segmentation network model is constructed in advance, performs multi-scale down-sampling on the sample image to obtain an initial feature map corresponding to each feature level, and generates the derived feature maps of the lower feature level of each pair of adjacent feature levels through an up-sampling attention mechanism, so as to generate the segmentation result based on the fused feature map of the initial feature map and the derived feature maps of the lowest feature level among all feature levels; the loss unit 1820 is configured to determine loss information of the segmentation network model according to the segmentation result and the segmentation label corresponding to the sample image; the model unit 1830 is configured to update the parameters of the segmentation network model according to the loss information to obtain the image segmentation model.
FIG. 19 illustrates a schematic structural diagram of a computer system suitable for use in implementing the electronic device of an embodiment of the present application.
It should be noted that the computer system 1900 of the electronic device shown in fig. 19 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in fig. 19, a computer system 1900 includes a Central Processing Unit (CPU) 1901, which can perform various appropriate actions and processes, such as executing the method described in the above embodiments, according to a program stored in a Read-Only Memory (ROM) 1902 or a program loaded from a storage section 1908 into a Random Access Memory (RAM) 1903. The RAM 1903 also stores various programs and data necessary for system operation. The CPU 1901, ROM 1902, and RAM 1903 are connected to one another via a bus 1904. An Input/Output (I/O) interface 1905 is also connected to the bus 1904.
The following components are connected to the I/O interface 1905: an input section 1906 including a keyboard, a mouse, and the like; an output section 1907 including a Cathode Ray Tube (CRT) or Liquid Crystal Display (LCD) display, a speaker, and the like; a storage section 1908 including a hard disk and the like; and a communication section 1909 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication section 1909 performs communication processing via a network such as the Internet. A driver 1910 is also connected to the I/O interface 1905 as needed. A removable medium 1911, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1910 as necessary, so that a computer program read out from it can be installed into the storage section 1908 as needed.
In particular, according to embodiments of the application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flowchart. In such embodiments, the computer program may be downloaded and installed from a network via the communication section 1909 and/or installed from the removable medium 1911. When the computer program is executed by the Central Processing Unit (CPU) 1901, the various functions defined in the system of the present application are executed.
It should be noted that the computer readable medium shown in the embodiments of the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM), a flash Memory, an optical fiber, a portable Compact Disc Read-Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with a computer program embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. The computer program embodied on the computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. Each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software, or may be implemented by hardware, and the described units may also be disposed in a processor. Wherein the names of the elements do not in some way constitute a limitation on the elements themselves.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the method described in the above embodiments.
It should be noted that although in the above detailed description several modules or units of the device for action execution are mentioned, such a division is not mandatory. Indeed, the features and functionality of two or more modules or units described above may be embodied in one module or unit, according to embodiments of the application. Conversely, the features and functions of one module or unit described above may be further divided into embodiments by a plurality of modules or units.
Through the above description of the embodiments, those skilled in the art will readily understand that the exemplary embodiments described herein may be implemented by software, or by software in combination with necessary hardware. Therefore, the technical solution according to the embodiments of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which can be a CD-ROM, a usb disk, a removable hard disk, etc.) or on a network, and includes several instructions to enable a computing device (which can be a personal computer, a server, a touch terminal, or a network device, etc.) to execute the method according to the embodiments of the present application.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the embodiments disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (15)

1. An image segmentation method, comprising:
carrying out multi-scale down-sampling processing on an image to be segmented to obtain a plurality of initial feature maps, wherein each initial feature map corresponds to a feature level;
sequentially generating a derivative feature map of a lower feature level in two adjacent feature levels through an up-sampling attention mechanism based on the initial feature maps of the two feature levels;
fusing the initial feature map and the derived feature map in the lowest feature level in all the feature levels to obtain the feature map of the image to be segmented;
and carrying out segmentation processing on the image to be segmented according to the characteristic diagram of the image to be segmented.
2. The method of claim 1, wherein sequentially generating a derived feature map of a lower feature level of two adjacent feature levels by an upsampling attention mechanism based on initial feature maps of the two feature levels comprises:
generating a mixed feature map of an ith feature map of the first feature level and an ith feature map of the second feature level based on an up-sampling attention mechanism for an adjacent first feature level and a second feature level higher than the first feature level, wherein i is a natural number greater than 1;
and performing fusion processing on all feature maps between the initial feature map and the ith feature map of the first feature level and the mixed feature map to obtain an (i+1)-th feature map corresponding to the first feature level.
3. The method of claim 2, wherein generating a blended feature map of the ith feature map of the first feature hierarchy and the ith feature map of the second feature hierarchy based on an upsampling attention mechanism comprises:
updating the ith feature map of the first feature level based on the ith feature map of the second feature level to obtain an updated feature map of the ith feature map of the first feature level;
and generating the mixed feature map according to the updated feature map and the ith feature map of the second feature hierarchy.
4. The method of claim 3, wherein generating the hybrid feature map from the updated feature map and an ith feature map of the second feature hierarchy comprises:
up-sampling the ith feature map of the second feature level to obtain high-level sampling features, or performing bilinear interpolation processing on the ith feature map of the second feature level to obtain high-level sampling features, or up-sampling and performing bilinear interpolation processing on the ith feature map of the second feature level to obtain the high-level sampling features;
and fusing the updated feature map and the high-level sampling feature to obtain the mixed feature map.
5. The method of claim 4, wherein fusing the updated feature map and the higher-level sampling features to obtain the mixed feature map comprises:
adding the characteristic element value corresponding to the updated characteristic diagram and the characteristic element value corresponding to the high-level sampling characteristic to obtain a characteristic element value corresponding to the mixed characteristic diagram;
and generating the mixed feature map according to the feature element values corresponding to the mixed feature map.
6. The method according to claim 3, wherein updating the ith feature map of the first feature level based on the ith feature map of the second feature level to obtain an updated feature map of the ith feature map of the first feature level comprises:
performing linear mapping processing on the ith feature map of the second feature level to obtain a vector operator;
and updating the features in the ith feature map of the first feature level based on the vector operator to obtain the updated feature map.
7. The method of claim 6, wherein updating the feature in the ith feature map of the first feature level based on the vector operator to obtain the updated feature map comprises:
multiplying the vector operator by a characteristic element value contained in the ith characteristic diagram of the first characteristic level to obtain an updated characteristic element value;
and generating the updated feature map according to the updated feature element value.
8. The method according to claim 6, further comprising, after performing the linear mapping process on the ith feature map of the second feature level:
and carrying out nonlinear activation processing on the result of the linear mapping processing to obtain the vector operator.
9. The method according to claim 2, wherein the step of fusing all feature maps between the initial feature map and the ith feature map of the first feature level and the mixed feature map to obtain an (i+1)-th feature map corresponding to the first feature level comprises:
all feature maps between the initial feature map of the first feature level and the ith feature map are connected in series based on a residual error network to obtain a series feature map;
and carrying out convolution fusion processing on the series feature map and the mixed feature map to obtain the (i+1)-th feature map corresponding to the first feature level.
10. The method of claim 1, wherein performing a multi-scale down-sampling process on the image to be segmented to obtain a plurality of initial feature maps comprises:
preprocessing the image to be segmented to obtain a preprocessed image;
extracting target features corresponding to preset color channels in the preprocessed image;
and carrying out multi-scale down-sampling processing on the target features to obtain an initial feature map corresponding to each feature level.
11. The method according to any one of claims 1-10, wherein the image to be segmented comprises a medical image;
the image to be segmented is segmented according to the feature map of the image to be segmented, and the segmentation comprises the following steps:
and according to the feature map of the image to be segmented and the target organ to be segmented, carrying out segmentation processing on the medical image to obtain a segmented image of the target organ.
12. A training method of an image segmentation model is characterized by comprising the following steps:
inputting a sample image into a segmentation network model to obtain a segmentation result output by the segmentation network model, wherein the segmentation network model is obtained by pre-construction, the segmentation network model performs multi-scale down-sampling processing on the sample image to obtain an initial feature map corresponding to each feature level, a derivative feature map of a lower feature level in two adjacent feature levels is generated through an up-sampling attention mechanism, and the segmentation result is generated based on a fusion feature map between the initial feature map and the derivative feature map of the lowest feature level in all the feature levels;
determining loss information of the segmentation network model according to the segmentation result and a segmentation label corresponding to the sample image;
and updating parameters of the segmentation network model according to the loss information to obtain an image segmentation model.
13. An image segmentation apparatus, comprising:
the device comprises a first sampling unit, a second sampling unit and a third sampling unit, wherein the first sampling unit is used for carrying out multi-scale down-sampling processing on an image to be segmented to obtain a plurality of initial feature maps, and each initial feature map corresponds to a feature level;
the second sampling unit is used for sequentially generating a derivative feature map of a lower feature level in two adjacent feature levels through an up-sampling attention mechanism on the basis of the initial feature maps of the two feature levels;
the first fusion unit is used for fusing the initial feature map and the derived feature map in the lowest feature level in all the feature levels to obtain the feature map of the image to be segmented;
and the segmentation unit is used for carrying out segmentation processing on the image to be segmented according to the characteristic diagram of the image to be segmented.
14. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the image segmentation method as claimed in one of the claims 1 to 11 or the training method of the image segmentation model as claimed in claim 12.
15. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement an image segmentation method as claimed in any one of claims 1 to 11 or a training method of an image segmentation model as claimed in claim 12.
CN201911226263.3A 2019-12-04 2019-12-04 Image segmentation and model training method, device, medium and electronic equipment Pending CN111080655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911226263.3A CN111080655A (en) 2019-12-04 2019-12-04 Image segmentation and model training method, device, medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111080655A true CN111080655A (en) 2020-04-28

Family

ID=70312746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911226263.3A Pending CN111080655A (en) 2019-12-04 2019-12-04 Image segmentation and model training method, device, medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111080655A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3171297A1 (en) * 2015-11-18 2017-05-24 CentraleSupélec Joint boundary detection image segmentation and object recognition using deep learning
US20190279074A1 (en) * 2018-03-06 2019-09-12 Adobe Inc. Semantic Class Localization Digital Environment
CN109948707A (en) * 2019-03-20 2019-06-28 腾讯科技(深圳)有限公司 Model training method, device, terminal and storage medium
CN110120047A (en) * 2019-04-04 2019-08-13 平安科技(深圳)有限公司 Image Segmentation Model training method, image partition method, device, equipment and medium
CN110176012A (en) * 2019-05-28 2019-08-27 腾讯科技(深圳)有限公司 Target Segmentation method, pond method, apparatus and storage medium in image
CN110188765A (en) * 2019-06-05 2019-08-30 京东方科技集团股份有限公司 Image, semantic parted pattern generation method, device, equipment and storage medium
CN110532955A (en) * 2019-08-30 2019-12-03 中国科学院宁波材料技术与工程研究所 Example dividing method and device based on feature attention and son up-sampling

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HUAFENG KUANG et al.: "Multi-modal Multi-layer Fusion Network with Average Binary Center Loss for Face Anti-spoofing", Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 15 October 2019 *
QI WANG et al.: "Beauty Product Image Retrieval Based on Multi-Feature Fusion and Feature Aggregation", MM '18: Proceedings of the 26th ACM International Conference on Multimedia, 15 October 2018 *
XIAOMENG LI et al.: "H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes", IEEE Transactions on Medical Imaging, 11 June 2018 *
XUEYING CHEN et al.: "Feature Fusion Encoder Decoder Network For Automatic Liver Lesion Segmentation", arXiv:1903.11834, 28 March 2019 *
SUN Hui: "Research on mass segmentation in full-field mammography based on fully convolutional neural networks", China Master's Theses Full-text Database, Information Science and Technology, 15 September 2019 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111951280A (en) * 2020-08-10 2020-11-17 中国科学院深圳先进技术研究院 Image segmentation method, device, equipment and storage medium
WO2022032823A1 (en) * 2020-08-10 2022-02-17 中国科学院深圳先进技术研究院 Image segmentation method, apparatus and device, and storage medium
CN111951280B (en) * 2020-08-10 2022-03-15 中国科学院深圳先进技术研究院 Image segmentation method, device, equipment and storage medium
CN112949651A (en) * 2021-01-29 2021-06-11 Oppo广东移动通信有限公司 Feature extraction method and device, storage medium and electronic equipment
CN113066018A (en) * 2021-02-27 2021-07-02 华为技术有限公司 Image enhancement method and related device
CN113255265A (en) * 2021-06-07 2021-08-13 上海国微思尔芯技术股份有限公司 Segmentation and verification method, device, electronic equipment and storage medium
CN113963166A (en) * 2021-10-28 2022-01-21 北京百度网讯科技有限公司 Training method and device of feature extraction model and electronic equipment
CN114495110A (en) * 2022-01-28 2022-05-13 北京百度网讯科技有限公司 Image processing method, generator training method, device and storage medium
CN115082490A (en) * 2022-08-23 2022-09-20 腾讯科技(深圳)有限公司 Anomaly prediction method, and training method, device and equipment of anomaly prediction model
CN117593619A (en) * 2024-01-18 2024-02-23 腾讯科技(深圳)有限公司 Image processing method, device, electronic equipment and storage medium
CN117593619B (en) * 2024-01-18 2024-05-14 腾讯科技(深圳)有限公司 Image processing method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111080655A (en) Image segmentation and model training method, device, medium and electronic equipment
CN111104962B (en) Semantic segmentation method and device for image, electronic equipment and readable storage medium
EP3961500B1 (en) Medical image detection method based on deep learning, and related device
CN110222220B (en) Image processing method, device, computer readable medium and electronic equipment
EP3767523A1 (en) Image processing method and apparatus, and computer readable medium, and electronic device
CN106204474B (en) Local multi-grade tone mapping arithmetic unit
Panetta et al. Tmo-net: A parameter-free tone mapping operator using generative adversarial network, and performance benchmarking on large scale hdr dataset
CN109409432B (en) A kind of image processing method, device and storage medium
CN115409755B (en) Map processing method and device, storage medium and electronic equipment
CN112785493B (en) Model training method, style migration method, device, equipment and storage medium
CN110852980A (en) Interactive image filling method and system, server, device and medium
CN107408294A (en) Intersect horizontal image blend
CN107862664A (en) A kind of image non-photorealistic rendering method and system
CN110166759B (en) Image processing method and device, storage medium and electronic device
CN105979283A (en) Video transcoding method and device
CN108595211B (en) Method and apparatus for outputting data
CN112435197A (en) Image beautifying method and device, electronic equipment and storage medium
CN104184791A (en) Image effect extraction
CN112700460A (en) Image segmentation method and system
CN109241930B (en) Method and apparatus for processing eyebrow image
CN115222845A (en) Method and device for generating style font picture, electronic equipment and medium
CN110969641A (en) Image processing method and device
Montefalcone et al. Inpainting CMB maps using partial convolutional neural networks
CN117671254A (en) Image segmentation method and device
CN110751251B (en) Method and device for generating and transforming two-dimensional code image matrix

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40021053)
SE01 Entry into force of request for substantive examination