CN114549853A - Image processing method and related model training method, device and equipment - Google Patents


Info

Publication number
CN114549853A
CN114549853A (application number CN202210178699.5A)
Authority
CN
China
Prior art keywords
feature
target
region
characteristic
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202210178699.5A
Other languages
Chinese (zh)
Inventor
胡志强
刘子豪
李卓威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Sensetime Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Sensetime Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Intelligent Technology Co Ltd
Priority to CN202210178699.5A
Publication of CN114549853A
Legal status: Withdrawn

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an image processing method and a related model training method, apparatus, device, and storage medium, wherein the image processing method comprises the following steps: performing feature extraction on a target image to obtain an original feature map; performing feature optimization on the corresponding feature regions in the original feature map by using the uncertainty parameters corresponding to the feature regions in the original feature map to obtain a target feature map, wherein in the target feature map, the magnitude of the influence of the target feature information corresponding to each feature region is related to the uncertainty parameter corresponding to that feature region; and obtaining a detection result of the target image based on the target feature map. By the above scheme, the accuracy of the detection result of the target image is improved.

Description

Image processing method and related model training method, device and equipment
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and a related model training method, apparatus, and device.
Background
With the rapid development of deep learning, it has become commonplace for various industries to use neural network models in their work. For example, neural networks can be used to process images for tasks such as image detection, image classification, and image segmentation, and their application has greatly improved work efficiency.
Improving the prediction accuracy of neural networks is therefore a goal that practitioners constantly pursue. At present, methods such as attention mechanisms have been proposed to improve the prediction accuracy of neural networks. However, existing methods offer only limited headroom for improving prediction accuracy, and the resulting gains are often not obvious.
Therefore, how to further improve the prediction accuracy of the neural network has important significance.
Disclosure of Invention
The application provides at least an image processing method, a training method for an image processing model, related apparatus, devices, and a storage medium.
A first aspect of the present application provides an image processing method, including: performing feature extraction on a target image to obtain an original feature map; performing feature optimization on the corresponding feature regions in the original feature map by using the uncertainty parameters corresponding to the feature regions in the original feature map to obtain a target feature map, wherein in the target feature map, the magnitude of the influence of the target feature information corresponding to each feature region is related to the uncertainty parameter corresponding to that feature region; and obtaining a detection result of the target image based on the target feature map.
Therefore, the uncertainty parameters corresponding to the feature regions in the original feature map are used to perform feature optimization on the corresponding feature regions in the original feature map, so that the magnitude of influence of the feature information of different feature regions is differentiated, robustness to pixels with high uncertainty in the target image is improved, and the accuracy of the detection result of the target image is improved.
The above performing feature optimization on the corresponding feature regions in the original feature map by using the uncertainty parameter corresponding to each feature region in the original feature map to obtain the target feature map includes: obtaining the certainty parameter corresponding to each feature region based on the uncertainty parameter corresponding to that feature region; and correspondingly adjusting the original feature information of each feature region in the original feature map by using the certainty parameter corresponding to that feature region, to obtain the target feature information of each feature region in the target feature map.
Therefore, by obtaining the certainty parameter corresponding to each feature region based on the uncertainty parameter corresponding to the feature region, the original feature information of the feature region can be adjusted by using the certainty parameter corresponding to the feature region, so as to obtain the target feature information of the feature region.
The obtaining of the certainty parameter corresponding to each feature region based on the uncertainty parameter corresponding to each feature region includes: for each feature region, taking the difference between a first value and the uncertainty parameter corresponding to the feature region as the certainty parameter corresponding to the feature region.
therefore, the determination of the certainty parameter of the feature region is achieved by taking the difference between the first value and the uncertainty parameter corresponding to the feature region as the certainty parameter corresponding to the feature region.
The above correspondingly adjusting the original feature information of each feature region in the original feature map by using the corresponding certainty parameter of each feature region to obtain the target feature information of each feature region in the target feature map includes: for each characteristic region, acquiring the sum of the certainty parameter corresponding to the characteristic region and the second numerical value as the adjustment weight of the characteristic region; and weighting the original characteristic information of the characteristic region by using the adjustment weight of the characteristic region to obtain the target characteristic information of the characteristic region.
Therefore, by using the sum of the certainty parameter corresponding to the feature region and the second numerical value as the adjustment weight of the feature region, the original feature information of the feature region can be weighted by using the adjustment weight of the feature region, and the target feature information of the feature region can be obtained.
Before the uncertainty parameter corresponding to each feature region in the original feature map is utilized to perform feature optimization on the corresponding feature region in the original feature map to obtain the target feature map, the method further comprises the following steps: and determining uncertainty parameters corresponding to the characteristic regions based on the initial characteristic information in the original characteristic diagram.
Therefore, the uncertainty parameters corresponding to the feature regions are obtained based on the initial feature information in the original feature map, so that uncertainty estimation of the initial feature information is realized, and the accuracy of the detection result of the target image is improved.
The determining uncertainty parameters corresponding to each feature region based on the initial feature information in the original feature map includes: transforming the initial characteristic information in the original characteristic diagram to obtain a characteristic confidence coefficient corresponding to each characteristic region; and obtaining uncertainty parameters corresponding to the characteristic regions based on the characteristic confidence degrees corresponding to the characteristic regions.
Therefore, by obtaining the feature confidence corresponding to each feature region, the uncertainty parameter corresponding to the feature region can be correspondingly determined based on the feature confidence corresponding to the feature region.
The feature confidence corresponding to the feature region comprises a category confidence of a plurality of channels, and the category confidence of each channel represents the confidence that the feature region belongs to one corresponding category; the obtaining of the uncertainty parameter corresponding to each feature region based on the feature confidence corresponding to each feature region includes: and for each characteristic region, performing information entropy processing based on the category confidence of a plurality of channels corresponding to the characteristic region to obtain an uncertainty parameter corresponding to the characteristic region.
Therefore, the confidence of the feature region belonging to a corresponding class is represented by setting the class confidence of each channel of the feature confidence corresponding to the feature region, so that the classification condition can be visually represented by the feature confidence. In addition, information entropy processing is carried out by utilizing the confidence coefficient of the feature region category, so that uncertainty parameters corresponding to the feature region can be obtained.
Each feature point in the original feature map is used as a feature area.
Therefore, each feature point in the original feature map is used as a feature area, so that the uncertainty parameter of each feature point can be determined, and the most comprehensive uncertainty parameter can be obtained, thereby being beneficial to the accuracy of the detection result of the target image.
The image processing method is executed by an image processing model.
A second aspect of the present application provides a training method for an image processing model, the method including: taking the first sample image as a target image, and processing the target image by using an image processing model to execute the method described in the first aspect to obtain a first detection result of the first sample image; obtaining a first loss value based on the first detection result and the first labeling information of the first sample image; network parameters of the image processing model are adjusted based on the first loss value.
Therefore, by using the first sample image as the target image and performing the method of the above-described image processing method embodiment by using the image processing model to process the target image, the first loss value can be obtained based on the obtained first detection result and the first annotation information of the first sample image, and further, the training of the image processing model can be realized by using the first loss value.
The uncertainty parameter corresponding to each feature region in the original feature map of the first sample image is determined based on the feature confidence corresponding to each feature region, and the feature confidence corresponding to each feature region is used for determining the category to which the feature region belongs; before adjusting the network parameters of the image processing model based on the first loss value, the method further comprises: obtaining a second detection result of the first sample image based on the feature confidence corresponding to each feature region, wherein the second detection result represents the category to which the first sample image belongs; obtaining a second loss value by using second marking information and a second detection result corresponding to the original characteristic diagram; adjusting network parameters of the image processing model based on the first loss value, comprising: network parameters of the image processing model are adjusted based on the first loss value and the second loss value.
Therefore, by obtaining the second loss value, the accuracy of the feature confidence that is used to derive the uncertainty parameter can be evaluated more precisely, which helps improve the accuracy of the uncertainty parameter.
Before processing the target image by using the image processing model to execute the method described in the first aspect, the method further includes: performing feature extraction on the second sample image by using a feature extraction network in the image processing model to obtain a sample feature map of the second sample image; determining feature similarity of a plurality of groups of region combinations by using the sample feature map, wherein the second sample image comprises a plurality of local regions, and the feature similarity of each group of region combinations represents the similarity between the features of the at least two local regions included in that region combination; determining a reference relation parameter corresponding to each group of region combinations based on the labeling information of each group of region combinations, wherein the reference relation parameter of a region combination represents the actual difference between the local regions in the region combination; and adjusting the network parameters of the feature extraction network by using the feature similarity and the reference relation parameters of each group of region combinations.
Therefore, by correcting the feature similarity with the reference relation parameters corresponding to the region combinations, the similarity of different regions within the same image can subsequently be compared, so that the feature extraction network can be trained with a contrastive learning method. Applying contrastive learning within a single image broadens the applicable scope of contrastive-learning-based training and improves the accuracy with which the feature extraction network extracts information from a single image. In addition, compared with contrastive learning across different images, applying contrastive learning within one image is better suited to image segmentation tasks, which helps improve the image processing accuracy of the image processing model.
A third aspect of the present application provides an image processing apparatus comprising: the system comprises an acquisition module, a processing module and an output module, wherein the acquisition module is used for extracting the characteristics of a target image to obtain an original characteristic diagram; the processing module is used for performing feature optimization on corresponding feature areas in the original feature map by using uncertainty parameters corresponding to the feature areas in the original feature map to obtain a target feature map, wherein in the target feature map, the influence of target feature information corresponding to the feature areas is related to the uncertainty parameters corresponding to the feature areas; the output module is used for obtaining the detection result of the target image based on the target characteristic diagram.
A fourth aspect of the present application provides an apparatus for training an image processing model, the apparatus comprising: the image processing method comprises an acquisition module, a determination module and an adjustment module, wherein the acquisition module is used for taking a first sample image as a target image and processing the target image by using an image processing model to execute the method described in the first aspect so as to obtain a first detection result of the first sample image; the determining module is used for obtaining a first loss value based on the first detection result and the first marking information of the first sample image; the adjusting module is used for adjusting network parameters of the image processing model based on the first loss value.
A fifth aspect of the present application provides an electronic device, which includes a memory and a processor coupled to each other, wherein the processor is configured to execute program instructions stored in the memory to implement the image processing method in the first aspect or the image processing model training method in the second aspect.
A sixth aspect of the present application provides a computer-readable storage medium on which program instructions are stored, the program instructions, when executed by a processor, implementing the image processing method of the first aspect described above, or the image processing model training method of the second aspect.
According to the scheme, the corresponding characteristic regions in the original characteristic diagram are subjected to characteristic optimization by using the uncertainty parameters corresponding to the characteristic regions in the original characteristic diagram, so that the influence on the characteristic information of the characteristic regions is distinguished, the robustness of pixels with high uncertainty in the target image is improved, and the accuracy of the detection result of the target image is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and, together with the description, serve to explain the principles of the application.
FIG. 1 is a schematic flow chart diagram of a first embodiment of an image processing method of the present application;
FIG. 2 is a schematic flow chart diagram of a second embodiment of the image processing method of the present application;
FIG. 3 is a schematic flow chart of a third embodiment of the image processing method of the present application;
FIG. 4 is a schematic flow chart of a target feature map obtained in the image processing method of the present application;
FIG. 5 is a schematic flowchart of a first embodiment of an image processing model training method according to the present application;
FIG. 6 is a flowchart illustrating a second embodiment of the method for training an image processing model according to the present application;
FIG. 7 is a schematic overall flow chart of the training method of the image processing model of the present application;
FIG. 8 is a block diagram of an embodiment of an image processing apparatus according to the present disclosure;
FIG. 9 is a block diagram of an embodiment of an apparatus for training an image processing model according to the present application;
FIG. 10 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 11 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes in detail the embodiments of the present application with reference to the drawings attached hereto.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as particular system structures, interfaces, techniques, etc. in order to provide a thorough understanding of the present application.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship. Further, the term "plurality" herein means two or more than two. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Referring to fig. 1, fig. 1 is a schematic flow chart of a first embodiment of an image processing method according to the present application. Specifically, the method may include the steps of:
step S11: and performing feature extraction on the target image to obtain an original feature map.
In one embodiment, the target image may be a medical image, such as a two-dimensional image, a three-dimensional image, or the like, obtained by a medical imaging method. Medical Imaging methods include Computed Tomography (CT), Magnetic Resonance Imaging (MRI), or Ultrasound Imaging (US), among others. The target image is, for example, a medical image containing a human organ, a bone, or the like.
In one embodiment, the original feature map may be a feature map output by an encoder, or may be a feature map output by a decoder, and specifically may be a feature map output by any intermediate network layer of the encoder or the decoder.
In one embodiment, the image processing method of the present application is performed by an image processing model.
Step S12: and performing characteristic optimization on the corresponding characteristic regions in the original characteristic diagram by using the uncertainty parameters corresponding to the characteristic regions in the original characteristic diagram to obtain a target characteristic diagram.
In this embodiment, in the target feature map, the magnitude of the influence of the target feature information corresponding to a feature region is related to the uncertainty parameter corresponding to that feature region; specifically, the uncertainty parameter may be used to determine how strongly the target feature information influences the result. The uncertainty parameter corresponding to a feature region may indicate the semantic uncertainty of the feature information of the feature region, and may also indicate the probability that a detection result of the target image obtained based on the feature information of the feature region is inaccurate. The magnitude of the influence of the target feature information corresponding to a feature region indicates how strongly that target feature information acts on the detection result of the target image: the greater the influence of the target feature information corresponding to the feature region, the greater its effect on the detection result of the target image; conversely, the smaller the influence of the target feature information corresponding to the feature region, the smaller its effect on the detection result of the target image. Therefore, by acquiring the uncertainty parameter, the uncertainty parameter can be used to differentiate the magnitude of influence of the feature information of different feature regions, which helps improve the accuracy of the detection result of the target image.
In one embodiment, the feature region may be a feature point of the original feature map, or a region composed of several feature points. In one embodiment, each feature point in the original feature map may be used as a feature region, so that the uncertainty parameter of each feature point may be determined, and then the most comprehensive uncertainty parameter may be obtained, thereby contributing to the accuracy of the detection result of the target image.
In one embodiment, before performing step S12, it may further perform: and determining uncertainty parameters corresponding to the characteristic regions based on the initial characteristic information in the original characteristic diagram. In this embodiment, the initial feature information in the original feature map may reflect the image information of the target image, and therefore, the processing may be performed based on the initial feature information in the original feature map, specifically, based on uncertainty processing using the initial feature information in the original feature map, for example, uncertainty estimation using the initial feature information in the original feature map, so as to obtain the uncertainty parameter. Therefore, the uncertainty parameters corresponding to the feature regions are obtained based on the initial feature information in the original feature map, so that uncertainty estimation of the initial feature information is realized, and the accuracy of the detection result of the target image is improved.
Step S13: and obtaining a detection result of the target image based on the target characteristic diagram.
In this embodiment, the obtained target feature map may be used to replace the original feature map, and then image processing may be performed based on the target feature map, so as to obtain a detection result of the target image. For example, after a certain intermediate network layer performs feature extraction on a target image, an original feature map is output, and after the target feature map is obtained, the target feature map may be used as the output of the intermediate network layer and input into a next network layer to continue image processing, and finally, a detection result of the target image is obtained.
Therefore, the uncertainty parameters corresponding to the feature regions in the original feature map are used for performing feature optimization on the corresponding feature regions in the original feature map, so that the influence on the feature information of the feature regions is distinguished, the robustness of pixels with high uncertainty in the target image is improved, and the accuracy of the detection result of the target image is improved.
Referring to fig. 2, fig. 2 is a flowchart illustrating an image processing method according to a second embodiment of the present disclosure. In this embodiment, the step of "determining the uncertainty parameter corresponding to each feature region based on the initial feature information in the original feature map" specifically includes steps S21 and S22.
Step S21: and transforming the initial characteristic information in the original characteristic diagram to obtain the characteristic confidence corresponding to each characteristic region.
The transformation processing of the initial feature information in the original feature map may be a general processing method in deep learning, such as performing one or more of convolution processing, activation, batch normalization, and normalization processing on the initial feature information in the original feature map.
Feature confidence may be considered a confidence representation of the detection results of the target image. Specifically, the feature confidence may be a direct representation of the detection result of the target image, that is, the feature confidence may directly represent the detection result confidence of the target image, for example, the confidence of the classification. The feature confidence may also be an indirect representation of the detection result of the target image, that is, the feature confidence may be further used to obtain the detection result confidence of the target image, and at this time, the feature confidence may be considered as an intermediate result.
Step S22: and obtaining uncertainty parameters corresponding to the characteristic regions based on the characteristic confidence degrees corresponding to the characteristic regions.
On the basis of the feature confidence, a preliminary estimate of the uncertainty of the initial feature information corresponding to each feature region is obtained. At this point, the uncertainty parameter corresponding to each feature region can be further calculated based on the feature confidence corresponding to that feature region, and the specific calculation may use an uncertainty quantification method commonly used in the deep learning field.
Therefore, by obtaining the feature confidence corresponding to each feature region, the uncertainty parameter corresponding to the feature region can be correspondingly determined based on the feature confidence corresponding to the feature region.
In an embodiment, the feature confidence corresponding to the feature region includes a category confidence of a plurality of channels, and the category confidence of each channel represents a confidence that the feature region belongs to a corresponding category. That is, the number of channels of the feature confidence is the same as the number of classifications of the target detection result. In this case, the feature confidence may be directly expressed by the detection result of the target image. For example, the classification number of the target detection result is 3, including normal tissue, tumor tissue and background, and the number of channels of the feature confidence is also 3 channels, and the numerical value of each channel represents the class confidence of the normal tissue, the tumor tissue and the background, respectively. Therefore, the confidence of the feature region belonging to a corresponding class is represented by setting the class confidence of each channel of the feature confidence corresponding to the feature region, so that the classification condition can be visually represented by the feature confidence. In this case, the step of "obtaining the uncertainty parameter corresponding to each feature region based on the feature confidence corresponding to each feature region" specifically includes: and for each characteristic region, performing information entropy processing based on the category confidence of a plurality of channels corresponding to the characteristic region to obtain an uncertainty parameter corresponding to the characteristic region. The specific calculation of the information entropy may be a general calculation method. Therefore, the information entropy processing is carried out by utilizing the confidence coefficient of the feature region category, so that the uncertainty parameter corresponding to the feature region can be obtained.
In one embodiment, the uncertainty parameter u_ij corresponding to a feature region can be calculated by the following equation (1):

u_ij = -∑_{m=1}^{M} p_ij^m · log(p_ij^m)   (1)

where M represents the number of channels of the feature confidence, m indexes a specific channel, and p_ij^m represents the feature confidence of channel m for the feature point in the i-th row and j-th column of the original feature map.
Therefore, by obtaining the feature confidence corresponding to the feature region, the uncertainty parameter corresponding to the feature region can be obtained based on the feature confidence corresponding to the feature region.
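As a non-limiting illustration, the entropy computation of equation (1) can be sketched as follows. The tensor shapes, the use of a softmax to obtain the per-channel class confidences, and the function name are assumptions made for the example rather than part of this disclosure.

```python
import torch

def uncertainty_map(confidence: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Compute an uncertainty parameter per feature point via Shannon entropy (equation (1)).

    confidence: feature confidences of shape (M, h, w), where channel m holds the
    class confidence p_ij^m of the feature point at row i, column j.
    Returns a map of shape (h, w) containing the entropy u_ij of each feature point.
    """
    p = confidence.clamp_min(eps)          # avoid log(0)
    return -(p * p.log()).sum(dim=0)       # u_ij = -sum_m p_ij^m * log(p_ij^m)

# Toy example: 3 classes (e.g. normal tissue / tumour tissue / background) on a 4x4 map.
logits = torch.randn(3, 4, 4)
confidence = torch.softmax(logits, dim=0)  # per-channel class confidences
u = uncertainty_map(confidence)            # high entropy = high uncertainty
```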
Referring to fig. 3, fig. 3 is a flowchart illustrating an image processing method according to a third embodiment of the present application. In this embodiment, the step "performing feature optimization on the corresponding feature region in the original feature map by using the uncertainty parameter corresponding to each feature region in the original feature map to obtain the target feature map" specifically includes steps S31 and S32.
Step S31: and obtaining the certainty parameters corresponding to the characteristic areas based on the uncertainty parameters corresponding to the characteristic areas.
It is understood that the larger the uncertainty parameter corresponding to a feature region, the smaller the corresponding uncertainty parameter. Therefore, the certainty parameter corresponding to the feature region can be obtained based on the uncertainty parameter corresponding to each feature region based on the negative correlation between the uncertainty parameter and the certainty parameter.
In one embodiment, for each feature region, a difference between the first value and the uncertainty parameter corresponding to the feature region is taken as the certainty parameter corresponding to the feature region. The first value is, for example, 1 or another value, and is not limited herein.
In one embodiment, the certainty parameter Û corresponding to a feature region can be calculated by the following equation (2):

Û = 1 − U   (2)

where U represents the uncertainty parameter corresponding to the feature region and Û represents the certainty parameter corresponding to the feature region (the first value here being 1).
Therefore, the determination of the certainty parameter of the feature region is achieved by taking the difference between the first value and the uncertainty parameter corresponding to the feature region as the certainty parameter corresponding to the feature region.
Step S32: and correspondingly adjusting the original characteristic information of each characteristic region in the original characteristic diagram by using the corresponding deterministic parameter of each characteristic region to obtain the target characteristic information of each characteristic region in the target characteristic diagram.
After the certainty parameters corresponding to the feature areas are obtained, the original feature information of each feature area in the original feature map can be correspondingly adjusted by using the certainty parameters corresponding to the feature areas. In one embodiment, the original feature information of each feature region in the original feature map may be weighted by using a certainty parameter, so as to obtain the target feature information of each feature region in the target feature map. In other embodiments, the processing may be performed by other calculation methods, which is not limited herein.
Therefore, by obtaining the certainty parameter corresponding to each feature region based on the uncertainty parameter corresponding to the feature region, the original feature information of the feature region can be adjusted by using the certainty parameter corresponding to the feature region, so as to obtain the target feature information of the feature region.
In one embodiment, the step of "correspondingly adjusting the original feature information of each feature region in the original feature map by using the certainty parameter corresponding to each feature region to obtain the target feature information of each feature region in the target feature map" specifically includes step S321 and step S322 (not shown).
Step S321: and for each characteristic region, acquiring the sum of the certainty parameter corresponding to the characteristic region and the second numerical value as the adjustment weight of the characteristic region.
The second value is, for example, 1, or other specific values, and is not limited herein.
Step S322: and weighting the original characteristic information of the characteristic region by using the adjustment weight of the characteristic region to obtain the target characteristic information of the characteristic region.
In one embodiment, the target feature information H̃ of a feature region may be obtained by the following formula (3):

H̃ = (1 + Û) · H   (3)

where H represents the original feature information of the feature region, 1 is the second value, H̃ represents the target feature information of the feature region, and Û represents the certainty parameter corresponding to the feature region.
Therefore, by using the sum of the certainty parameter corresponding to the feature region and the second numerical value as the adjustment weight of the feature region, the original feature information of the feature region can be weighted by using the adjustment weight of the feature region, and the target feature information of the feature region can be obtained.
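A minimal sketch of equations (2) and (3) is given below, assuming that both the first value and the second value are 1 and that the uncertainty map has been normalized to the range [0, 1]; the function and tensor names are illustrative only.

```python
import torch

def rectify_features(feature_map: torch.Tensor, uncertainty: torch.Tensor) -> torch.Tensor:
    """Re-weight an original feature map using the certainty of each feature region.

    feature_map: original feature information H of shape (c, h, w).
    uncertainty: uncertainty map U of shape (h, w), assumed normalized to [0, 1].
    Returns the target feature map H' = (1 + (1 - U)) * H, so regions with low
    uncertainty (high certainty) contribute more strongly.
    """
    certainty = 1.0 - uncertainty             # equation (2), with the first value = 1
    weight = 1.0 + certainty                  # equation (3), with the second value = 1
    return feature_map * weight.unsqueeze(0)  # broadcast the weight over the channel dim
```

In practice, such a module could be inserted after any intermediate network layer so that the rectified map replaces that layer's original output.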
In one embodiment, the detection result of the target image may be to classify each pixel of the target image, so as to implement segmentation of the target image, for example, segmentation of a medical image. In another embodiment, the detection result of the target image may also be a target detection result of the target image, or a task type of other image processing fields, which is not limited herein.
In one embodiment, when the image processing method is performed by using an image processing model, an uncertainty processing model module may be provided in the image processing model, and used for performing the relevant steps mentioned in the above embodiments after the original feature map is processed to obtain the target feature map.
Referring to fig. 4, fig. 4 is a schematic flow chart illustrating how the target feature map is obtained in the image processing method of the present application. In fig. 4, the image 101 is the original feature map H, with size h × w and c channels. F(·) represents the processing applied to the original feature map to obtain the feature confidence corresponding to each feature region, where n is the number of channels of the feature confidence. Entropy(·) represents calculating the Shannon entropy from the feature confidence corresponding to each feature region, so as to obtain the uncertainty parameter U. Finally, the target feature map H̃ can be obtained from the uncertainty parameter U and the original feature map 101.
Referring to fig. 5, fig. 5 is a flowchart illustrating a first embodiment of a training method for an image processing model according to the present application. In the present embodiment, the training method of the image processing model includes steps S41 to S43.
Step S41: and taking the first sample image as a target image, and processing the target image by using an image processing model to obtain a first detection result of the first sample image.
In this embodiment, the image processing model may perform the method of any one of claims 1-7 to process the target image. For a detailed description of this step, please refer to the related description of the above embodiments, which is not repeated herein.
Step S42: and obtaining a first loss value based on the first detection result and the first labeling information of the first sample image.
Step S43: network parameters of the image processing model are adjusted based on the first loss value.
The first detection result is a result corresponding to the first label information. For example, the first labeling information is a classification result of a pixel point in the first sample image, and the first detection result may also be a classification result of a pixel point in the first sample image. For another example, the first annotation information is a detection result of a plurality of targets in the first sample image, and the first detection result may also be a detection result of a plurality of targets in the first sample image. Therefore, the processing effect of the image processing model can be determined by comparing the difference between the first detection result and the first annotation information of the first sample image and calculating the first loss value of the first detection result and the first annotation information of the first sample image. The calculation method of the first loss value may be a calculation method commonly used in the art, and will not be described herein.
After the first loss value is determined, the network parameters of the image processing model may be adjusted according to the first loss value, and the specific adjustment method may be a general method in the art, and is not described herein again.
Therefore, by using the first sample image as the target image and performing the method of the above-described image processing method embodiment by using the image processing model to process the target image, the first loss value can be obtained based on the obtained first detection result and the first annotation information of the first sample image, and further, the training of the image processing model can be realized by using the first loss value.
Referring to fig. 6, fig. 6 is a flowchart illustrating a training method of an image processing model according to a second embodiment of the present application. In this embodiment, the uncertainty parameter corresponding to each feature region in the original feature map of the first sample image is determined based on the feature confidence corresponding to each feature region, and the feature confidence corresponding to the feature region is used to determine the category to which the feature region belongs. In this case, before performing the above-mentioned "adjusting the network parameters of the image processing model based on the first loss value", the training method of the image processing model may further include step S51 and step S52.
Step S51: and obtaining a second detection result of the first sample image based on the feature confidence corresponding to each feature region.
Step S52: and obtaining a second loss value by using second marking information and a second detection result corresponding to the original characteristic diagram.
Since the feature confidence corresponding to the feature region is used to determine the category to which the feature region belongs, in this case, for image processing, the second detection result of the first sample image can be directly obtained according to the feature confidence corresponding to the feature region. In this case, the second detection result indicates the category to which the first sample image belongs, and the second detection result may specifically be a result corresponding to the second label information.
In one embodiment, the second annotation information can be derived from the first annotation information. For example, when the second detection result is a classification of the target image, the first annotation information can be directly used as the second annotation information, because the first annotation information is also the annotated classification information of the target image. For another example, when the second detection result classifies each feature point of the original feature map, and the size of the original feature map is smaller than that of the target image, the first annotation information may be down-sampled to obtain second annotation information having the same size as the original feature map. When down-sampling is performed, for the plurality of pixel points that need to be merged into one point, the annotation information of the merged point can be determined according to the specific classification of the pixel points to be merged. In a specific embodiment, the classification to which the largest number of the pixel points to be merged belong may be taken as the annotation information of the merged point, so that the second annotation information obtained after down-sampling is more accurate, which further improves the training effect of the image processing model.
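To illustrate the majority-vote down-sampling described above, the following sketch reduces a pixel-level label map to the resolution of the original feature map; the block size, tensor shapes, and function name are assumptions made for the example.

```python
import torch

def downsample_labels(labels: torch.Tensor, block: int) -> torch.Tensor:
    """Down-sample an integer label map: each (block x block) window of pixels is merged
    into one point whose label is the class to which most of those pixels belong."""
    h, w = labels.shape
    assert h % block == 0 and w % block == 0, "label map must be divisible by the block size"
    windows = labels.reshape(h // block, block, w // block, block).permute(0, 2, 1, 3)
    windows = windows.reshape(h // block, w // block, block * block)
    return windows.mode(dim=-1).values       # most frequent label in each window

# Example: reduce 256x256 pixel-level labels to a 64x64 feature-map resolution.
pixel_labels = torch.randint(0, 3, (256, 256))
second_annotation = downsample_labels(pixel_labels, block=4)  # shape (64, 64)
```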
In this case, the aforementioned step of "adjusting the network parameter of the image processing model based on the first loss value" specifically includes: network parameters of the image processing model are adjusted based on the first loss value and the second loss value.
Because two loss values are obtained, the network parameters of the image processing model can be adjusted according to the first loss value and the second loss value, so that the training of the image processing model is realized.
In one embodiment, a final loss value may be calculated based on the following equation (4), and the network parameters of the image processing model may be adjusted based on the final loss value.
L = αL1 + βL2   (4)

where L1 is the first loss value, L2 is the second loss value, α and β are adjustable weighting parameters, and L is the final loss value.
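As an illustrative sketch of equation (4), assuming both loss terms are cross-entropy losses and using example weights; the concrete loss functions and weight values are not prescribed by this disclosure.

```python
import torch
import torch.nn.functional as F

def combined_loss(first_pred: torch.Tensor, first_target: torch.Tensor,
                  second_pred: torch.Tensor, second_target: torch.Tensor,
                  alpha: float = 1.0, beta: float = 0.5) -> torch.Tensor:
    """Final loss L = alpha * L1 + beta * L2 of equation (4)."""
    l1 = F.cross_entropy(first_pred, first_target)    # first loss: detection result vs. first annotation
    l2 = F.cross_entropy(second_pred, second_target)  # second loss: feature confidence vs. second annotation
    return alpha * l1 + beta * l2
```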
Therefore, by acquiring the second loss value, the accuracy of the feature confidence corresponding to the feature region for obtaining the uncertainty parameter can be more accurately evaluated, thereby being beneficial to improving the accuracy of the uncertainty parameter.
In the present embodiment, before the above-described image processing method is performed by using the image processing model to process the target image, steps S61 to S64 (not shown) may be further performed.
Step S61: and performing feature extraction on the second sample image by using a feature extraction network in the image processing model to obtain a sample feature map of the second sample image.
Step S61: and performing feature extraction on the second sample image by using a feature extraction network to obtain a sample feature map of the second sample image.
In this embodiment, the second sample image may be a medical image. The second sample image may be a two-dimensional image or a three-dimensional image. The three-dimensional image may be a three-dimensional image obtained by scanning an organ. For example, the second sample image may be obtained by three-dimensional imaging by a Computed Tomography (CT) imaging technique. The two-dimensional image is, for example, a second sample image obtained by an ultrasonic imaging technique or an X-ray imaging technique. It is to be understood that the imaging method of the second sample image is not limited.
The feature extraction network may be an integral feature extraction network in the image processing model or a feature extraction network in an intermediate layer of the integral feature extraction network. For example, an image processing model includes an encoder network and a decoder, and in this case, the encoder may be used as the feature extraction network of the present application, or an intermediate feature extraction network in the encoder may be used as the feature extraction network of the application. In one embodiment, when the feature extraction network is an intermediate feature extraction network in the encoder, the feature extraction network may include only the convolutional layer, or may include the convolutional layer, and the pooling layer, the activation layer, and the like after the convolutional layer.
And the characteristic extraction network can obtain a sample characteristic diagram after carrying out characteristic extraction on the second sample image. In this embodiment, the input of the feature extraction network may be the second sample image, or may be a feature image output by a network on a layer above the feature extraction network, and it can be understood that both cases may be considered as a case where the feature extraction network described in this application performs feature extraction on the second sample image. For example, when the feature extraction network is a first-layer network, the input of the feature extraction network may be the second sample image, and the feature extraction network may further perform feature extraction on the input second sample image to obtain a sample feature map. When the feature extraction network is a second-layer network, the input of the feature extraction network may be a feature image output by a previous-layer network, and the feature extraction network can further perform feature extraction on the input feature image to obtain a sample feature map.
Step S62: and determining the feature similarity of the combination of the groups of regions by using the sample feature map.
In this embodiment, the second sample image includes a plurality of local regions, and each group of region combinations includes at least two local regions. A local region may be a region of the second sample image that corresponds to one or more feature points in the sample feature map; that is, a local region may be determined by the correspondence between feature points in the sample feature map and regions in the second sample image. For example, a local region may be the region of the second sample image corresponding to each single feature point in the sample feature map. As another example, a local region may be the region of the second sample image corresponding to every two feature points in the sample feature map. The number of local regions included in a region combination may be two, three, or more, as long as it is not less than two.
In one embodiment, the size of the local region may be determined by the following equation (5):

size = 1 + ∑_{l=1}^{L} (k_l − 1) · ∏_{i=1}^{l−1} s_i   (5)

where L represents the layer at which the feature extraction network is located (i.e., the L-th layer is the feature extraction network of the present application), l indexes a specific layer of the feature extraction network, k_l denotes the size of the convolution kernel of the l-th layer, and s_i denotes the step size (stride) of the i-th layer.
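For illustration only, and assuming that equation (5) is the standard receptive-field recursion over kernel sizes and strides, the local-region size can be computed as follows; the layer configuration in the example is invented.

```python
def local_region_size(kernel_sizes, strides):
    """One plausible reading of equation (5): the receptive field of the L-th layer,
    size = 1 + sum_l (k_l - 1) * prod_{i<l} s_i."""
    size, jump = 1, 1                       # jump = cumulative stride of earlier layers
    for k, s in zip(kernel_sizes, strides):
        size += (k - 1) * jump
        jump *= s
    return size

# Example: three 3x3 convolution layers with strides 1, 1 and 2.
print(local_region_size([3, 3, 3], [1, 1, 2]))  # -> 7
```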
In this embodiment, the feature similarity of each group of region combinations indicates the degree of similarity between the features of the at least two local regions included in that region combination. Specifically, the feature information of a local region may be determined based on the correspondence between the local region and the feature points on the sample feature map, and the feature similarity of each group of region combinations may be determined from the feature information of the local regions included in the region combination. The similarity of the feature information may be computed with a similarity calculation method commonly used in the art, which will not be described again here.
In one embodiment, each two different local regions in the second sample image may be combined into a group of regions, that is, two local regions in the second sample image may be combined as a group of regions. Therefore, the feature similarity of two local regions in the second sample image can be obtained by combining every two different local regions in the second sample image into a group of regions.
In one embodiment, the feature similarity of each group of area combinations may be determined through steps S621 and S622.
Step S621: and for each group of area combination, acquiring the characteristic information corresponding to each local area in the area combination from the sample characteristic diagram.
Specifically, the feature information of the local region may be determined based on the correspondence between the local region and the feature point on the sample feature map. For example, each local region corresponds to each feature point in the sample feature map, and feature information, such as a feature vector, of each feature point in the sample feature map is feature information of the local region. For another example, every two local regions correspond to each feature point in the sample feature map, and the feature information of every two feature points in the sample feature map is the feature information of the local region.
Step S622: and obtaining the feature similarity of the area combination by using the feature information corresponding to each local area in the area combination.
After determining the feature information corresponding to each local region in the region combination, the similarity of the feature information of each local region in the region combination can be compared according to the similarity calculation method, and the feature similarity of the region combination can be obtained correspondingly.
Therefore, by acquiring the feature information corresponding to each local region in the region combination, the feature similarity of the region combination can be correspondingly obtained.
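As a hedged sketch of steps S621 and S622, the following computes a similarity for every combination of two local regions, assuming each feature point is one local region and using cosine similarity as the similarity measure (the measure itself is an assumption, not prescribed by this disclosure).

```python
import torch
import torch.nn.functional as F

def pairwise_region_similarity(sample_features: torch.Tensor) -> torch.Tensor:
    """Feature similarity of every two-region combination.

    sample_features: sample feature map of shape (c, h, w); each feature point is
    treated as one local region. Returns an (h*w, h*w) matrix whose entry (a, b)
    is the cosine similarity between the features of regions a and b.
    """
    c, h, w = sample_features.shape
    feats = sample_features.reshape(c, h * w).t()   # one row of feature information per region
    feats = F.normalize(feats, dim=1)               # unit-normalize each region's features
    return feats @ feats.t()                        # cosine similarity for all combinations
```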
Step S63: and determining reference relation parameters corresponding to the area combinations of each group based on the labeling information of the area combinations of each group.
In this embodiment, the second sample image is labeled with labeling information, so that the labeling information of the local region can be correspondingly determined, and further the labeling information of the region combination can be determined. In one embodiment, the annotation information may be pixel-level annotation information, that is, each pixel point in the second sample image is annotated with annotation information. In another embodiment, each local region may be labeled with labeling information, that is, one local region may be labeled with labeling information as a whole. The label information is, for example, classification information. For example, the classification information of each pixel point in the second sample image, or the classification information of each local area, etc.
In the present embodiment, the reference relationship parameter of the area combination indicates an actual difference situation between the local areas in the area combination. The actual difference condition may be a difference condition of the classification information of each pixel point in the region combination, or a difference condition of the classification information between local regions. In a specific embodiment, in the case that the label information is label information at a pixel level, the actual difference condition can be obtained by comparing the number of pixels belonging to each category. When the label information is label information of the entire local region, the actual difference can be obtained by comparing the number of local regions belonging to each classification.
In one embodiment, the reference relation parameter of the area combination is in positive correlation with the actual degree of class similarity between the local areas in the area combination. That is, it can be considered that the larger the reference relationship parameter is, the larger the degree of actual category similarity between the local regions is. When the classification information includes at least three types of classifications, the actual class similarity may be the similarity of a certain class between local regions, or may be the similarity obtained by integrating some or all of the classes between local regions. For example, the classification information includes four classifications, and the actual class similarity degree may be a class similarity degree of one class between local regions or a class similarity degree of all classes. The determination method of the category similarity degree may refer to a calculation method of an actual difference condition, and is not described herein again. Therefore, the reference relation parameter of the area combination is set to be in positive correlation with the actual class similarity between the local areas in the area combination, so that the actual class similarity between the local areas in the area combination can be intuitively reflected through the reference relation parameter.
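The disclosure does not fix a formula for the reference relation parameter; one plausible instantiation that is positively correlated with the actual class similarity of two local regions is the overlap of their annotated class histograms, sketched below (the definition and names are assumptions).

```python
import torch

def reference_relation(labels_a: torch.Tensor, labels_b: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Reference relation parameter for a combination of two local regions, defined here
    as the overlap of the class distributions of their pixel-level annotations: the more
    similar the classes of the two regions, the larger the returned value (maximum 1.0)."""
    hist_a = torch.bincount(labels_a.flatten(), minlength=num_classes).float()
    hist_b = torch.bincount(labels_b.flatten(), minlength=num_classes).float()
    hist_a = hist_a / hist_a.sum()
    hist_b = hist_b / hist_b.sum()
    return torch.minimum(hist_a, hist_b).sum()
```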
Step S64: and adjusting the network parameters of the feature extraction network by using the feature similarity and the reference relation parameters of each group of regional combinations.
In this application, the feature similarity of each group of region combinations can be regarded as similarity at the feature-information level. Processing the feature similarity of each group with the reference relationship parameters corrects it according to the difference in annotation information between the local regions of the region combination, yielding a more accurate similarity of the feature information of those local regions. The network parameters of the feature extraction network can then be adjusted with a contrastive learning method, thereby training the feature extraction network.
Therefore, correcting the feature similarity with the reference relationship parameters of the region combinations makes it possible to compare the similarity of different regions within the same image, so that the feature extraction network can be trained with contrastive learning applied to a single image. This broadens the applicable range of contrastive training and improves the accuracy with which the feature extraction network extracts information from the same image. In addition, applying contrastive learning within one image suits the image segmentation task better than contrasting different images, which improves the image processing accuracy of the image processing model.
In one embodiment, the feature extraction network is a part of the image processing model, and the feature extraction network may be an entire encoder or a part of the encoder. The image processing model is used for predicting the second sample image based on the sample feature map of the second sample image, for example, classifying pixel points of the second sample image, so as to implement segmentation of the second sample image. In addition, the training method of the feature extraction network is executed in the pre-training stage of the image processing model. Therefore, the training method of the feature extraction network is applied to the pre-training stage of the image processing model, and is beneficial to improving the pre-training effect, so that the training effect of the subsequent image processing model is improved.
In an embodiment, the labeling information of the area combination includes a preset classification to which the pixel point in each local area of the area combination belongs. The preset classification may be a predetermined classification. Such as dogs, cats, people, and background, etc.
In an embodiment, the step "determining the reference relationship parameter of each group of area combinations based on the label information of each group of area combinations" mentioned in the above embodiment specifically includes steps S71 to S73 (not shown).
Step S71: the respective area combinations are set as target area combinations.
When the reference relationship parameters of each group of area combinations are obtained, each area combination can be used as a target area combination, so that each target area combination can be processed subsequently.
Step S72: and respectively determining the category parameters of each target local area about each preset classification based on the labeling information of the target area combination.
In the present embodiment, the target local region is a local region in the target region combination. For example, if a region combination includes two local regions, the target region combination may include two target local regions after the region combination is used as the target region combination.
The labeling information of this embodiment may be classification information, specifically, classification information of each pixel point of the target local region, or classification information of the target local region as a whole. The category parameter related to the preset classification represents the condition of the pixel points belonging to the preset classification in the target local region, namely the category parameter can reflect the condition that each pixel point in the target local region belongs to the preset classification.
In an embodiment, when the annotation information is the classification information of each pixel point of the target local region, the number of pixel points belonging to a preset classification in the target local region may be counted, and the category parameter of that preset classification may then be determined from this number. For example, if the preset classifications include dog, cat, person and background, the category parameter of each target local region with respect to dog, cat, person and background may be determined from the number of pixel points in the target local region belonging to dog, cat, person and background, respectively. In another embodiment, when the annotation information is the classification information of the target local region as a whole, the category parameter of each preset classification may be determined from the specific classification of the target local region. For example, if the preset classifications include dog, cat and background, the category parameters of the target local region with respect to dog, cat and background can be determined from the specific classification of the target local region.
In one embodiment, the step of "respectively determining the category parameter of each target local area with respect to each preset classification based on the labeling information of the target area combination" specifically includes step S721 and step S722 (not shown).
Step S721: and taking each preset classification as a target classification.
When the category parameter of each preset category is obtained, each preset category can be used as a target category, so that each target category can be processed subsequently and respectively.
Step S722: for each target local area, counting the number of pixel points belonging to the target classification in the target local area, and determining the category parameter of the target local area belonging to the target classification based on the number of the pixel points belonging to the target classification.
In this embodiment, the label information is classification information of each pixel point of the target local region. At this time, for each target local region, the number of pixels belonging to the target classification in the target local region is counted, for example, the preset classification includes dogs, cats, people and backgrounds, the size of the target local region is 10 × 10, and 100 pixels are counted, the number of pixels belonging to the dogs in the target local region is 25, the number of pixels belonging to the cats is 30, the number of pixels belonging to the people is 35, and the number of pixels belonging to the backgrounds is 10. Further, the category parameter of the target local area belonging to the target classification can be further determined based on the number of the pixel points belonging to the target classification. For example, the category parameter of a certain target classification may be determined according to the number of pixels of the certain target classification and the number of pixels of other target classifications.
In a specific embodiment, a ratio between the number of pixels belonging to the target classification and the total number of pixels in the target local region may be used as a classification parameter that the target local region belongs to the target classification. Specifically, the category parameter of the target classification can be obtained by the following formula (6).
c_i^m = R_i^m / R_i        (6)

where c_i^m represents the category parameter of the target local region i with respect to the target classification m, R_i represents the number of all pixel points of the target local region, and R_i^m represents the number of pixel points belonging to the classification m in the target local region. For example, if m is the classification "cat" in the example above, the category parameter of "cat" is determined to be 0.3.
Therefore, the ratio of the number of the pixel points belonging to the target classification to the total number of the pixel points of the target local area is used as the category parameter of the target local area belonging to the target classification, so that the category parameter is obtained.
In other specific embodiments, the ratio between the number of pixel points belonging to a certain target classification and the number of pixel points belonging to the other target classifications may also be used as the category parameter, which is not limited herein.
Therefore, the number of the pixel points belonging to the target classification in the target local area is counted, so that the category parameter of the target local area belonging to the target classification can be determined subsequently based on the number of the pixel points belonging to the target classification, and the category parameter can be determined by utilizing the labeling information.
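As a concrete illustration of steps S721 to S722 and formula (6), the category parameters of one local region can be read off a pixel-level label mask. The following is a minimal sketch, assuming NumPy integer label masks; the function name and the class indices are hypothetical and not taken from the original disclosure.

```python
import numpy as np

def category_parameters(region_labels: np.ndarray, num_classes: int) -> np.ndarray:
    """Fraction of pixels of one local region that belongs to each preset
    classification (formula (6))."""
    counts = np.bincount(region_labels.ravel(), minlength=num_classes)
    return counts / region_labels.size

# Example: a 10x10 region with 25 "dog", 30 "cat", 35 "person" and 10 "background" pixels.
labels = np.repeat([0, 1, 2, 3], [25, 30, 35, 10]).reshape(10, 10)
print(category_parameters(labels, num_classes=4))  # -> [0.25 0.3  0.35 0.1 ]
```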
Step S73: and obtaining a reference relation parameter of the target area combination based on the category parameter.
In this embodiment, since the category parameter is obtained from the labeling information of the target region combination, specifically the labeling information of each target local region, the category parameter can be considered to reflect the classification condition of each target local region. Therefore, based on the category parameters, the classification difference of each preset classification within the target region combination can be obtained, and further the reference relationship parameter of the target region combination can be obtained. Specifically, the reference relationship parameter of the target region combination may be obtained by comparing the category parameters of each preset classification across the different target local regions of the same target region combination.
Therefore, the reference relation parameters of the target area combination can be obtained by respectively determining the category parameters of each target local area about each preset classification based on the labeling information of the target area combination, and the reference relation parameters are determined by using the labeling information.
In one embodiment, the aforementioned step of "obtaining the reference relationship parameter of the target region combination by using the category parameters" specifically includes step S731 and step S732 (not shown).
Step S731: and for each preset classification, obtaining the classification parameter difference of the target area combination relative to the preset classification based on the classification parameter of each target local area belonging to the preset classification.
For each preset classification, since the category parameter of each target local region reflects the classification condition for that classification, the category parameter difference of the target region combination with respect to the preset classification can be obtained based on the category parameters of the target local regions belonging to that classification. The category parameter difference can be regarded as reflecting the differing aspects of the target region combination with respect to the preset classification.
In one embodiment, the difference between the category parameters of the target local regions for the preset classification may be used as the category parameter difference of the target region combination with respect to that classification. Specifically, the category parameter difference can be obtained by the following formula (7).
Φ_m = | c_i^m − c_j^m |        (7)

where c_i^m represents the category parameter of the target local region i belonging to the preset classification m, c_j^m represents the category parameter of the target local region j belonging to the preset classification m, and Φ_m represents the category parameter difference of the target region combination with respect to the preset classification m. Therefore, taking the difference between the category parameters of the target local regions for the preset classification as the category parameter difference of the target region combination with respect to that classification realizes the determination of the category parameter difference using the category parameters.
In other specific embodiments, the ratio of the category parameters of the target local regions for the preset classification may be used as the category parameter difference, or the difference may be determined by another calculation method, which is not limited herein.
Step S732: and obtaining a reference relation parameter of the target area combination based on the category parameter difference of the target area combination about each preset classification.
After the class parameter difference corresponding to each preset classification is obtained, it means that the difference condition of the target area combination in each preset classification has been determined, and at this time, fusion processing may be performed based on the class parameter difference of the target area combination with respect to each preset classification, so as to obtain the reference relationship parameter of the target area combination. The fusion process is, for example, summation, weighted summation, averaging, or the like, and the calculation method is not limited.
Therefore, by obtaining the category parameter difference of the target area combination with respect to the preset classifications, the reference relationship parameter of the target area combination can be obtained based on the category parameter difference of the target area combination with respect to each preset classification, so that the reference relationship parameter can reflect the actual difference situation between the local areas in the target area combination.
In an embodiment, the step of "obtaining the reference relationship parameter of the target area combination based on the category parameter difference of the target area combination with respect to each preset classification" specifically includes step S7321 and step S7322 (not shown).
Step S7321: and acquiring a statistical value of the category parameter difference of the target area combination about each preset classification.
In this embodiment, the statistical value may be a common statistic such as a mean, a median or a mode. In one example, the statistical value is the mean.
Therefore, by acquiring the statistical value, an overall numerical measure of the category parameter differences of the target region combination across the preset classifications is obtained.
Step S7322: and obtaining the reference relation parameters of the target area combination by using the statistical values of the target area combination.
In this embodiment, the reference relationship parameter of the target region combination is negatively correlated with the statistical value of the target region combination. Setting the reference relationship parameter to be negatively correlated with the statistical value lets it represent how alike the local regions of the target region combination are with respect to the preset classifications.
In one embodiment, the reference relationship parameter of the target region combination may be calculated by the following formula (8).
w_ij = 1 − (1/|M|) Σ_{m∈M} Φ_m        (8)

where M represents the set of all preset classifications, m represents a specific preset classification, Φ_m represents the category parameter difference of the target region combination with respect to the preset classification m, and w_ij represents the reference relationship parameter of the target region combination. In this embodiment, w_ij can be regarded as the similarity weight of the different target local regions in the target region combination, reflecting how alike the target local regions are with respect to all preset classifications.
Therefore, by obtaining the statistical value of the target area combination about the category parameter difference of each preset classification and obtaining the reference relation parameter of the target area combination according to the statistical value of the target area combination, the reference relation parameter for the actual difference situation between the local areas in the area combination can be obtained.
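As a hedged sketch of how formulas (7) and (8) fit together, the per-class category parameter difference can be taken as the absolute gap between the two regions' category parameters, and the reference relationship parameter as one minus their mean; the function name and the exact arithmetic are illustrative assumptions consistent with the description above.

```python
import numpy as np

def reference_relation_parameter(cat_params_i: np.ndarray,
                                 cat_params_j: np.ndarray) -> float:
    """Reference relationship parameter w_ij of a two-region combination."""
    # Formula (7): per-class category parameter difference Phi_m.
    phi = np.abs(cat_params_i - cat_params_j)
    # Formula (8): statistic (here the mean) of the per-class differences,
    # negatively correlated with w_ij.
    return float(1.0 - phi.mean())

w_ij = reference_relation_parameter(np.array([0.25, 0.30, 0.35, 0.10]),
                                    np.array([0.20, 0.40, 0.30, 0.10]))
print(w_ij)  # 0.95 -- close to 1 when the two regions have a similar class make-up
```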
In an embodiment, the step "adjusting the network parameters of the feature extraction network by using the feature similarity of each group of the area combinations and the reference relationship parameter" mentioned in the above embodiments specifically includes steps S81 to S83 (not shown).
Step S81: taking each group of region combinations as a target region combination in turn, adjusting the feature similarity of the target region combination by using the reference relationship parameter of the target region combination to obtain the reference feature similarity of the target region combination, and obtaining the first loss of the target region combination based on the reference feature similarity of the target region combination;
in one embodiment, the target region combination includes two target local regions, and the feature similarity of the target region combination can be calculated by the following formula (9).
s_ij = sim(v_i, v_j)        (9)

where v_i represents the feature information of the target local region i of the target region combination, v_j represents the feature information of the target local region j, and sim(·, ·) denotes cosine similarity.
In other embodiments, the feature similarity of the target region combination may be determined by another feature similarity calculation method.
In this embodiment, the feature similarity of the target region combination is adjusted with the reference relationship parameter of the target region combination: the reference relationship parameter is used as a reference quantity and, after any required processing, is multiplied by the feature similarity to obtain the reference feature similarity of the target region combination.
In one embodiment, the product of the reference relationship parameter of the target region combination and the feature similarity of the target region combination may be used as the reference feature similarity of the target region combination. Therefore, the product of the reference relation parameter of the target area combination and the feature similarity of the target area combination is used as the reference feature similarity of the target area combination, so that the feature similarity is processed by using the reference relation parameter, and the obtained reference feature similarity can more accurately reflect the feature similarity between target local areas in the target area combination.
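A minimal sketch of formula (9) and of the reference feature similarity follows; cosine similarity is named in the text, while the helper names and the small epsilon guard are assumptions.

```python
import numpy as np

def cosine_similarity(v_i: np.ndarray, v_j: np.ndarray) -> float:
    """Feature similarity s_ij of a two-region combination (formula (9))."""
    return float(v_i @ v_j / (np.linalg.norm(v_i) * np.linalg.norm(v_j) + 1e-12))

def reference_feature_similarity(v_i: np.ndarray, v_j: np.ndarray, w_ij: float) -> float:
    """Feature similarity scaled by the reference relationship parameter."""
    return w_ij * cosine_similarity(v_i, v_j)
```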
Once the reference feature similarity of the target region combination is obtained, the feature similarity between the target local regions in the combination has been determined, and the first loss of the target region combination, in terms of similarity, can then be determined with a contrastive learning method.
In one embodiment, the step of obtaining the first loss of the target region combination based on the similarity of the reference features of the target region combination specifically includes: step S811 and step S812.
Step S811: and correspondingly obtaining the auxiliary feature similarity of each auxiliary area combination by using the feature similarity and the auxiliary relation parameter of at least one auxiliary area combination.
In the present embodiment, at least one same local region exists in the auxiliary region combination and the target region combination. The auxiliary area combination is also composed of at least two local areas, which may contain the same number of local areas as the target area combination. In one embodiment, the target area combination and the auxiliary area combination each comprise two local areas. In an embodiment, one local area in the target area combination is a reference area, and the auxiliary area combination is an area combination including the reference area.
In the present embodiment, the auxiliary relationship parameter of an auxiliary region combination is obtained based on the reference relationship parameter of that auxiliary region combination. The reference relationship parameter of the auxiliary region combination is calculated in the same way as the reference relationship parameter of the target region combination described in the foregoing embodiment, which is not repeated here. In a specific embodiment, the sum of the auxiliary relationship parameter and the reference relationship parameter of the auxiliary region combination is a preset value, for example 1. Because their sum is a preset value, the auxiliary relationship parameter and the reference relationship parameter are negatively correlated, so that they can respectively represent the alike and the differing aspects of the local regions with respect to all preset classifications. When the reference relationship parameter embodies how alike the target local regions in the target region combination are with respect to all preset classifications, the auxiliary relationship parameter represents how they differ.
After the auxiliary relationship parameters of the auxiliary area combination are obtained, the auxiliary feature similarity of the auxiliary area combination can be correspondingly obtained. In one example, the feature similarity of the auxiliary region combination may be calculated by using a feature similarity calculation method, and a product of the feature similarity of the auxiliary region combination and the auxiliary relationship parameter may be used as the auxiliary feature similarity.
Step S812: and obtaining a first loss corresponding to the target area combination based on the reference feature similarity and the auxiliary feature similarity.
The auxiliary relationship parameters and the reference relationship parameters can be used to represent, respectively, the alike and the differing aspects of the local regions with respect to all preset classifications, so the auxiliary feature similarity and the reference feature similarity obtained from them correspondingly represent the alike and differing aspects of the target region combination and the auxiliary region combinations. Therefore, the first loss corresponding to the target region combination can be obtained from the reference feature similarity and the auxiliary feature similarity with a contrastive learning method. The first loss can be regarded as the loss, in terms of feature similarity, of the feature extraction network when extracting features from the same image information.
In a specific embodiment, the step of "obtaining the first loss corresponding to the target area combination based on the reference feature similarity and the assistant feature similarity" may specifically include steps S8121 to S8123 (not shown).
Step S8121: and performing preset operation on the reference feature similarity to obtain a first operation result.
Step S8122: and respectively carrying out preset operation on the auxiliary feature similarity of each auxiliary area combination to obtain a second operation result corresponding to each auxiliary area combination.
Step S8123: and obtaining a first loss corresponding to the target area combination based on the ratio of the first operation result to the sum of the second operation results corresponding to the auxiliary area combinations.
The preset operation is, for example, an exponential operation with the natural constant e as the base, or another operation, which is not limited herein.
In the case that there are a plurality of auxiliary region combinations, the second operation results corresponding to the individual auxiliary region combinations may be summed to obtain the sum of the second operation results over all auxiliary region combinations.
In one embodiment, the first loss L1 can be calculated by the following formula (10):

L1 = −log[ exp(w_ij · s_ij / τ) / Σ_{k∈N} exp((1 − w_ik) · s_ik / τ) ]        (10)

where s_ij represents the feature similarity of the target region combination, s_ik represents the feature similarity of an auxiliary region combination, w_ij represents the reference relationship parameter, 1 − w_ik represents the auxiliary relationship parameter, N represents the set of all local regions, τ is an adjustable parameter, and exp is the preset operation, namely the exponential function with the natural constant e as its base.
In formula (10), the local region shared by the target region combination and the auxiliary region combinations is the local region i. The auxiliary region combinations are the combinations formed by the local region i with each of the local regions (including itself). When the reference relationship parameter represents how alike the target local regions of the target region combination are with respect to all preset classifications, the first loss can be computed from the feature similarity of the target region combination s_ij and the feature dissimilarity of the auxiliary region combinations, which realizes the comparison process of contrastive learning.
Therefore, the feature similarity and the auxiliary relationship parameter of at least one auxiliary region combination are used to obtain the auxiliary feature similarity of each auxiliary region combination, and the first loss of the target region combination can then be obtained from the reference feature similarity and the auxiliary feature similarity, giving the loss of the feature extraction network, in terms of feature similarity, when extracting features from the same image information.
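The per-combination contrastive loss described in steps S8121 to S8123 and formula (10) can be sketched as follows, assuming the region features are stacked row-wise in a NumPy array and τ is a plain scalar; the array layout and the function name are illustrative assumptions.

```python
import numpy as np

def first_loss(features: np.ndarray, w: np.ndarray, i: int, j: int,
               tau: float = 0.1) -> float:
    """First loss of the target region combination (i, j), formula (10).

    features: (N, D) feature vectors of all N local regions.
    w: (N, N) reference relationship parameters of every region pair.
    """
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    sims = f @ f[i]                                 # cosine similarity s_ik of region i with every region k
    ref = np.exp(w[i, j] * sims[j] / tau)           # reference feature similarity term
    aux = np.exp((1.0 - w[i]) * sims / tau).sum()   # auxiliary feature similarity terms
    return float(-np.log(ref / aux))
```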
Step S82: obtaining a second loss of the feature extraction network based on the first loss of each group of regional combinations;
after each group of region combinations has been taken as the target region combination, the first loss of each group is available. The first losses of all region combinations can then be used to compute the similarity loss of the feature extraction network over all local regions, that is, the similarity loss over the whole sample feature map, thereby applying contrastive learning to a single image.
In one embodiment, the second loss L2 may be calculated using the following formula (11):

L2 = −(1/|N|²) Σ_{i∈N} Σ_{j∈N} log[ exp(w_ij · s_ij / τ) / Σ_{k∈N} exp((1 − w_ik) · s_ik / τ) ]        (11)
The meaning of each parameter of formula (11) can refer to the description of formula (10) above, and is not described herein again.
In formula (11), the local regions of the second sample image are combined two by two to obtain the auxiliary region combinations, and each target region combination s_ij is compared with the entirety of the auxiliary region combinations formed from all local regions. When the reference relationship parameter represents how alike the target local regions are with respect to all preset classifications, the feature similarity of the target region combination s_ij is contrasted with the differing feature aspects of all the auxiliary region combinations, so that contrastive learning improves the accuracy with which the feature extraction network extracts information from the same image.
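Under the same assumptions, and reusing the first_loss helper sketched for formula (10), the second loss of formula (11) simply averages the per-pair loss over all two-by-two region combinations:

```python
import numpy as np

def second_loss(features: np.ndarray, w: np.ndarray, tau: float = 0.1) -> float:
    """Second loss over the whole sample feature map (formula (11))."""
    n = features.shape[0]
    pair_losses = [first_loss(features, w, i, j, tau)
                   for i in range(n) for j in range(n)]
    return float(np.mean(pair_losses))
```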
Step S83: and adjusting the network parameters of the feature extraction network by using the second loss.
After the second loss is obtained, the network parameters of the feature extraction network are adjusted, realizing training of the feature extraction network based on contrastive learning.
Therefore, the first loss of each target region combination is obtained from its reference feature similarity, and the second loss of the feature extraction network, covering the similarity of the whole sample feature map, is obtained from the first losses of all region combinations. Adjusting the network parameters of the feature extraction network with the second loss applies contrastive learning within one image and completes the training of the feature extraction network.
Referring to fig. 7, fig. 7 is a schematic diagram of the overall training flow of the training method of the image processing model of the present application. In fig. 7, the second sample image 21 is input into the image processing model 22, which outputs the detection result 23 of the second sample image 21; a loss value is then obtained from the detection result 23 and the annotation information 24 of the second sample image 21, thereby training the image processing model. In fig. 7, the image processing model 22 includes an encoder 221 and a decoder 222; the encoder 221 includes three intermediate network layers 2211, and the decoder 222 includes three intermediate network layers 2222. Any of the intermediate network layers 2211 or 2222 may be trained with the training method described in the above embodiments of the training method of the image processing model.
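For orientation only, the supervised loop of fig. 7 reduces to a predict-compare-update step; the sketch below assumes a PyTorch-style model and a pixel-wise cross-entropy loss, neither of which is fixed by the text.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, sample_image, label_map):
    """One supervised update of the image processing model, as in fig. 7."""
    optimizer.zero_grad()
    detection = model(sample_image)               # detection result of the second sample image
    loss = F.cross_entropy(detection, label_map)  # compare with the annotation information
    loss.backward()
    optimizer.step()
    return loss.item()
```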
Referring to fig. 8, fig. 8 is a schematic diagram of a frame of an embodiment of an image processing apparatus according to the present application. The image processing apparatus 30 includes an acquisition module 31, a processing module 32, and an output module 33. The obtaining module 31 is configured to perform feature extraction on the target image to obtain an original feature map; the processing module 32 is configured to perform feature optimization on corresponding feature regions in the original feature map by using uncertainty parameters corresponding to the feature regions in the original feature map to obtain a target feature map, where in the target feature map, the influence of target feature information corresponding to the feature regions is related to the uncertainty parameters corresponding to the feature regions; the output module 33 is configured to obtain a detection result of the target image based on the target feature map.
The processing module 32 is configured to perform feature optimization on the corresponding feature region in the original feature map by using the uncertainty parameter corresponding to each feature region in the original feature map, so as to obtain a target feature map, and includes: obtaining the corresponding certainty parameters of each characteristic region based on the corresponding uncertainty parameters of each characteristic region; and correspondingly adjusting the original characteristic information of each characteristic region in the original characteristic diagram by using the corresponding deterministic parameter of each characteristic region to obtain the target characteristic information of each characteristic region in the target characteristic diagram.
The processing module 32 is configured to obtain the certainty parameter corresponding to each feature region based on the uncertainty parameter corresponding to each feature region, and includes: for each feature region, taking the difference between the first numerical value and the uncertainty parameter corresponding to the feature region as the certainty parameter corresponding to the feature region;
the processing module 32 is configured to correspondingly adjust the original feature information of each feature region in the original feature map by using the certainty parameter corresponding to each feature region, so as to obtain the target feature information of each feature region in the target feature map, and includes: for each characteristic region, acquiring the sum of the certainty parameter corresponding to the characteristic region and the second numerical value as the adjustment weight of the characteristic region; and weighting the original characteristic information of the characteristic region by using the adjustment weight of the characteristic region to obtain the target characteristic information of the characteristic region.
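The feature-optimization path handled by the processing module 32 can be sketched as follows; treating the first and second numerical values as 1 is an illustrative assumption, as are the array shapes and the function name.

```python
import numpy as np

def optimize_features(original_features: np.ndarray, uncertainty: np.ndarray,
                      first_value: float = 1.0, second_value: float = 1.0) -> np.ndarray:
    """Feature optimization of each feature region.

    original_features: (H, W, C) original feature map.
    uncertainty: (H, W) uncertainty parameter of each feature region (feature point).
    """
    certainty = first_value - uncertainty         # certainty parameter of each region
    weight = certainty + second_value             # adjustment weight of each region
    return original_features * weight[..., None]  # weighted target feature information
```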
The image processing apparatus 30 further includes an uncertainty parameter determining module, where before the processing module 32 is configured to perform feature optimization on the corresponding feature region in the original feature map by using the uncertainty parameter corresponding to each feature region in the original feature map to obtain the target feature map, the uncertainty parameter determining module is configured to determine the uncertainty parameter corresponding to each feature region based on the initial feature information in the original feature map.
The uncertainty parameter determining module is configured to determine uncertainty parameters corresponding to each feature region based on initial feature information in the original feature map, and includes: transforming the initial characteristic information in the original characteristic diagram to obtain a characteristic confidence coefficient corresponding to each characteristic region; and obtaining uncertainty parameters corresponding to the characteristic regions based on the characteristic confidence degrees corresponding to the characteristic regions.
The feature confidence corresponding to the feature region comprises a category confidence of a plurality of channels, and the category confidence of each channel represents the confidence that the feature region belongs to one corresponding category; the uncertainty parameter determining module is configured to obtain uncertainty parameters corresponding to each feature region based on the feature confidence corresponding to each feature region, and includes: and for each characteristic region, performing information entropy processing based on the category confidence of a plurality of channels corresponding to the characteristic region to obtain an uncertainty parameter corresponding to the characteristic region.
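A minimal sketch of the information-entropy step performed by the uncertainty parameter determining module, assuming the per-channel class confidences of each feature region are already normalized probabilities (the normalization itself is an assumption):

```python
import numpy as np

def uncertainty_from_confidence(class_confidence: np.ndarray) -> np.ndarray:
    """Uncertainty parameter of each feature region via information entropy.

    class_confidence: (H, W, K) confidence that each feature region belongs
    to each of K categories, assumed to sum to 1 over the last axis.
    """
    p = np.clip(class_confidence, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)  # higher entropy -> higher uncertainty
```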
Each feature point in the original feature map is used as a feature area; and/or the image processing method is performed by the image processing model.
Referring to fig. 9, fig. 9 is a schematic diagram of a framework of an embodiment of an image processing model training apparatus according to the present application. The training device 40 of the image processing model comprises an obtaining module 41, a determining module 42 and an adjusting module 43, wherein the obtaining module 41 is configured to use the first sample image as a target image, and perform the method described in the above embodiment of the image processing method on the target image by using the image processing model to obtain a first detection result of the first sample image; the determining module 42 is configured to obtain a first loss value based on the first detection result and the first annotation information of the first sample image; the adjusting module 43 is configured to adjust a network parameter of the image processing model based on the first loss value.
The uncertainty parameter corresponding to each feature region in the original feature map of the first sample image is determined based on the feature confidence corresponding to each feature region, and the feature confidence corresponding to the feature region is used for determining the category to which the feature region belongs; the training apparatus 40 for image processing models further includes a second loss determining module, before the adjusting module 43 is configured to adjust the network parameters of the image processing model based on the first loss value, the second loss determining module is configured to obtain a second detection result of the first sample image based on the feature confidence degree corresponding to each feature region, where the second detection result represents a category to which the first sample image belongs; obtaining a second loss value by using second marking information and a second detection result corresponding to the original characteristic diagram; the adjusting module 43 is configured to adjust a network parameter of the image processing model based on the first loss value, and includes: network parameters of the image processing model are adjusted based on the first loss value and the second loss value.
The training device 40 for the image processing model further includes a pre-training module, where the pre-training module is configured to perform feature extraction on the second sample image by using a feature extraction network in the image processing model to obtain a sample feature map of the second sample image; determining feature similarity of a plurality of groups of region combinations by using a sample feature map, wherein the sample image comprises a plurality of local regions, and each group of region combinations comprises at least two local regions; determining a reference relation parameter corresponding to each group of area combination based on the labeling information of each group of area combination, wherein the reference relation parameter of the area combination represents the actual difference condition between local areas in the area combination; and adjusting the network parameters of the feature extraction network by using the feature similarity and the reference relation parameters of each group of regional combinations.
Referring to fig. 10, fig. 10 is a schematic frame diagram of an embodiment of an electronic device of the present application. The electronic device 50 comprises a memory 51 and a processor 52 coupled to each other, and the processor 52 is configured to execute program instructions stored in the memory 51 to implement the steps of any of the image processing method embodiments above, or of the image processing model training method embodiments. In one particular implementation scenario, the electronic device 50 may include, but is not limited to, a microcomputer or a server, and may also include mobile devices such as a notebook computer or a tablet computer, which is not limited herein.
In particular, the processor 52 is configured to control itself and the memory 51 to implement the steps of any of the image processing method embodiments above, or of the image processing model training method embodiments. The processor 52 may also be referred to as a CPU (Central Processing Unit). The processor 52 may be an integrated circuit chip having signal processing capabilities. The processor 52 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 52 may be jointly implemented by integrated circuit chips.
Referring to fig. 11, fig. 11 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application. The computer readable storage medium 50 stores program instructions 51 executable by the processor, the program instructions 51 being for implementing any of the image processing methods described above, or steps in an embodiment of an image processing model training method.
In some embodiments, functions of or modules included in the apparatus provided in the embodiments of the present disclosure may be used to execute the method described in the above method embodiments, and specific implementation thereof may refer to the description of the above method embodiments, and for brevity, will not be described again here.
According to the scheme, the corresponding characteristic regions in the original characteristic diagram are subjected to characteristic optimization by using the uncertainty parameters corresponding to the characteristic regions in the original characteristic diagram, so that the influence on the characteristic information of the characteristic regions is distinguished, the robustness of pixels with high uncertainty in the target image is improved, and the accuracy of the detection result of the target image is improved.
The foregoing description of the various embodiments is intended to highlight various differences between the embodiments, and the same or similar parts may be referred to each other, and for brevity, will not be described again herein.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed coupling or direct coupling or communication connection between each other may be through some interfaces, indirect coupling or communication connection between devices or units, and may be in an electrical, mechanical or other form.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
If the technical scheme of the application relates to personal information, a product applying the technical scheme of the application clearly informs personal information processing rules before processing the personal information, and obtains personal independent consent. If the technical scheme of the application relates to sensitive personal information, a product applying the technical scheme of the application obtains individual consent before processing the sensitive personal information, and simultaneously meets the requirement of 'express consent'. For example, at a personal information collection device such as a camera, a clear and significant identifier is set to inform that the personal information collection range is entered, the personal information is collected, and if the person voluntarily enters the collection range, the person is regarded as agreeing to collect the personal information; or on the device for processing the personal information, under the condition of informing the personal information processing rule by using obvious identification/information, obtaining personal authorization by modes of popping window information or asking a person to upload personal information of the person by himself, and the like; the personal information processing rule may include information such as a personal information processor, a personal information processing purpose, a processing method, and a type of personal information to be processed.

Claims (14)

1. An image processing method, comprising:
extracting the features of the target image to obtain an original feature map;
performing feature optimization on corresponding feature areas in the original feature map by using uncertainty parameters corresponding to the feature areas in the original feature map to obtain a target feature map, wherein in the target feature map, the influence of target feature information corresponding to the feature areas is related to the uncertainty parameters corresponding to the feature areas;
and obtaining a detection result of the target image based on the target feature map.
2. The method according to claim 1, wherein the performing feature optimization on the corresponding feature region in the original feature map by using the uncertainty parameter corresponding to each feature region in the original feature map to obtain a target feature map comprises:
obtaining a certainty parameter corresponding to each characteristic region based on the uncertainty parameter corresponding to each characteristic region;
and correspondingly adjusting the original characteristic information of each characteristic region in the original characteristic diagram by using the corresponding certainty parameter of each characteristic region to obtain the target characteristic information of each characteristic region in the target characteristic diagram.
3. The method of claim 2, wherein obtaining the certainty parameter corresponding to each of the feature regions based on the uncertainty parameter corresponding to each of the feature regions comprises:
for each feature region, taking a difference value between a preset first numerical value and an uncertain parameter corresponding to the feature region as a deterministic parameter corresponding to the feature region;
the obtaining of the target feature information of each feature region in the target feature map by correspondingly adjusting the original feature information of each feature region in the original feature map by using the certainty parameter corresponding to each feature region includes:
for each characteristic region, acquiring the sum of the certainty parameter corresponding to the characteristic region and a second numerical value as the adjustment weight of the characteristic region;
and weighting the original characteristic information of the characteristic region by using the adjustment weight of the characteristic region to obtain the target characteristic information of the characteristic region.
4. The method according to any one of claims 1 to 3, wherein before the performing feature optimization on the corresponding feature region in the original feature map by using the uncertainty parameter corresponding to each feature region in the original feature map to obtain the target feature map, the method further comprises:
and determining uncertainty parameters corresponding to the characteristic regions based on the initial characteristic information in the original characteristic diagram.
5. The method of claim 4, wherein determining the uncertainty parameter corresponding to each of the feature regions based on the initial feature information in the original feature map comprises:
transforming the initial characteristic information in the original characteristic diagram to obtain a characteristic confidence coefficient corresponding to each characteristic region;
and obtaining uncertainty parameters corresponding to the characteristic regions based on the characteristic confidence degrees corresponding to the characteristic regions.
6. The method according to claim 5, wherein the feature confidence corresponding to the feature region comprises a class confidence of a plurality of channels, and the class confidence of each channel represents a confidence that the feature region belongs to a corresponding class;
the obtaining uncertainty parameters corresponding to each of the feature regions based on the feature confidence corresponding to each of the feature regions includes:
and for each characteristic region, performing information entropy processing based on the category confidence of a plurality of channels corresponding to the characteristic region to obtain uncertainty parameters corresponding to the characteristic region.
7. The method according to claim 1, wherein each feature point in the original feature map is taken as one of the feature regions;
and/or the image processing method is executed by an image processing model.
8. A method for training an image processing model, comprising:
taking a first sample image as a target image, and processing the target image by using the image processing model to execute the method of any one of claims 1 to 7 to obtain a first detection result of the first sample image;
obtaining a first loss value based on the first detection result and first labeling information of the first sample image;
adjusting a network parameter of the image processing model based on the first loss value.
9. The method according to claim 8, wherein the uncertainty parameter corresponding to each of the feature regions in the original feature map of the first sample image is determined based on a feature confidence corresponding to each of the feature regions, and the feature confidence corresponding to the feature regions is used for determining the category to which the feature regions belong;
prior to the adjusting network parameters of the image processing model based on the first penalty value, the method further comprises:
obtaining a second detection result of the first sample image based on the feature confidence corresponding to each feature region, wherein the second detection result represents the category to which the first sample image belongs;
obtaining a second loss value by using second marking information corresponding to the original characteristic diagram and the second detection result;
the adjusting network parameters of the image processing model based on the first loss value comprises:
adjusting a network parameter of the image processing model based on the first penalty value and the second penalty value.
10. The method according to claim 8 or 9, wherein before said processing the target image using the image processing model to perform the method of any of claims 1-7, the method further comprises:
performing feature extraction on a second sample image by using a feature extraction network in the image processing model to obtain a sample feature map of the second sample image;
determining feature similarity of a plurality of groups of region combinations by using the sample feature map, wherein the second sample image comprises a plurality of local regions, and each group of region combinations comprises at least two of the local regions; and
determining reference relation parameters corresponding to the area combinations of each group based on the labeling information of the area combinations of each group, wherein the reference relation parameters of the area combinations represent actual difference conditions among local areas in the area combinations;
and adjusting the network parameters of the feature extraction network by using the feature similarity and the reference relation parameters of each group of the regional combination.
11. An image processing apparatus characterized by comprising:
the acquisition module is used for extracting the features of the target image to obtain an original feature map;
the processing module is used for performing feature optimization on corresponding feature areas in the original feature map by using uncertainty parameters corresponding to the feature areas in the original feature map to obtain a target feature map, wherein in the target feature map, the influence of target feature information corresponding to the feature areas is related to the uncertainty parameters corresponding to the feature areas;
and the output module is used for obtaining the detection result of the target image based on the target characteristic diagram.
12. An apparatus for training an image processing model, comprising:
an obtaining module, configured to take a first sample image as a target image, and perform the method according to any one of claims 1 to 7 on the target image by using the image processing model to obtain a first detection result of the first sample image;
the determining module is used for obtaining a first loss value based on the first detection result and the first marking information of the first sample image;
an adjustment module to adjust a network parameter of the image processing model based on the first loss value.
13. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the image processing method of any one of claims 1 to 7 or to implement the training method of the image processing model of any one of claims 8 to 10.
14. A computer-readable storage medium, on which program instructions are stored, which program instructions, when executed by a processor, implement the image processing method of any one of claims 1 to 7, or implement the training method of the image processing model of any one of claims 8 to 10.
CN202210178699.5A 2022-02-25 2022-02-25 Image processing method and related model training method, device and equipment Withdrawn CN114549853A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210178699.5A CN114549853A (en) 2022-02-25 2022-02-25 Image processing method and related model training method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210178699.5A CN114549853A (en) 2022-02-25 2022-02-25 Image processing method and related model training method, device and equipment

Publications (1)

Publication Number Publication Date
CN114549853A true CN114549853A (en) 2022-05-27

Family

ID=81679579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210178699.5A Withdrawn CN114549853A (en) 2022-02-25 2022-02-25 Image processing method and related model training method, device and equipment

Country Status (1)

Country Link
CN (1) CN114549853A (en)

Similar Documents

Publication Publication Date Title
CN110399929B (en) Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium
Birenbaum et al. Longitudinal multiple sclerosis lesion segmentation using multi-view convolutional neural networks
CN110838125B (en) Target detection method, device, equipment and storage medium for medical image
US9330336B2 (en) Systems, methods, and media for on-line boosting of a classifier
CN112233117A (en) New coronary pneumonia CT detects discernment positioning system and computing equipment
CN110738235B (en) Pulmonary tuberculosis judging method, device, computer equipment and storage medium
Aranguren et al. Improving the segmentation of magnetic resonance brain images using the LSHADE optimization algorithm
Ganesan et al. Fuzzy-C-means clustering based segmentation and CNN-classification for accurate segmentation of lung nodules
WO2023138190A1 (en) Training method for target detection model, and corresponding detection method therefor
CN110930378B (en) Emphysema image processing method and system based on low data demand
Depeursinge et al. Comparative performance analysis of state-of-the-art classification algorithms applied to lung tissue categorization
WO2020161481A1 (en) Method and apparatus for quality prediction
CN114445356A (en) Multi-resolution-based full-field pathological section image tumor rapid positioning method
CN114693971A (en) Classification prediction model generation method, classification prediction method, system and platform
CN113240699B (en) Image processing method and device, model training method and device, and electronic equipment
CN111724356B (en) Image processing method and system for CT image pneumonia recognition
EP4115331A1 (en) Class-disparate loss function to address missing annotations in training data
CN112101456A (en) Attention feature map acquisition method and device and target detection method and device
Bocchi et al. Tissue characterization from X-ray images
CN114998980B (en) Iris detection method and device, electronic equipment and storage medium
CN114549853A (en) Image processing method and related model training method, device and equipment
CN114648658A (en) Training method, image processing method, related device, equipment and storage medium
CN111768367B (en) Data processing method, device and storage medium
Depeursinge et al. A classification framework for lung tissue categorization
CN114445679A (en) Model training method, related device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220527

WW01 Invention patent application withdrawn after publication