CN111028173B - Image enhancement method, device, electronic equipment and readable storage medium - Google Patents

Image enhancement method, device, electronic equipment and readable storage medium

Info

Publication number
CN111028173B
CN111028173B
Authority
CN
China
Prior art keywords
medical image
image
disease
determining
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911258287.7A
Other languages
Chinese (zh)
Other versions
CN111028173A (en)
Inventor
郭佳昌
陈俊
黄海峰
陆超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201911258287.7A
Publication of CN111028173A
Application granted
Publication of CN111028173B
Legal status: Active


Classifications

    • G06T 5/00: Image enhancement or restoration
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T 7/0012: Biomedical image inspection
    • G06N 3/045: Neural networks; combinations of networks
    • G06N 3/048: Neural networks; activation functions
    • G06N 3/08: Neural networks; learning methods
    • G06T 2207/10081: Computed x-ray tomography [CT]
    • G06T 2207/10116: X-ray image
    • G06T 2207/20081: Training; learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30004: Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Radiology & Medical Imaging (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present disclosure provides an image enhancement method, apparatus, electronic device and readable storage medium for medical images, including: extracting image features from the medical image, and determining disease information according to the image features; determining, according to the disease information, whether a disorder is present in the medical image; if so, determining an attention weight according to the disease information and the image features; and processing the medical image according to the attention weight to obtain an enhanced medical image. The method, apparatus, electronic device and readable storage medium provided by the embodiments enhance the image according to the disease information and the image features, so that the features in the image that match the disease characteristics are noticeably strengthened, which helps a doctor read the medical image.

Description

Image enhancement method, device, electronic equipment and readable storage medium
Technical Field
The present disclosure relates to image processing techniques, and in particular to image enhancement techniques.
Background
Currently, in the medical diagnosis process, reading medical images is one of the important diagnostic methods; for example, doctors can determine changes in internal tissues and structures as well as lesion positions by reading images such as X-ray films and CT scans.
In the prior art, doctors read the films according to experience and diagnose the patient accordingly. However, this approach depends on the doctor's experience and working state, missed detections often occur, and the accuracy of diagnosis is affected as a result.
Disclosure of Invention
The present disclosure provides an image enhancement method, apparatus, electronic device, and readable storage medium that facilitate a doctor's reading of medical images.
A first aspect of the present disclosure provides an image enhancement method of a medical image, including:
extracting image features included in the medical image, and determining disease information according to the image features;
determining whether a condition exists in the medical image according to the disease information;
if yes, determining attention weight according to the disease information and the image characteristics;
and processing the medical image according to the attention weight to obtain an enhanced medical image.
In an alternative embodiment, the determining disease information according to the image features includes:
and inputting the image characteristics into a full-connection layer, and determining the disease information through a Sigmoid function in the full-connection layer.
In an alternative embodiment, the determining the disease information by a Sigmoid function in the fully connected layer includes:
Processing the image features through a Sigmoid function in the full-connection layer, and determining scores corresponding to the medical images and disease categories;
and classifying the medical image according to the Sigmoid function and the score corresponding to the medical image and the disease category, and determining the disease category to which the medical image belongs.
In the method provided by the embodiment, the medical image can be classified through the Sigmoid function, and the disease category to which the medical image possibly belongs is determined.
In an alternative embodiment, said determining whether a condition is present in said medical image based on said disease information comprises:
and determining whether a disease corresponding to the disease category exists in the medical image according to the threshold value corresponding to the disease category and the score value corresponding to the medical image and the disease category to which the medical image belongs.
In the method provided by the embodiment, the threshold value of each disease category is preset, so that whether the medical image really belongs to a certain disease category can be accurately identified.
In an alternative embodiment, the determining the attention weight according to the disease information and the image features includes:
Determining an activation weight corresponding to the characteristics of each channel according to the characteristics of each channel in the image characteristics, the score corresponding to the disease category to which the medical image belongs and the threshold corresponding to the disease category;
the attention weight is determined according to the characteristics of each channel and the corresponding activation weight.
In the method provided in this embodiment, the attention weight may be determined by combining the image features with the score indicating that the medical image belongs to a certain disease category, so that the resulting weight carries the disease feature information.
In an alternative embodiment, the determining the attention weight according to the feature of each channel and the corresponding activation weight includes:
determining a weighted result corresponding to each characteristic value according to the characteristic value in the characteristic of each channel and the activation weight corresponding to the characteristic of the channel;
and adding weighted results of the characteristic values corresponding to different channels to obtain the attention weight corresponding to each characteristic value.
In an alternative embodiment, the processing the medical image according to the attention weight to obtain an enhanced medical image includes:
and determining the corresponding relation between the attention weight and each pixel point in the medical image, and weighting the pixel value of the corresponding pixel point by using the attention weight to obtain the enhanced medical image.
Fusing the attention weights, which carry the disease features, into the medical image enhances the potential disease features in the image, thereby facilitating the doctor's reading of the medical image.
A second aspect of the present disclosure provides an image enhancement apparatus for medical images, comprising:
the extraction module is used for extracting image features included in the medical image and determining disease information according to the image features;
a judging module for determining whether a disease exists in the medical image according to the disease information;
the weight determining module is used for determining attention weight according to the disease information and the image characteristics if the disease exists;
and the enhancement module is used for processing the medical image according to the attention weight to obtain an enhanced medical image.
A third aspect of the present disclosure provides an electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the image enhancement method of any one of the medical images as described in the first aspect.
A fourth aspect of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform any of the image enhancement methods of the medical image according to the first aspect.
The image enhancement method, apparatus, electronic device and readable storage medium for medical images provided by the present disclosure include: extracting image features from the medical image, and determining disease information according to the image features; determining, according to the disease information, whether a disorder is present in the medical image; if so, determining an attention weight according to the disease information and the image features; and processing the medical image according to the attention weight to obtain an enhanced medical image. The method, apparatus, electronic device and readable storage medium provided by the embodiments enhance the image according to the disease information and the image features, so that the features in the image that match the disease characteristics are noticeably strengthened, which helps a doctor read the medical image.
Drawings
The drawings are included to provide a better understanding of the present application and are not to be construed as limiting the application. Wherein:
FIG. 1 is a flow chart of a method of image enhancement of a medical image according to an exemplary embodiment of the present application;
FIG. 2 is a flow chart of a method of image enhancement of a medical image shown in another exemplary embodiment of the application;
FIG. 3 is a schematic representation of the characteristics of a plurality of channels according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram of attention weighting shown in an exemplary embodiment of the present application;
FIG. 5 is a schematic illustration of an enhanced medical image shown in accordance with an exemplary embodiment of the present application;
fig. 6 is a block diagram of an image enhancement apparatus of a medical image shown in an exemplary embodiment of the present application;
fig. 7 is a block diagram of an image enhancement apparatus of a medical image shown in another exemplary embodiment of the present application;
fig. 8 is a block diagram of an electronic device according to another exemplary embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will now be described with reference to the accompanying drawings, in which various details of the embodiments of the present application are included to facilitate understanding, and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Currently, doctors rely on medical images taken for patients when diagnosing the patients. The medical image may be, for example, an X-ray film, CT, nuclear magnetic resonance image, or the like. These medical images can reflect the condition of the internal tissues, structures of the patient, and the doctor can read the film empirically to determine the condition of the patient.
However, when films are read purely from experience, inaccurate readings, such as missed readings and misreadings, are likely to occur.
According to the scheme provided by the application, whether the medical image has symptoms can be identified, and the medical image is enhanced under the condition that the symptoms exist, so that the enhanced medical image is easier to read by doctors.
Fig. 1 is a flowchart of an image enhancement method of a medical image according to an exemplary embodiment of the present application.
As shown in fig. 1, the image enhancement method for a medical image provided in this embodiment includes:
step 101, extracting image features included in the medical image, and determining disease information according to the image features.
The method provided in this embodiment may be performed by an electronic device having computing capabilities. The electronic device may be a stand-alone device, such as a computer. It is also possible to have the computing unit integrated in other devices, for example in a medical image generation apparatus.
In particular, when the electronic device is a stand-alone device, the means for generating the medical image may be connected to the electronic device by wired or wireless means, and the means may send the generated medical image to the electronic device, so that the electronic device processes the received medical image.
If the electronic device is integrated in the medical image generating apparatus, the apparatus may send the medical image to the electronic device after the medical image is generated, so that the electronic device processes the received medical image.
Further, the method provided in the embodiment may be provided in the electronic device, so that the electronic device executes the method provided in the embodiment.
In practical application, the method provided by the embodiment can be further packaged in a preset model, the model can be arranged in electronic equipment, and the electronic equipment can carry out enhancement processing on the medical image through the model.
Wherein, after receiving the medical image, image features in the medical image may be extracted. In particular, different models may also be set for different types of medical images, for example, a model corresponding to a lung medical image, a model corresponding to a liver medical image, etc. may be set. The image may be processed using a corresponding model according to the category of the medical image.
Specifically, image features may be extracted by convolution layers. A plurality of convolution layers may be provided, each with its own convolution kernel. The output features of a convolution layer are obtained by convolving the feature data input to the layer with that layer's convolution kernel.
Further, the input data of the first convolution layer may be the multi-channel values of the medical image, for example the pixel values of the three channels of each pixel of the image. If the medical image is a black-and-white image, the input may instead be the gray value of each pixel of the image.
In practical application, disease information can be determined according to the extracted image features. The convolution layers in the model may be obtained by training.
Disease information may be determined based on the image features output by the last convolution layer. For example, a full-connection layer may be provided, and the extracted image features may be input into the full-connection layer so that it determines the disease information. A pooling layer can also be arranged between the last convolution layer and the full-connection layer; the feature data extracted by the convolution layer is input into the pooling layer, and the pooling layer then outputs the image features.
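As a concrete illustration of the convolution-plus-pooling pipeline described above, the following is a minimal PyTorch sketch (an assumption for illustration only; the patent does not specify layer counts, kernel sizes, channel widths or the pooling type). It accepts a single- or three-channel medical image and produces a k-channel feature map plus a pooled feature vector for the subsequent full-connection layer:

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Minimal convolutional backbone sketch: medical image in, k-channel feature map out."""
    def __init__(self, in_channels: int = 3, k: int = 64):
        super().__init__()
        # A few convolution layers, each with its own convolution kernel, as described above.
        self.convs = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 48, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(48, k, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Pooling layer between the last convolution layer and the full-connection layer.
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, image: torch.Tensor):
        feature_map = self.convs(image)             # shape: (batch, k, m, n)
        pooled = self.pool(feature_map).flatten(1)  # shape: (batch, k)
        return feature_map, pooled

# Example: a single-channel (grayscale) X-ray of size 256x256.
extractor = FeatureExtractor(in_channels=1, k=64)
feature_map, pooled = extractor(torch.randn(1, 1, 256, 256))
```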
The full-connection layer can classify the medical image according to the input image features and determine the disease category to which it belongs. The full-connection layer may be obtained by training.
Medical images may be collected in advance and annotated, for example, with the disease categories included in the images. The data with additional labeling information is then used to train the model so that the model can determine disease information.
In particular, the disease information may include a score that the medical image belongs to a certain disease category. Image features can be processed through the fully connected layer and the disease category to which the medical image belongs can be determined from these features. For example, the score corresponding to each disease category may be determined for the medical image, and the maximum score may be selected as the disease information.
Specifically, the score of the medical image for each disease category can be determined by a Sigmoid function.
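Continuing the hypothetical backbone above, a minimal sketch of such a classification head (the number of disease categories and the exact head layout are assumptions, not specified here): a full-connection layer maps the pooled features to one score per disease category, a Sigmoid maps each score into the range 0 to 1, and the maximum score can be taken as the disease information.

```python
import torch
import torch.nn as nn

class DiseaseHead(nn.Module):
    """Full-connection layer plus Sigmoid: one score in [0, 1] per disease category."""
    def __init__(self, k: int = 64, num_categories: int = 5):
        super().__init__()
        self.fc = nn.Linear(k, num_categories)

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.fc(pooled))       # shape: (batch, num_categories)

head = DiseaseHead(k=64, num_categories=5)
scores = head(torch.randn(1, 64))
target_score, target_category = scores.max(dim=1)   # highest-scoring disease category
```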
Step 102, determining whether a disorder exists in the medical image according to the disease information.
Further, the full-connection layer classifies the medical image according to the scores of the disease categories to which it may belong. Even if the medical image scores low for every disease category, a relatively high score is still selected from them as the disease information. In this case, the patient corresponding to the medical image may not actually have the disease associated with that higher score; the image may merely contain a few features corresponding to that disease.
Therefore, in the method provided in the present embodiment, a threshold value corresponding to each disease category may also be set in advance. In this case, the disease information may further include a threshold value corresponding to the disease category.
After determining the disease category to which the medical image may belong through the full connection layer, whether the medical image actually has the characteristics corresponding to the corresponding disease category may also be determined according to the score of the medical image belonging to the disease category and the threshold corresponding to the disease category.
The score of a medical image belonging to a certain disease category may be compared with a threshold value corresponding to the disease category, and if the score is greater than the threshold value, the medical image may be considered to actually belong to the disease category.
Specifically, the thresholds are set based on the test results on a test set, in such a way that the maximum AUC (Area Under Curve) is obtained on the test set.
Further, the thresholds may be set at the full-connection layer. The score of the medical image for each disease category is determined through the full-connection layer, and a target score, such as the maximum score, is then selected. The threshold of the disease category corresponding to the target score is obtained and compared with the target score, thereby determining whether a disorder corresponding to that disease category is present in the medical image.
In practical application, the medical image can be marked. For example, if it is determined that a condition exists in the medical image, the medical image may be marked as 1, otherwise it is marked as 0.
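The per-category thresholding and 0/1 marking described above might look like the following sketch (the threshold values and the category count are placeholder assumptions; the patent derives its thresholds from test-set results):

```python
import torch

# Hypothetical per-category thresholds, e.g. chosen on a test set.
thresholds = torch.tensor([0.6, 0.5, 0.7, 0.55, 0.65])

def mark_image(scores: torch.Tensor, thresholds: torch.Tensor):
    """Return (label, category index): label 1 if the target score exceeds its threshold, else 0."""
    target_score, target_category = scores.max(dim=1)   # target score = maximum score
    has_disorder = target_score > thresholds[target_category]
    label = has_disorder.long()                          # 1 = disorder present, 0 = no disorder
    return label, target_category

label, category = mark_image(torch.tensor([[0.2, 0.8, 0.1, 0.3, 0.4]]), thresholds)
# label is tensor([1]) here, since the score 0.8 for category 1 exceeds its threshold 0.5.
```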
If the medical image is marked 1, i.e. it is determined that the medical image belongs to an image in which a disease is present, step 103 may be performed.
And step 103, if yes, determining the attention weight according to the disease information and the image characteristics.
Specifically, if the medical image belongs to an image with a disease, for example, the image is marked as 1, disease information of the image can be transmitted back to the feature extraction layer, so that attention weight is determined together according to image features and disease information.
Further, the image features may be the image features extracted by the final convolution layer. The extracted image features may comprise a plurality of channels, for example the features of k channels. The feature of each channel can be regarded as an m×n table with one feature value in each cell, and a corresponding attention weight may be determined for each feature-value position.
In practical application, the features of different channels influence the attention weight to different degrees, so an activation weight can be determined for each channel according to that channel's features and the disease information. The activation weight of each channel is then combined with the features of each channel to determine the final attention weight.
The dimensions of the attention weights may be the same as the dimensions of the image features, e.g. all m×n.
And 104, processing the medical image according to the attention weight to obtain an enhanced medical image.
Specifically, if the image features are the same as the medical image in size, the image may be enhanced by directly multiplying the attention weight by the pixel values in the image.
If the size of the image feature is smaller than the size of the medical image, interpolation processing can be performed on the determined attention weight, and the attention weight identical to the size of the medical image is obtained. And then carrying out enhancement processing on the image according to the attention weight after the processing.
Specifically, the finally determined attention weight includes disease information and image feature information, for example, the more the image feature is consistent with the disease feature, the greater the determined corresponding attention weight is, and when the attention weight is used for processing the medical image, the enhancement effect on the image of the corresponding position is more obvious. Accordingly, the less the image features are consistent with the disease features, the less obvious the enhancement effect on the image at the corresponding location.
Further, if the medical image includes pixel values of three channels, the attention weight may be used to process the pixel values of each channel to obtain an enhanced image.
The electronic device can output the enhanced medical image, and a doctor can view the enhanced medical image when reading the film, and the doctor can obviously see the part in which the lesion occurs due to the enhanced information included in the image, so that the doctor can diagnose the disease for the patient more accurately.
The method provided by the present embodiment is used for enhancement processing of medical images, and is performed by a device provided with the method provided by the present embodiment, which is typically implemented in hardware and/or software.
The image enhancement method for medical images provided by this embodiment includes: extracting image features from the medical image, and determining disease information according to the image features; determining, according to the disease information, whether a disorder is present in the medical image; if so, determining an attention weight according to the disease information and the image features; and processing the medical image according to the attention weight to obtain an enhanced medical image. The method provided by this embodiment enhances the image according to the disease information and the image features, so that the features in the image that match the disease characteristics are noticeably strengthened, helping doctors read medical images.
Fig. 2 is a flow chart of an image enhancement method of a medical image according to another exemplary embodiment of the present application.
As shown in fig. 2, the image enhancement method for a medical image provided in this embodiment includes:
in step 201, image features included in a medical image are extracted.
Step 201 is similar to the specific principle and implementation of extracting the image features in step 101, and will not be described herein.
Step 202, inputting the image features into the full connection layer, and determining disease information through a Sigmoid function in the full connection layer.
The image features may be input into the full-connection layer through the pooling layer. The input image features are analyzed by the Sigmoid function in the full-connection layer, so that the medical image is classified and the disease category to which it may belong is determined.
The full-connection layer may output scores based on the input features, for example the scores of the medical image for the various disease categories.
Specifically, a Sigmoid activation function may be set at the full-connection layer. The determined scores are input into the Sigmoid function, which maps them into the range 0 to 1, and the medical image can then be classified according to the Sigmoid output. For example, the value output by the Sigmoid function may be regarded as a probability, from which the disease category of the medical image is determined, thereby obtaining the disease information corresponding to the medical image. For example, the disease category with the greatest probability may be taken as the disease category to which the medical image belongs.
Further, the disease information may include a disease category to which the medical image most likely belongs, and may further include a score corresponding to the disease category for the medical image.
Step 203, determining whether a disease corresponding to the disease category exists in the medical image according to the threshold value corresponding to the disease category and the score value corresponding to the medical image and the disease category to which the medical image belongs.
In practical application, the threshold value corresponding to each disease category can be set.
The medical image is classified by the Sigmoid function to determine the target disease category to which it belongs, and the threshold corresponding to the target disease category is obtained.
The score corresponding to the medical image and the target disease category can be compared with the acquired threshold value, if the score is larger than the threshold value, the medical image can be determined to truly belong to the target disease category, otherwise, the medical image is considered not to belong to the target disease category.
The threshold is set based on the test results of the test set in such a way that the maximum AUC results are obtained in the test set.
If so, step 204 may be performed.
Step 204, if yes, determining an activation weight corresponding to the feature of each channel according to the feature of each channel in the image feature, the score corresponding to the disease category to which the medical image belongs, and the threshold corresponding to the disease category.
If the medical image is determined to belong to a certain disease category, that is, if the patient is determined to have a certain disease, the medical image can be enhanced, so that the lesion position in the medical image is obviously enhanced, and the doctor can be helped to read the medical image.
If it is determined that the medical image belongs to the target disease category, the score corresponding to the medical image and the threshold corresponding to the target disease category may be returned to the upper layer, for example, the layer extracting the image features, so that the activation weight corresponding to the features is determined by combining the image features, the score and the threshold.
Specifically, the extracted features include features of a plurality of channels. For example, features of k channels may be extracted, and the dimensions of features of each channel may be m×n, i.e., the dimensions of extracted image features may be k×m×n.
Further, the degree of influence of the features of different channels on the final attention weight is different, so that the corresponding activation weights can be determined for the features of different channels. For example, if features of k channels are extracted in total, then corresponding k activation weights can be determined.
In practical application, the activation weight w_k can be determined by the following formula:
where H denotes the length and width of the last-layer feature map, i and j respectively index a pixel of a channel's feature map, F denotes the extracted image features, σ denotes the score, obtained by the Sigmoid function, of the medical image for the disease category to which it belongs, and α denotes the threshold corresponding to that disease category.
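The formula itself is rendered as an image in the original publication and is not reproduced in this text. Based only on the variable definitions above, one plausible reconstruction (an assumption, not the patent's verbatim equation) averages each channel's feature map over its H×H spatial positions and scales the result by the margin between the score σ and the threshold α:

$$ w_k = \frac{\sigma - \alpha}{H \times H} \sum_{i=1}^{H} \sum_{j=1}^{H} F_k(i, j) $$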
In step 205, attention weights are determined according to the characteristics of each channel and their corresponding activation weights.
Specifically, the influence degree of the features of different channels on the attention weight is different, so that the attention weight can be determined by combining the features of different channels and the corresponding activation weights.
Further, a weighted result corresponding to each feature value may be determined according to the feature value in the feature of each channel and the activation weight corresponding to the channel feature. Since the feature values are information related to the disease extracted from the medical image, weighting these features can enlarge the disease features therein.
Fig. 3 is a schematic diagram showing characteristics of a plurality of channels according to an exemplary embodiment of the present application.
As shown on the left side of Fig. 3, assume that the extracted image features include the features of three channels in total. The activation weight corresponding to the feature of the first channel is w1, that of the second channel is w2, and that of the third channel is w3.
The feature of each channel may include a plurality of feature values, such as dmn, pmn, qmn in the figure; for example, if the feature dimension is 3*3, the feature of each channel includes 9 feature values. Each feature value may be multiplied by the corresponding activation weight to obtain a weighted result for that feature value, as shown on the right side of Fig. 3.
The weighted results of the feature values corresponding to different channels may be added to obtain the attention weight corresponding to each feature value. Specifically, weighted results of feature values of the same position of features of different channels can be added to obtain the attention weight corresponding to each feature value.
The attention weight can be determined specifically by the following formula:
L_w = ReLU( ∑_k w_k · F_k )
where k indexes the feature channels (for example, the first feature channel, the second feature channel), F_k denotes the feature of the k-th channel, and w_k is its activation weight.
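A minimal sketch of this weighted-sum-and-ReLU step (the feature shapes and weight values are illustrative assumptions; feature_map plays the role of F and activation_weights the role of w_k):

```python
import torch
import torch.nn.functional as F

def attention_map(feature_map: torch.Tensor, activation_weights: torch.Tensor) -> torch.Tensor:
    """
    feature_map: (k, m, n) features of k channels.
    activation_weights: (k,) one activation weight per channel.
    Returns an (m, n) attention weight map: per-position weighted sum over channels, then ReLU.
    """
    weighted = activation_weights.view(-1, 1, 1) * feature_map  # weight each channel's feature values
    return F.relu(weighted.sum(dim=0))                          # add weighted results at the same position

# Example with 3 channels of 3x3 features, matching the Fig. 3 / Fig. 4 illustration.
features = torch.randn(3, 3, 3)
w = torch.tensor([0.2, 0.5, 0.3])
L_w = attention_map(features, w)   # shape: (3, 3)
```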
Fig. 4 is a schematic diagram of attention weighting according to an exemplary embodiment of the present application.
As shown in fig. 4, on the basis of the embodiment shown in fig. 3, the weighted results of the feature values whose positions are identical may be added to obtain the attention weight corresponding to each feature value.
In the method provided by this embodiment, the attention weight is not a single value; rather, it contains a weight value corresponding to each feature value in the features, and the enhanced medical image can be obtained by weighting the original medical image with the determined attention weights.
Step 206, determining the corresponding relation between the attention weight and each pixel point in the medical image, and weighting the pixel value of the corresponding pixel point by using the attention weight to obtain the enhanced medical image.
If the dimension of the attention weight is consistent with the dimension of the medical image, for example, m×n, the pixel value may be directly multiplied by the weight value at the same position, so as to enhance the medical image. In this case, the weight values having the same positions have a correspondence relationship with the pixel points.
Specifically, if the dimension of the attention weights does not match the size of the medical image, for example the attention weights are of dimension m×n while the medical image is of size m1×n1, the attention weights may be interpolated to obtain attention weights of dimension m1×n1. In this case, the medical image is weighted according to the processed attention weights: as above, each pixel value is multiplied by the weight value at the same position, thereby performing enhancement processing on the medical image.
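A minimal sketch of this resize-and-weight step, assuming bilinear interpolation (the patent does not name a specific interpolation method) and a single-channel image:

```python
import torch
import torch.nn.functional as F

def enhance(image: torch.Tensor, attention: torch.Tensor) -> torch.Tensor:
    """
    image: (m1, n1) pixel values of the medical image.
    attention: (m, n) attention weight map; interpolated to (m1, n1) if the sizes differ.
    Each pixel value is weighted by the attention weight at the same position.
    """
    if attention.shape != image.shape:
        attention = F.interpolate(
            attention[None, None], size=tuple(image.shape), mode="bilinear", align_corners=False
        )[0, 0]
    return image * attention

enhanced = enhance(torch.rand(256, 256), torch.rand(8, 8))
```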
Fig. 5 is a schematic view of an enhanced medical image shown in an exemplary embodiment of the application.
As shown in Fig. 5, after the original image is enhanced with the attention weights, the boxed region is significantly enhanced.
In the method provided by this embodiment, the activation weights carry the disease features of the medical image, and these features are fused into the medical image by enhancing the original image, so that the disease features in the medical image are displayed more prominently, helping a doctor read the medical image.
Fig. 6 is a block diagram of an image enhancement apparatus of a medical image according to an exemplary embodiment of the present application.
As shown in fig. 6, the image enhancement apparatus of the medical image shown in the present embodiment includes:
an extraction module 61 for extracting image features included in the medical image and determining disease information according to the image features;
a determining module 62, configured to determine whether a disorder exists in the medical image according to the disease information;
a weight determining module 63, configured to determine an attention weight according to the disease information and the image feature if a disease exists;
and the enhancement module 64 is configured to process the medical image according to the attention weight to obtain an enhanced medical image.
The image enhancement device for medical images provided by this embodiment includes: an extraction module for extracting image features from the medical image and determining disease information according to the image features; a judging module for determining whether a disorder is present in the medical image according to the disease information; a weight determining module for determining the attention weight according to the disease information and the image features if a disorder is present; and an enhancement module for processing the medical image according to the attention weight to obtain an enhanced medical image. The device provided by this embodiment enhances the image according to the disease information and the image features, so that the features in the image that match the disease characteristics are noticeably strengthened, helping doctors read medical images.
The specific principle and implementation of the image enhancement device for medical images provided in this embodiment are similar to those of the embodiment shown in fig. 1, and will not be described here again.
Fig. 7 is a block diagram of an image enhancement apparatus of a medical image according to another exemplary embodiment of the present application.
As shown in fig. 7, in the image enhancement device of the medical image shown in this embodiment, the extraction module 61 is specifically configured to:
and inputting the image characteristics into a full-connection layer, and determining the disease information through a Sigmoid function in the full-connection layer.
Optionally, the extracting module 61 includes:
a score determining unit 611, configured to process the image feature through a Sigmoid function in the fully connected layer, and determine a score corresponding to the medical image and a disease category;
and the classification unit 612 is configured to classify the medical image according to the Sigmoid function and the score corresponding to the medical image and the disease category, and determine the disease category to which the medical image belongs.
Optionally, the judging module 62 is specifically configured to:
and determining whether a disease corresponding to the disease category exists in the medical image according to the threshold value corresponding to the disease category and the score value corresponding to the medical image and the disease category to which the medical image belongs.
Optionally, the weight determining module 63 includes:
an activation weight determining unit 631 for determining an activation weight corresponding to the feature of each channel according to the feature of each channel in the image feature, the score corresponding to the disease category to which the medical image belongs, and the threshold corresponding to the disease category;
an attention weight determining unit 632 is configured to determine an attention weight according to the feature of each channel and the corresponding activation weight.
Optionally, the attention weight determining unit 632 is specifically configured to:
determining a weighted result corresponding to each characteristic value according to the characteristic value in the characteristic of each channel and the activation weight corresponding to the characteristic of the channel;
and adding weighted results of the characteristic values corresponding to different channels to obtain the attention weight corresponding to each characteristic value.
Optionally, the enhancing module 64 is specifically configured to:
and determining the corresponding relation between the attention weight and each pixel point in the medical image, and weighting the pixel value of the corresponding pixel point by using the attention weight to obtain the enhanced medical image.
The specific principle and implementation of the image enhancement device for medical images provided in this embodiment are similar to those of the embodiment shown in fig. 2, and will not be described here again.
According to an embodiment of the present application, the present application also provides an electronic device and a readable storage medium.
As shown in fig. 8, there is a block diagram of an electronic device of an image enhancement method of a medical image according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the applications described and/or claimed herein.
As shown in fig. 8, the electronic device includes: one or more processors 801, memory 802, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executing within the electronic device, including instructions stored in or on memory to display graphical information of the GUI on an external input/output device, such as a display device coupled to the interface. In other embodiments, multiple processors and/or multiple buses may be used, if desired, along with multiple memories. Also, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 801 is illustrated in fig. 8.
Memory 802 is a non-transitory computer readable storage medium provided by the present application. Wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to perform the image enhancement method of medical images provided by the present application. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to execute the image enhancement method of the medical image provided by the present application.
The memory 802 is used as a non-transitory computer readable storage medium for storing a non-transitory software program, a non-transitory computer executable program, and modules, such as program instructions/modules (e.g., the extraction module 61, the judgment module 62, the weight determination module 63, and the enhancement module 64 shown in fig. 6) corresponding to an image enhancement method of a medical image in an embodiment of the present application. The processor 801 executes various functional applications of the server and data processing, i.e., implements the image enhancement method of the medical image in the above-described method embodiment, by running non-transitory software programs, instructions, and modules stored in the memory 802.
Memory 802 may include a program storage area and a data storage area; the program storage area may store an operating system and at least one application program required for functionality, and the data storage area may store data created from the use of the image enhancement electronic device of the medical image, and the like. In addition, memory 802 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 802 may optionally include memory remotely located with respect to processor 801, which may be connected to the image enhancement electronic device of the medical image via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the image enhancement method of a medical image may further include: an input device 803 and an output device 804. The processor 801, memory 802, input devices 803, and output devices 804 may be connected by a bus or other means, for example in fig. 8.
The input device 803 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the image enhancement electronic device of the medical image, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointer stick, one or more mouse buttons, a track ball, a joystick, etc. input devices. The output device 804 may include a display apparatus, auxiliary lighting devices (e.g., LEDs), and haptic feedback devices (e.g., vibration motors), among others. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computing programs (also referred to as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present application may be performed in parallel, sequentially, or in a different order, provided that the desired results of the disclosed embodiments are achieved, and are not limited herein.
The above embodiments do not limit the scope of the present application. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present application should be included in the scope of the present application.

Claims (14)

1. A method of image enhancement of a medical image, comprising:
extracting image features included in the medical image, and determining disease information according to the image features;
determining whether a condition exists in the medical image according to the disease information;
if yes, determining attention weight according to the disease information and the image characteristics, wherein the attention weight comprises weight values corresponding to each characteristic value in the image characteristics, and the weight values corresponding to each characteristic value are obtained by adding weighted results of characteristic values of the same position of the characteristics of different channels;
and determining the corresponding relation between the attention weight and each pixel point in the medical image, and weighting the pixel value of the corresponding pixel point by using the attention weight to obtain the enhanced medical image.
2. The method of claim 1, wherein said determining disease information from said image features comprises:
and inputting the image characteristics into a full-connection layer, and determining the disease information through a Sigmoid function in the full-connection layer.
3. The method of claim 2, wherein the determining the disease information by a Sigmoid function in the fully connected layer comprises:
processing the image features through a Sigmoid function in the full-connection layer, and determining scores corresponding to the medical images and disease categories;
and classifying the medical image according to the Sigmoid function and the score corresponding to the medical image and the disease category, and determining the disease category to which the medical image belongs.
4. A method according to claim 3, wherein said determining whether a condition is present in said medical image based on said disease information comprises:
and determining whether a disease corresponding to the disease category exists in the medical image according to the threshold value corresponding to the disease category and the score value corresponding to the medical image and the disease category to which the medical image belongs.
5. The method of claim 4, wherein said determining an attention weight from said disease information, said image features, comprises:
Determining an activation weight corresponding to the characteristics of each channel according to the characteristics of each channel in the image characteristics, the score corresponding to the disease category to which the medical image belongs and the threshold corresponding to the disease category;
the attention weight is determined according to the characteristics of each channel and the corresponding activation weight.
6. The method of claim 5, wherein determining the attention weight based on the characteristics of each channel and its corresponding activation weight comprises:
determining a weighted result corresponding to each characteristic value according to the characteristic value in the characteristic of each channel and the activation weight corresponding to the characteristic of the channel;
and adding weighted results of the characteristic values corresponding to different channels to obtain the attention weight corresponding to each characteristic value.
7. An image enhancement apparatus for medical images, comprising:
an extraction module, configured to extract image features included in the medical image and determine disease information according to the image features;
a judging module, configured to determine whether a disease exists in the medical image according to the disease information;
a weight determination module, configured to determine, if a disease exists, an attention weight according to the disease information and the image features, wherein the attention weight comprises a weight value corresponding to each feature value in the image features, and the weight value corresponding to each feature value is obtained by summing the weighted results of the feature values at the same position in the features of different channels;
and an enhancement module, configured to process the medical image according to the attention weight to obtain an enhanced medical image;
wherein the enhancement module is specifically configured to:
determine a correspondence between the attention weight and each pixel in the medical image, and weight the pixel value of the corresponding pixel with the attention weight to obtain the enhanced medical image.
8. The apparatus of claim 7, wherein the extraction module is specifically configured to:
input the image features into a fully connected layer, and determine the disease information through a Sigmoid function in the fully connected layer.
9. The apparatus of claim 8, wherein the extraction module comprises:
a score determination unit, configured to process the image features through the Sigmoid function in the fully connected layer to determine a score corresponding to the medical image and each disease category;
and a classification unit, configured to classify the medical image according to the Sigmoid function and the scores corresponding to the medical image and the disease categories, to determine the disease category to which the medical image belongs.
10. The apparatus of claim 9, wherein the judging module is specifically configured to:
determine whether a disease corresponding to the disease category to which the medical image belongs exists in the medical image, according to the threshold corresponding to that disease category and the score corresponding to the medical image and that disease category.
11. The apparatus of claim 10, wherein the weight determination module comprises:
an activation weight determination unit, configured to determine an activation weight corresponding to the features of each channel according to the features of each channel in the image features, the score corresponding to the disease category to which the medical image belongs, and the threshold corresponding to that disease category;
and an attention weight determination unit, configured to determine the attention weight according to the features of each channel and the corresponding activation weights.
12. The apparatus of claim 11, wherein the attention weight determination unit is specifically configured to:
determine a weighted result for each feature value according to the feature values in the features of each channel and the activation weight corresponding to that channel's features;
and sum the weighted results of the feature values corresponding to different channels to obtain the attention weight corresponding to each feature value.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
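Apparatus claims 7-12 package the same steps into four modules (extraction, judging, weight determination, enhancement). Purely as a hypothetical illustration of that split, the class below mirrors the module boundaries; the random per-channel projection stands in for the convolutional backbone, which the claims do not specify, and every name, shape, and threshold value is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

class ImageEnhancer:
    """Hypothetical module layout mirroring apparatus claims 7-12; the random
    per-channel projection is a stand-in for the (unspecified) backbone."""

    def __init__(self, channels=16, categories=4):
        self.proj = rng.normal(size=(channels,))                   # stand-in feature extractor
        self.fc_w = rng.normal(size=(categories, channels)) * 0.1  # fully connected layer
        self.fc_b = np.zeros(categories)
        self.thresholds = np.full(categories, 0.5)                 # per-category thresholds

    # extraction module (claim 7): image features plus disease information
    def extract(self, image):
        h, w = image.shape                                         # h, w assumed divisible by 8
        patches = image.reshape(h // 8, 8, w // 8, 8).mean(axis=(1, 3))
        feats = self.proj[:, None, None] * patches[None]           # shape (C, h/8, w/8)
        scores = 1.0 / (1.0 + np.exp(-(self.fc_w @ feats.mean(axis=(1, 2)) + self.fc_b)))
        return feats, scores

    # judging module (claim 10): threshold test for the top-scoring category
    def has_disease(self, scores):
        cat = int(np.argmax(scores))
        return cat, bool(scores[cat] > self.thresholds[cat])

    # weight determination module (claims 11-12): activation weights, then a
    # per-position sum across channels
    def attention(self, feats, scores, cat):
        channel_w = feats.mean(axis=(1, 2)) * (scores[cat] - self.thresholds[cat])
        attn = np.tensordot(channel_w, feats, axes=([0], [0]))
        return (attn - attn.min()) / (attn.max() - attn.min() + 1e-8)

    # enhancement module (claim 7, last limitation): weight the pixel values
    def enhance(self, image, attn):
        h, w = image.shape
        full = attn[np.ix_(np.arange(h) * attn.shape[0] // h,
                           np.arange(w) * attn.shape[1] // w)]
        return image * (1.0 + full)

# usage on a synthetic 64x64 "medical image"
image = rng.random((64, 64))
model = ImageEnhancer()
feats, scores = model.extract(image)
cat, present = model.has_disease(scores)
result = model.enhance(image, model.attention(feats, scores, cat)) if present else image
```

Note that `extract` returns both the features and the per-category scores, since claim 7's extraction module is responsible for both the image features and the disease information.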
CN201911258287.7A 2019-12-10 2019-12-10 Image enhancement method, device, electronic equipment and readable storage medium Active CN111028173B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911258287.7A CN111028173B (en) 2019-12-10 2019-12-10 Image enhancement method, device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911258287.7A CN111028173B (en) 2019-12-10 2019-12-10 Image enhancement method, device, electronic equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111028173A CN111028173A (en) 2020-04-17
CN111028173B (en) 2023-11-17

Family

ID=70205242

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911258287.7A Active CN111028173B (en) 2019-12-10 2019-12-10 Image enhancement method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111028173B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106204465A (en) * 2015-05-27 2016-12-07 美国西门子医疗解决公司 Knowledge-based ultrasound image enhancement
CN107644419A (en) * 2017-09-30 2018-01-30 百度在线网络技术(北京)有限公司 Method and apparatus for analyzing medical image
WO2019062846A1 (en) * 2017-09-28 2019-04-04 北京西格码列顿信息技术有限公司 Medical image aided diagnosis method and system combining image recognition and report editing
CN109685819A (en) * 2018-12-11 2019-04-26 厦门大学 Three-dimensional medical image segmentation method based on feature enhancement
CN110084794A (en) * 2019-04-22 2019-08-02 华南理工大学 Skin cancer image recognition method based on attention convolutional neural network
CN110399907A (en) * 2019-07-03 2019-11-01 杭州深睿博联科技有限公司 Chest disease detection method and device based on induced attention, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130268203A1 (en) * 2012-04-09 2013-10-10 Vincent Thekkethala Pyloth System and method for disease diagnosis through iterative discovery of symptoms using matrix based correlation engine

Also Published As

Publication number Publication date
CN111028173A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
US11861829B2 (en) Deep learning based medical image detection method and related device
US11288550B2 (en) Data processing apparatus and method, recognition apparatus, learning data storage apparatus, machine learning apparatus, and program
CN110428475B (en) Medical image classification method, model training method and server
CN109191451B (en) Abnormality detection method, apparatus, device, and medium
US11900594B2 (en) Methods and systems for displaying a region of interest of a medical image
JP2019530488A (en) Computer-aided diagnostic system for medical images using deep convolutional neural networks
CN109074869B (en) Medical diagnosis support device, information processing method, and medical diagnosis support system
US10290101B1 (en) Heat map based medical image diagnostic mechanism
WO2021189848A1 (en) Model training method and apparatus, cup-to-disc ratio determination method and apparatus, and device and storage medium
US11574717B2 (en) Medical document creation support apparatus, medical document creation support method, and medical document creation support program
US10748282B2 (en) Image processing system, apparatus, method and storage medium
US20210125724A1 (en) Medical image processing apparatus, medical image processing method, machine learning system, and program
EP2054829A2 (en) Anatomy-related image-context-dependent applications for efficient diagnosis
EP3852011A2 (en) Method and apparatus for determining target anchor, device and storage medium
EP3905112A1 (en) Method and apparatus for recognizing text content and electronic device
US20210407637A1 (en) Method to display lesion readings result
CN113939844A (en) Computer-aided diagnosis system for detecting tissue lesions on microscopic images based on multi-resolution feature fusion
US20210145389A1 (en) Standardizing breast density assessments
CN111951214B (en) Method and device for dividing readable area in image, electronic equipment and storage medium
CN111028173B (en) Image enhancement method, device, electronic equipment and readable storage medium
CN111833239B (en) Image translation method and device and image translation model training method and device
CN112488126A (en) Feature map processing method, device, equipment and storage medium
JP7151464B2 (en) Lung image processing program, lung image processing method and lung image processing system
JP7350595B2 (en) Image processing device, medical image diagnostic device, and image processing program
CN111832609B (en) Training method and device for image processing model, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant