CN111507912A - Mammary gland image enhancement method and device and computer readable storage medium - Google Patents

Mammary gland image enhancement method and device and computer readable storage medium

Info

Publication number
CN111507912A
CN111507912A (application number CN202010271300.9A)
Authority
CN
China
Prior art keywords
image
contrast
enhancement
median
mammary gland
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010271300.9A
Other languages
Chinese (zh)
Other versions
CN111507912B (en)
Inventor
胡杰
李佳
叶超
蓝重洲
成富平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Angell Technology Co ltd
Original Assignee
Shenzhen Angell Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Angell Technology Co ltd filed Critical Shenzhen Angell Technology Co ltd
Priority to CN202010271300.9A
Publication of CN111507912A
Application granted
Publication of CN111507912B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/40 - Image enhancement or restoration using histogram techniques
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/70 - Denoising; Smoothing
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20016 - Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Processing (AREA)

Abstract

The application provides a mammary gland image enhancement method, which is used for carrying out normalization processing on pixel values of an original mammary gland image to obtain a first image; transforming and stretching the first image by adopting Gaussian-logarithm transformation to obtain a second image; calculating a contrast normalization factor according to the second image and performing Laplace image decomposition to obtain a third image after contrast normalization; and performing enhancement processing and Laplace image reconstruction on the third image to obtain an enhanced fourth image, and performing gradient processing on the fourth image by using a gradient curve suitable for display to obtain a target mammary gland image. The method can effectively solve the problem of poor visibility of subcutaneous tissues and skin lines caused by image overexposure, effectively solve the problem of inconsistent brightness and contrast after image enhancement, effectively improve the layering sense of different tissues in the image, and is convenient to apply to clinical diagnosis.

Description

Mammary gland image enhancement method and device and computer readable storage medium
Technical Field
The embodiment of the application relates to the technical field of image enhancement, in particular to a method and a device for enhancing a mammary gland image and a computer-readable storage medium.
Background
Mammary gland (breast) images have high spatial resolution and a wide dynamic range. Image enhancement can extract rich clinical diagnostic information from them and provide a good basis for diagnosing breast diseases, in particular for discovering early lesions. However, lesion details in the original breast image are concentrated in a narrow gray-scale range while the useful gray scale is widely distributed, so the human eye can hardly distinguish the diagnostic information. Therefore, the original breast image requires detail enhancement processing to obtain a clinically diagnosable image.
Unsharp masking is one of the earliest methods used for image enhancement. In this method, an original image A is filtered with a low-pass filter to obtain a low-resolution image B, and the original image A and the low-resolution image B are then subtracted to obtain a high-frequency image C containing the high-frequency information. The high-frequency image C is weighted with the original image A or the low-resolution image B by a weighting factor to obtain the enhanced image, where the weighting factor determines the enhancement degree and the filter kernel size k of the low-pass filter determines the relative scale of information in the low-resolution image B and the high-frequency image C. The enhancement effect of this method depends strongly on the choice of the filter kernel size k: all structures smaller than k (high frequencies) in the original image A are enhanced, while structures larger than k (low frequencies) are compressed. If the filter kernel size is set too small, some details related to the diagnosis are suppressed.
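As an illustration of this mechanism only (not code from the patent), a minimal sketch of unsharp masking is shown below; the kernel size k and weighting factor w are assumed example parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_mask(image_a: np.ndarray, k: int = 31, w: float = 1.5) -> np.ndarray:
    """Classic unsharp masking: A + w * (A - lowpass(A))."""
    image_a = image_a.astype(np.float64)
    low_b = uniform_filter(image_a, size=k)   # low-resolution image B
    high_c = image_a - low_b                  # high-frequency image C
    return image_a + w * high_c               # enhanced image
```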
The MUSICA multi-frequency enhancement method first performs Log stretching on the image to increase the dynamic range occupied by low-gray areas. The stretched image is then decomposed into multiple layers of different frequencies and scales, generally 7 or 8 layers; the number of decomposition layers depends on the smallest scale of the image. Of the decomposed layers, the layer with the lowest frequency (smallest scale) is called the Gaussian layer, and the remaining layers are called multi-scale layers. The multi-scale layers contain both positive and negative pixel values, which represent the contrast of structures at a given frequency or scale; enhancement of each frequency band can therefore be achieved by modifying the pixel values of the multi-scale layers. The multi-scale layers are processed by a nonlinear transformation that amplifies small pixel values (fine details) while compressing large pixel values, and the enhanced image is obtained from the modified layers.
The MUSICA multi-frequency enhancement method effectively improves detail enhancement without introducing artifacts. However, MUSICA tends to amplify noise in the image during enhancement. Inspired by MUSICA and unsharp masking, the multi-frequency enhancement method UNIQUE was proposed to suppress noise. UNIQUE decomposes an image into multiple layers of different frequencies using unsharp masking with two filter kernels of different sizes. Each layer is first enhanced by a nonlinear function according to the magnitude of its pixel values: the smaller the pixel value, the greater the enhancement, and conversely a pixel value is no longer enhanced once it exceeds a certain critical value. Meanwhile, for each layer, features such as density, activity and frequency of the different tissues are used for selective enhancement and suppression, and the two results are then weighted to obtain the enhanced image. Because UNIQUE takes the gray-level and spatial distribution of noise into account during enhancement, it effectively prevents noise from being amplified.
In summary, unsharp masking is simple and easy to implement, but its enhancement of small nodules is poor and it easily produces artifacts. The MUSICA multi-frequency method can effectively enhance detail information at different scales in an image; however, after Log transformation of the original image, noise in the low-gray area is stretched strongly, and in the multi-scale enhancement after image decomposition the noise is not effectively suppressed, so noise in the enhanced image is obvious.
However, the above method still does not consider the following problems:
1) Traditional Log stretching compresses high-grayscale regions such as subcutaneous tissue while stretching low-grayscale regions, which makes fine structures such as subcutaneous tissue harder to recover in multi-frequency enhancement. Furthermore, when the image dose is too large, over-compression of high-grayscale regions leads to poor skin-line visibility.
2) Differences in tissue and radiation dose for different patients and irregular breast compression can lead to different tissue brightness and contrast in the original image, resulting in non-uniform brightness and contrast after image enhancement.
3) The enhanced image has poor layering, and the contrast and brightness between tissues are not differentiated, so it is difficult to apply to clinical diagnosis.
Disclosure of Invention
The embodiment of the application provides a mammary gland image enhancement method and device, which can solve the problems in the prior art of poor visibility of subcutaneous tissues and skin lines caused by image overexposure and uneven compression, inconsistent brightness and contrast after image enhancement, and poor image layering.
In a first aspect, an embodiment of the present application provides a breast image enhancement method, including:
normalizing the pixel value of the original mammary gland image to obtain a first image;
transforming and stretching the first image by adopting Gaussian-logarithm transformation to obtain a second image;
calculating a contrast normalization factor according to the second image and performing Laplace image decomposition to obtain a third image after contrast normalization;
performing enhancement processing and Laplace image reconstruction on the third image to obtain an enhanced fourth image;
and performing gradient processing on the fourth image by using a gradient curve suitable for display to obtain a target mammary gland image.
Further, the performing transform stretching on the first image by using gaussian-log transform to obtain a second image includes:
performing transform stretching on the first image by adopting the following formula:
(Gaussian-logarithm transform formula, given as an image in the original patent.)
wherein y represents the pixel value in the second image, x represents the pixel value in the first image, the parameter c1 is used for controlling the stretching contrast of the high gray scale area, and the values are distributed in the range of (0,3000), and the parameter c2 is used for controlling the stretching contrast of the low gray scale area, and the values are distributed in the range of (0, 1).
Further, the calculating a contrast normalization factor according to the second image and performing laplacian image decomposition to obtain a third image after contrast normalization includes:
calculating the contrast of the second image by using the region of interest in the second image, and obtaining a contrast normalization factor according to the target contrast and the contrast of the second image;
performing Laplacian image decomposition on the second image by adopting a Laplacian pyramid to obtain a decomposed multi-scale layer image;
and multiplying the contrast normalization factor and the multi-scale layer image to obtain the third image.
Further, calculating a contrast of the second image by using the region of interest in the second image, and obtaining a contrast normalization factor according to the target contrast and the contrast of the second image, including:
jointly detecting the second image by utilizing a maximum variance threshold segmentation method and histogram analysis, and extracting an interested region from the second image;
calculating a median value of the histogram of the region of interest, and taking a gray value corresponding to the median value as the contrast of the second image;
and calculating a quotient value of the target contrast divided by the contrast of the second image, and taking the quotient value as the contrast normalization factor.
Further, the median of the histogram of the region of interest is calculated as follows:
(Histogram-median formula, given as an image in the original patent.)
where Median represents the median of the histogram of the region of interest, F is the cumulative frequency of the bins below and closest to the median, l is the bin position corresponding to F, f is the next bin after l, and w is the width of the bin closest to the median.
Further, the enhancing the third image and reconstructing the laplacian image to obtain an enhanced fourth image includes:
performing detail enhancement processing, edge enhancement processing and dynamic range compression processing on the third image to obtain an enhanced multi-scale layer image;
and performing Laplace image reconstruction on the enhanced multi-scale layer image to obtain the fourth image.
Further, the performing a gradual change process on the fourth image by using a gradual change curve suitable for display to obtain a target breast image includes:
acquiring a minimum value t0 of the diagnosis related signal in the fourth image, a maximum value t1 of the diagnosis related signal in the fourth image and a median ta of a statistical histogram in the region of interest of the fourth image;
constructing an S-shaped gradual curve comprising foot branches, a trunk and shoulder branches by utilizing the minimum value t0, the maximum value t1 and the median value ta;
and performing gradient processing on the fourth image by using the S-shaped gradient curve to obtain a target mammary gland image.
Further, the S-shaped gradual curve is as follows:
Foot branch: (formula given as an image in the original patent)
Trunk: y(t) = y0 + hf + gb · (t - tf), t ∈ [tf, ts]
Shoulder branch: (formula given as an image in the original patent)
wherein,
wf=((ta-t0)·gb-ya+y0)/(gb-gf)
hf=wf·gf
ws=((t1-ta)·gb-y1+ya)/(gb-gs)
hs=ws·gs
wherein y0, ya, y1, g0, gf, gb, gs, g1 are predefined constant values, t0 is the minimum value of the diagnostically relevant signal in the fourth image, t1 is the maximum value of the diagnostically relevant signal in the fourth image, ta is the median of the statistical histogram in the region of interest of the fourth image, wf represents the width of the foot branch, hf represents the height of the foot branch, ws represents the width of the shoulder branch, and hs represents the height of the shoulder branch.
In a second aspect, an embodiment of the present application provides a breast image enhancement device, including:
the normalization module is used for performing normalization processing on the pixel value of the original mammary gland image to obtain a first image;
the transformation stretching module is used for carrying out transformation stretching on the first image by adopting Gaussian-logarithm transformation to obtain a second image;
the contrast module is used for calculating a contrast normalization factor according to the second image and performing Laplace image decomposition to obtain a third image after the contrast normalization;
the enhancement reconstruction module is used for carrying out enhancement processing and Laplace image reconstruction on the third image to obtain an enhanced fourth image;
and the gradual change module is used for performing gradual change processing on the fourth image by using a gradual change curve suitable for display to obtain a target mammary gland image.
In a third aspect, the present application provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the breast image enhancement method provided in the first aspect.
The method for enhancing the mammary gland image comprises the steps of carrying out normalization processing on pixel values of an original mammary gland image to obtain a first image; transforming and stretching the first image by adopting Gaussian-logarithm transformation to obtain a second image; calculating a contrast normalization factor according to the second image and performing Laplace image decomposition to obtain a third image after contrast normalization; and performing enhancement processing and Laplace image reconstruction on the third image to obtain an enhanced fourth image, and performing gradient processing on the fourth image by using a gradient curve suitable for display to obtain a target mammary gland image. The problem of poor visibility of subcutaneous tissues and skin lines caused by image overexposure can be effectively solved by performing transformation stretching by using Gaussian-logarithmic transformation, the problem of inconsistent brightness and contrast after image enhancement can be effectively solved by performing contrast normalization on the image by using a contrast normalization factor, and in addition, the layering sense among different tissues in the image can be effectively improved by using a mode of performing gradient processing by using a gradient curve suitable for display, so that the method is convenient to apply to clinical diagnosis.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to these drawings without inventive exercise.
FIG. 1 is a schematic flow chart of a breast image enhancement method according to an embodiment of the present application;
FIG. 2 is a graph of the G-L transformation in an embodiment of the present application;
FIG. 3 is a schematic diagram of a decomposition process of a Laplace pyramid image according to an embodiment of the present application;
FIG. 4 is a graph of three different non-linearity curves of the prior art;
FIG. 5 is a schematic diagram of a compound curve in an embodiment of the present application;
fig. 6 is a schematic diagram of a laplacian image reconstruction process in an embodiment of the present application;
FIG. 7 is a schematic view of an S-shaped transition curve in an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a breast image enhancement device according to an embodiment of the present application;
fig. 9 is a schematic diagram of a structure of an electronic device in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Please refer to fig. 1, which is a schematic flow chart of a breast image enhancement method according to an embodiment of the present application, including:
step 101, carrying out normalization processing on pixel values of an original mammary gland image to obtain a first image;
in an embodiment of the present application, the breast enhancement method is implemented by a breast enhancement device, which is a program module and stored in a computer readable storage medium, and a processor can call and operate the breast enhancement device from the readable storage medium to implement the breast enhancement method in the present application.
The method first acquires an original breast image and normalizes all of its pixel values to obtain the first image, so that the various parameters can be evaluated and calculated accurately in the subsequent steps. All pixel values in the original breast image are converted into the range [0, 1], and all subsequent calculations use floating-point data.
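As a minimal illustration of this step (not code from the patent), the normalization to [0, 1] with floating-point data can be sketched as follows; a min-max mapping is assumed here, since the patent does not spell out the exact normalization.

```python
import numpy as np

def normalize(raw: np.ndarray) -> np.ndarray:
    """Map raw detector values linearly into [0, 1] as float64 (first image)."""
    raw = raw.astype(np.float64)
    lo, hi = raw.min(), raw.max()
    return (raw - lo) / (hi - lo + 1e-12)  # epsilon guards against a constant image
```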
102, performing transformation and stretching on the first image by adopting Gaussian-logarithm transformation to obtain a second image;
After the first image with normalized pixel values is obtained, it is transformed and stretched using a Gaussian-logarithm (G-L) transformation to obtain the second image. The G-L transformation simultaneously stretches the low-gray area (gland) and the high-gray area (skin) and compresses only the middle gray levels. Because the mammary gland lies in the low-gray dynamic range and the skin line lies in the high-gray dynamic range, the G-L transformation improves the contrast of the gland and the fat layer while keeping the skin line clearly visible, giving better visibility and eliminating the influence of overexposure and uneven compression.
Wherein, adopting the Gaussian-logarithm transformation to transform and stretch the first image to obtain a second image, comprises:
the first image is subjected to transform stretching by adopting the following formula:
(Gaussian-logarithm transform formula, given as an image in the original patent.)
wherein y represents a pixel value in the second image, x represents a pixel value in the first image, the parameter c1 is used for controlling the stretching contrast of the high gray scale region, and values thereof are distributed in the range of (0,3000), the parameter c2 is used for controlling the stretching contrast of the low gray scale region, and values thereof are distributed in the range of (0,1), and in practical application, values of c1 and c2 can be determined according to practical needs, which is not limited herein.
Referring to fig. 2, which shows G-L transformation curves for different values of c1 and c2, the abscissa is the normalized image pixel value and the ordinate is the G-L transformed pixel value. As can be seen from fig. 2, the G-L transformation stretches both the low-gray and high-gray areas and compresses the middle gray area. The larger c1 is, the stronger the stretching of the high-gray area; since the high-gray area corresponds to the skin line, the skin line becomes sharper. The larger c2 is, the stronger the stretching of the low-gray area. It can be understood that transforming and stretching the first image with the G-L transformation effectively solves the problem that the skin line is not sharp after an overexposed image is enhanced.
103, calculating a contrast normalization factor according to the second image and performing Laplace image decomposition to obtain a third image after contrast normalization;
in the embodiment of the present application, the average brightness of different images is not consistent due to the difference of radiation dose of different patients and tissues, and not all signals in the original breast image are related to diagnosis, which affects the contrast and brightness of the enhanced image. To avoid the effect of dose fluctuations on the image enhancement process, the images need to be normalized. Specifically, after the transformed and stretched second image is obtained, a contrast normalization factor may be calculated and laplacian image decomposition may be performed according to the second image, where the steps specifically include the following three steps:
step A, calculating the contrast of a second image by using the region of interest in the second image, and obtaining a contrast normalization factor according to the target contrast and the contrast of the second image;
the image normalization follows a constant contrast criterion, so that the consistency of the contrast between the glands and the fat in different images can be maintained by means of bringing the images with different contrasts to the same target contrast, for which reason the corresponding contrast normalization factors need to be calculated for different images.
In the present application, considering that the region of interest needs to be detected in order to calculate the contrast normalization factor, the present application calculates the contrast normalization factor using the second image transformed by G-L in order to avoid interpolation.
To calculate the contrast normalization factor, the breast effective region, i.e. the region of interest in the second image, must first be detected. The effective region can be extracted from the second image by jointly detecting it with a maximum variance (Otsu) threshold segmentation method and histogram analysis. This combined maximum-variance-threshold and histogram detection method achieves 100% accuracy and yields smooth edges; whether the image is a real breast image, a phantom, or an image of an acrylic plate or other test object, it can be detected accurately, so the method has good robustness.
In one possible implementation, the detection of the region of interest in the second image comprises the steps of:
1) calculating an initial segmentation point S0 in the second image by a maximum threshold segmentation method, specifically, the initial segmentation point S0 is a rough gray scale estimation for distinguishing different tissues in the image according to the difference of gray scale distribution;
2) calculating the maximum gray value Smax in the second image;
3) counting a histogram H in a range of [ S0, Smax ] from the second image;
4) carrying out normalization processing on the histogram H, and normalizing the histogram H into Hn;
5) counting the maximum occurrence frequency Hm of the histogram Hn, wherein the maximum occurrence frequency Hm refers to the frequency with the maximum occurrence frequency in the histogram;
6) selecting an occurrence frequency t less than Hm, wherein t can be specifically set to F × Hm, and F is in a range between (0, 1);
7) determining a position S1 corresponding to the occurrence frequency t in the histogram;
8) determining the final segmentation point S = S1 + d, where d is a preset offset value;
9) the segmentation point S is compared pixel by pixel with the second image. If the value of a certain pixel in the second image is larger than S, the pixel is marked as 0, otherwise, the pixel is marked as 1, so that the second image is segmented, and the region of interest of the second image is obtained.
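The following is a minimal sketch of this region-of-interest detection procedure under the assumptions above; the use of skimage's Otsu threshold for S0, the choice of the first bin whose frequency reaches t in step 7, and the fraction F and offset d are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np
from skimage.filters import threshold_otsu

def detect_roi(second_image: np.ndarray, frac: float = 0.1, d: float = 0.0) -> np.ndarray:
    """Return a binary mask (1 = breast region) for the G-L transformed image."""
    s0 = threshold_otsu(second_image)            # 1) initial segmentation point S0
    smax = second_image.max()                    # 2) maximum gray value
    hist, edges = np.histogram(second_image, bins=256, range=(s0, smax))  # 3)
    hn = hist / max(hist.sum(), 1)               # 4) normalized histogram Hn
    hm = hn.max()                                # 5) maximum occurrence frequency Hm
    t = frac * hm                                # 6) frequency threshold t < Hm
    idx = np.argmax(hn >= t)                     # 7) first bin whose frequency reaches t
    s1 = edges[idx]
    s = s1 + d                                   # 8) final segmentation point S
    return (second_image <= s).astype(np.uint8)  # 9) pixels <= S belong to the ROI
```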
In the embodiment of the present application, the image contrast is defined as a gray value corresponding to a median value of a histogram in a breast effective region (i.e. a region of interest), wherein the histogram median value is calculated as follows:
(Histogram-median formula, given as an image in the original patent.)
where Median represents the median of the histogram of the region of interest, F is the cumulative frequency of the bins below and closest to the median, l is the bin position corresponding to F, f is the next bin after l, and w is the width of the bin closest to the median.
After the histogram median is obtained, the gray value corresponding to the median is determined, i.e. the contrast of the second image is determined. Further, a quotient of the calculated target contrast divided by the contrast of the second image is used as a contrast normalization factor, and the formula is as follows:
Gc = Tc / Medianh
where Gc represents the contrast normalization factor, Tc represents the target contrast, and Medianh represents the contrast of the second image.
It can be understood that, in the embodiment of the present application, the contrast normalization factor is calculated by using the preset target contrast, so that the contrasts of the plurality of different images can be normalized to the target contrast, which can ensure that the contrasts of the glands and the fat layer in the plurality of different images are maintained at a constant level.
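A minimal sketch of this normalization-factor calculation is given below (not the patent's code); it approximates the histogram median with the median gray value of the ROI pixels, and the target contrast Tc is an assumed preset.

```python
import numpy as np

def contrast_normalization_factor(second_image: np.ndarray,
                                  roi_mask: np.ndarray,
                                  target_contrast: float = 0.35) -> float:
    """Gc = Tc / Medianh, where Medianh is the median gray value inside the ROI."""
    median_h = float(np.median(second_image[roi_mask > 0]))
    return target_contrast / median_h
```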
B, performing Laplacian image decomposition on the second image by adopting a Laplacian pyramid to obtain a decomposed multi-scale layer image;
it should be noted that there is no precedence order between the step of calculating the contrast normalization factor and the step of decomposing the image using the laplacian pyramid, the calculation of the contrast normalization factor and the image decomposition may be performed simultaneously, or the image decomposition may be performed first and then the contrast normalization factor is calculated, and in the actual processing, the setting may be performed based on the needs, and no limitation is made.
In the embodiment of the present application, a laplacian pyramid is used to perform laplacian image decomposition to obtain a decomposed multi-scale layer image, and the method specifically includes the following steps:
The second image obtained after the G-L transformation is used as the bottom layer image G0 of the Laplacian pyramid. The second image is convolved with a Gaussian kernel of preset size (for example 5 × 5), and the convolved image is down-sampled by removing the even rows and columns to obtain image G1; this process is called the Reduce operation. The down-sampled image G1 is then up-sampled by inserting zeros at the even rows and even columns and convolved with the same Gaussian kernel; this process is called the Expand operation. Finally, subtracting the Expand result from G0 gives the first layer image L0 of the Laplacian pyramid. Taking image G1 as input, the Reduce and Expand operations are repeated, and after several iterations a Laplacian pyramid structure is formed, completing the decomposition of the second image into the multi-scale layer images.
In the embodiment of the present application, the gaussian kernel is the only kernel that can generate a multi-scale space, and the scale space L (x, y, σ) of an image is defined as a convolution operation of the original image I (x, y) with a 2-dimensional gaussian function G (x, y, σ) with variable scale, i.e. the expression formula in the form of scale space is as follows:
L(x, y, σ) = G(x, y, σ) * I(x, y)
wherein,
G(x, y, σ) = 1 / (2πσ²) · exp(-(x² + y²) / (2σ²))
wherein, the parameter sigma represents the width of the surface envelope corresponding to the Gaussian kernel and controls the local influence range of the Gaussian kernel function.
The Reduce operation is the process of down-sampling the image by removing every other row and column (i.e., removing the even rows and even columns) after convolution filtering with the Gaussian kernel. Suppose Gl-1 denotes the (l-1)-th layer Gaussian image and Gl denotes the image after the Reduce operation (the next, coarser Gaussian layer); then
Gl(i, j) = Σm Σn W(m, n) · Gl-1(2i + m, 2j + n),  m, n ∈ {-2, ..., 2}
W(m,n)=W(m)*W(n)
W(r) = [1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2],  r = -2, ..., 2
W(r) is a one-dimensional Gaussian convolution kernel of width 5, i is the image row index, j is the image column index, m is the row index of the convolution kernel, n is the column index of the convolution kernel, and a is the one-dimensional Gaussian kernel parameter with a value of 0.375. The Reduce operation is equivalent to one step of Gaussian pyramid generation, and each execution of Reduce halves the image dimensions.
The Expand operation is the process of up-sampling the image by zero interpolation and then filtering it. Suppose Gl denotes the l-th layer Gaussian image and El+1 denotes the image obtained by expanding Gl; then
El+1(i, j) = 4 · Σm Σn W(m, n) · Gl((i - m)/2, (j - n)/2),  m, n ∈ {-2, ..., 2}, where only terms with integer coordinates (i - m)/2 and (j - n)/2 are included
Where W is the same Gaussian kernel as in Reduce operation. i is the image row index, j is the image column index, m is the row index of the convolution kernel, and n is the column index of the convolution kernel. Every time the image is amplified once by Expand, the dimension of the image is changed to 2 times of the original dimension. From the above description, the mathematical expression of the laplacian pyramid is as follows:
Ll = Gl - Expand(Gl+1)
To better understand the decomposition process of the Laplacian pyramid, assume the image is decomposed into a 7-layer Laplacian pyramid; the decomposition process is shown in fig. 3. In fig. 3, G1-G7 represent the layers of the Gaussian pyramid, L0-L6 the detail signals, and G7 the approximation signal (the last Gaussian layer); Expand is the enlargement operator and Reduce the reduction operator. Through the above process, the multi-scale layer images L0-L6 are obtained after the second image undergoes Laplacian image decomposition.
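A minimal sketch of this decomposition using OpenCV's pyramid operations is shown below; pyrDown and pyrUp play the roles of Reduce and Expand, and the number of levels is an example, not a value fixed by the patent.

```python
import cv2
import numpy as np

def laplacian_decompose(second_image: np.ndarray, levels: int = 7):
    """Return ([L0, ..., L_{levels-1}], G_top): detail layers plus the approximation."""
    g = second_image.astype(np.float32)
    details = []
    for _ in range(levels):
        g_next = cv2.pyrDown(g)                                          # Reduce
        expanded = cv2.pyrUp(g_next, dstsize=(g.shape[1], g.shape[0]))   # Expand
        details.append(g - expanded)                                     # detail layer L_k
        g = g_next
    return details, g                                                    # g is the Gaussian top (approximation)
```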
And step C, multiplying the contrast normalization factor by the multi-scale layer image to obtain a third image.
After obtaining the contrast normalization factor and the decomposed multi-scale layer images, the contrast normalization factor is multiplied by the multi-scale layer images L0-L6 to obtain the third image. Assuming xi denotes the i-th multi-scale layer image and yi the normalized multi-scale layer image, then
yi = Gc · xi
where Gc denotes the contrast normalization factor. In the case of underexposure or overexposure, the contrast normalization factor ensures that the contrast of the relevant tissues is mapped to the same level, keeping the tissue contrast consistent across different images.
It should be noted that, instead of contrast normalization, brightness normalization or noise normalization may be used. The processing of brightness normalization or noise normalization is similar to that of contrast normalization and is not repeated here.
104, performing enhancement processing and Laplace image reconstruction on the third image to obtain an enhanced fourth image;
in this embodiment of the application, the third image may be subjected to detail enhancement processing, edge enhancement processing, and dynamic range compression processing to obtain an enhanced multi-scale layer image, and the enhanced multi-scale layer image may be subjected to laplacian image reconstruction to obtain a fourth image. Which will be described separately below.
(I) detail enhancement processing
Detail enhancement is realized by applying a nonlinear transformation to the contrast-normalized third image (the multi-scale layer images). In the multi-scale layer images contained in the third image, small coefficients represent fine details, which must be amplified to improve their visibility. Large coefficients, on the other hand, represent strong edges and occupy the main dynamic range; compressing them causes no information loss and improves the overall contrast of the image.
Specifically, the detail enhancement is realized by the following nonlinear transformation:
(Basic nonlinear detail-enhancement curve, defined over -M < x < M, given as an image in the original patent.)
where x represents a coefficient in the multi-scale layer image, y represents the corresponding coefficient in the enhanced multi-scale layer image, M is the maximum coefficient value in the multi-scale layer image, and a is a global magnification factor used to constrain the dynamic range of the enhanced multi-scale layer image so that it stays consistent with that of the original multi-scale layer image. To amplify fine details while compressing strong edges, the function is designed as a nonlinear curve whose slope decreases as the coefficient magnitude increases. The parameter p controls the nonlinearity of the curve. Referring to fig. 4, fig. 4 shows three function curves with different nonlinearity from the prior art.
As can be seen from fig. 4, small coefficients near the origin may be excessively amplified. Since noise is generally concentrated in the small coefficients, this causes the noise in the image to be amplified. In the embodiment of the present application, in order to suppress noise, an improved nonlinear curve is used for detail enhancement, as follows:
(Improved nonlinear detail-enhancement curve, given as images in the original patent: a linear segment near the origin joined to a power-function segment, defined over -M < x < M with 0 < xc << M.)
The improved nonlinear curve is a composite curve consisting of a linear segment near the origin and a power-function segment; the intersection point xc defines the transition point between the linear segment and the power function. Choosing a suitable transition point preserves the amplification of fine details without excessively amplifying noise. The composite curve is shown in fig. 5.
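The exact curve appears only as an image in the patent; the sketch below assumes one common form of such a composite curve (a power function y = a·M·sign(x)·(|x|/M)^p beyond xc, with a linear segment through the origin matched to it at xc) purely for illustration.

```python
import numpy as np

def detail_enhance(coeffs: np.ndarray, a: float, p: float, xc: float) -> np.ndarray:
    """Assumed composite curve: linear near the origin, power function beyond xc."""
    m = np.max(np.abs(coeffs)) + 1e-12
    power = a * m * np.sign(coeffs) * (np.abs(coeffs) / m) ** p   # power-function branch
    slope = a * m * (xc / m) ** p / xc                            # slope meeting the power branch at xc
    linear = slope * coeffs                                       # linear branch near the origin
    return np.where(np.abs(coeffs) < xc, linear, power)
```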
(II) edge enhancement processing
Edge enhancement processing multiplies each multi-scale layer image by a scale-dependent coefficient. Unlike conventional edge enhancement, in the embodiment of the present application the edge coefficient increases gradually with scale so that the processed image looks softer: when the edge-enhancement coefficient increases with scale the image tends to be soft, and when it decreases with scale the image tends to be sharp. For a better understanding, assume aek represents the edge-enhancement coefficient of the k-th layer image; then:
aek = 1, for k < L - ne
(For k ≥ L - ne, the edge-enhancement coefficient is given by a formula, shown as an image in the original patent, controlled by fe.)
where fe is a function controlling the degree of edge enhancement, ne is the number of edge-enhanced layers, k denotes the k-th layer image, and L denotes the total number of multi-scale layers.
(III) dynamic Range compression processing
The dynamic range compression process likewise multiplies each multi-scale layer image by a scale-dependent coefficient. Specifically, let alk represent the dynamic-range compression factor of the k-th layer image; then:
alk = 1, for k < L - nl
(For k ≥ L - nl, the dynamic-range compression factor is given by a formula, shown as an image in the original patent, controlled by fl.)
where fl represents the dynamic-range compression strength, nl indicates the number of dynamic-range-compressed layers, and L indicates the total number of multi-scale layers.
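As an illustration of how these per-layer factors act (the functions fe and fl themselves appear only as images in the patent, so the ramps below are placeholder assumptions):

```python
import numpy as np

def apply_layer_factors(details, edge_coeffs, dr_coeffs):
    """Multiply each multi-scale layer by its edge-enhancement and
    dynamic-range-compression coefficients (ae_k and al_k)."""
    return [layer * ae * al for layer, ae, al in zip(details, edge_coeffs, dr_coeffs)]

# Placeholder coefficient schedules for a 7-layer pyramid with n_e = n_l = 3:
L = 7
edge_coeffs = np.concatenate([np.ones(L - 3), np.linspace(1.0, 1.4, 3)])  # grows with scale, softer look
dr_coeffs   = np.concatenate([np.ones(L - 3), np.linspace(1.0, 0.6, 3)])  # attenuates coarse layers, compression
```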
Laplacian image reconstruction:
in the embodiment of the application, after the detail enhancement processing, the edge enhancement processing and the dynamic range compression processing are performed on the third image, the laplacian image reconstruction is performed on the processed multi-scale layer image, so as to obtain an enhanced fourth image.
It can be understood that Laplacian pyramid decomposition and Laplacian image reconstruction are inverse operations: the reconstructed image is obtained by adding each enhanced multi-scale layer to the approximation layer. Please refer to fig. 6, which is a schematic diagram of the Laplacian image reconstruction process in an embodiment of the present application. Assume the enhanced third image consists of 8 pyramid levels, where G7 (counting from layer 0) is the last Gaussian layer, i.e. the approximation signal; reconstruction starts from the bottom level G7. L0-L6 denote the enhanced Laplacian images, and Expand denotes the enlargement operator.
It should be noted that details become progressively sparser during decomposition: the series of layer coefficient differences expresses the details at the respective scales, each scale corresponds to an original frequency band in the frequency domain, and the bands of adjacent layers partially overlap. The Laplacian image reconstruction process starts from the image at the largest scale, i.e. the final approximation image, interpolates it to the scale of the previous layer, and adds it to the detail signal of that layer. This interpolate-and-add step is iterated until the original image size is recovered, completing the reconstruction. It can be understood that if the down-sampling during Laplacian pyramid decomposition and the interpolation during reconstruction correspond exactly, the reconstructed image is consistent with the original image.
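A minimal sketch of this inverse operation, matching the decomposition sketch above (again using OpenCV's pyrUp as the Expand operator):

```python
import cv2
import numpy as np

def laplacian_reconstruct(details, g_top: np.ndarray) -> np.ndarray:
    """Rebuild the image from enhanced detail layers and the approximation layer."""
    g = g_top
    for detail in reversed(details):                                  # from the coarsest layer down to L0
        g = cv2.pyrUp(g, dstsize=(detail.shape[1], detail.shape[0]))  # Expand to the next scale
        g = g + detail                                                # add the detail signal back
    return g
```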
And 105, performing gradient processing on the fourth image by using a gradient curve suitable for display to obtain a target mammary gland image.
In the embodiment of the present application, the fourth image after enhancement can be obtained by using the laplacian image reconstruction, but the problem of poor image layering is still present, the contrast and brightness between tissues are not differentiated, and the image cannot be applied to clinical diagnosis. And performing gradient processing on the fourth image obtained by reconstruction by using a gradient curve suitable for display to obtain a target mammary gland image.
The process of performing gradient processing with a gradient curve suitable for display may be referred to as layering (gradation) processing. The layering process creates an S-shaped gradient curve from three consecutive straight or curved segments: a foot branch, a trunk, and a shoulder branch, where the foot and shoulder branches are nonlinear and the trunk is nominally linear. The S-shaped gradient curve is shown in fig. 7: the curve comprises three branches, each containing a characteristic point, t0, ta, and t1. At each branch junction, the values and slopes of the two branches are matched so as to produce a mathematically continuous curve.
Correspondingly, in the fourth image, t0 is the minimum value related to the diagnosis signal in the fourth image, t1 is the maximum value of the diagnosis related signal in the fourth image, and ta is the median of the statistical histogram in the region of interest in the fourth image. It can be understood that the determination manner of the region of interest in the fourth image is similar to the determination manner of the region of interest in the second image, and the detailed description thereof is omitted here.
In the embodiment of the application, a minimum value t0, a maximum value t1 and a median ta of a statistical histogram in a region of interest of the fourth image in the fourth image are obtained, an S-shaped gradient curve including foot branches, a trunk and shoulder branches is constructed by using the minimum value t0, the maximum value t1 and the median ta, and gradient processing is performed on the fourth image by using the S-shaped gradient curve to obtain a target breast image.
The minimum t0 and maximum t1 define the range of relevant pixel gray levels that is mapped to a predefined output range, denoted [y0, y1] in fig. 7. Input pixels outside the range [t0, t1] are typically mapped to a fixed maximum or minimum output value. The median value ta may be referred to as the saddle point; its slope is fixed across all images and may be a preset slope value. Fixing the slope of the straight trunk segment ensures consistent contrast over the relevant gray range of the image.
In the embodiment of the present application, the mathematical definition of the S-shaped gradation curve (gradation curve) is as follows:
Foot branch: (formula given as an image in the original patent)
Trunk: y(t) = y0 + hf + gb · (t - tf), t ∈ [tf, ts]
Shoulder branch: (formula given as an image in the original patent)
wherein,
wf=((ta-t0)·gb-ya+y0)/(gb-gf)
hf=wf·gf
ws=((t1-ta)·gb-y1+ya)/(gb-gs)
hs=ws·gs
wherein y0, ya, y1, g0, gf, gb, gs, g1 are predefined constant values, t0 is the minimum value of the diagnostically relevant signal in the fourth image, t1 is the maximum value of the diagnostically relevant signal in the fourth image, ta is the median of the statistical histogram in the region of interest of the fourth image, wf represents the width of the foot branch, hf represents the height of the foot branch, ws represents the width of the shoulder branch, and hs represents the height of the shoulder branch.
In order to better understand the technical solution of the embodiment of the present application, the calculation processes of the width wf and the height hf of the foot branch and the calculation processes of the width ws and the height hs of the shoulder branch will be described below.
The mathematical definition of the foot branches is as follows:
(The foot-branch definition is given as images in the original patent.)
the width wf and height hf of the foot branches can be obtained by solving the following two equations, which specify the height of the left side of the foot branches and the trunk, respectively. The method comprises the following specific steps:
ya-y0-hf=(ta-t0-wf)·gb
hf=wf·gf
Solving this system of two linear equations for the width wf and height hf of the foot branch yields:
wf=((ta-t0)·gb-ya+y0)/(gb-gf)
hf=wf·gf
similarly, the width ws and height hs of the shoulder branches can also be obtained by solving the following two equations that specify the heights to the right of the shoulder branches and the trunk branches, respectively, as follows:
y1-ya-hs=(t1-ta-ws)·gb
hs=ws·gs
Solving this system of two linear equations for the width ws and height hs of the shoulder branch yields:
ws=((t1-ta)·gb-y1+ya)/(gb-gs)
hs=ws·gs
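Since the foot- and shoulder-branch formulas appear only as images in the patent, the sketch below only computes the branch widths and heights from the equations above and applies the linear trunk, clamping values in the foot and shoulder regions; the actual nonlinear foot and shoulder segments would replace the clamping, and all constants are illustrative assumptions.

```python
import numpy as np

def gradation_curve(t: np.ndarray, t0: float, ta: float, t1: float,
                    y0: float, ya: float, y1: float,
                    gf: float, gb: float, gs: float) -> np.ndarray:
    """Sketch of the S-shaped gradation mapping; exact foot/shoulder shapes omitted."""
    wf = ((ta - t0) * gb - ya + y0) / (gb - gf)   # width of the foot branch
    hf = wf * gf                                  # height of the foot branch
    ws = ((t1 - ta) * gb - y1 + ya) / (gb - gs)   # width of the shoulder branch
    hs = ws * gs                                  # height of the shoulder branch
    tf, ts = t0 + wf, t1 - ws                     # trunk endpoints
    y = y0 + hf + gb * (t - tf)                   # linear trunk with fixed slope gb
    # Placeholder for the nonlinear foot/shoulder branches: clamp to the output range.
    return np.clip(y, y0, y1)
```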
In the embodiment of the present application, the minimum value t0, the maximum value t1 and the median value ta are three important feature points that depend on the intrinsic characteristics of each image; in the breast image, the saddle point corresponds to pixel values within the gland.
In the embodiment of the application, the problem of poor visibility of subcutaneous tissues and skin lines caused by image overexposure can be effectively solved by performing transformation stretching by using Gaussian-logarithmic transformation, the problem of inconsistent brightness and contrast after image enhancement can be effectively solved by performing contrast normalization on the image by using a contrast normalization factor, and in addition, the layering sense among different tissues in the image can be effectively improved by using a mode of performing gradient processing by using a gradient curve suitable for display, so that the application to clinical diagnosis is facilitated.
It should be noted that, in the embodiment of the present application, Laplacian image decomposition is only one optional way to decompose the image; in practical applications other image decomposition methods may be used, for example a UNIQUE-style decomposition may replace the Laplacian decomposition, and the specific method may be chosen according to the practical situation.
Please refer to fig. 8, which is a schematic structural diagram of a breast image enhancement device in an embodiment of the present application, including:
a normalization module 801, configured to perform normalization processing on a pixel value of an original breast image to obtain a first image;
a transformation stretching module 802, configured to perform transformation stretching on the first image by using gaussian-logarithmic transformation to obtain a second image;
a contrast module 803, configured to calculate a contrast normalization factor according to the second image and perform laplacian image decomposition to obtain a third image after contrast normalization;
an enhancement reconstruction module 804, configured to perform enhancement processing and laplacian image reconstruction on the third image to obtain an enhanced fourth image;
and a gradual change module 805, configured to perform gradual change processing on the fourth image by using a gradual change curve suitable for display, so as to obtain a target breast image.
In the embodiment of the present application, the normalization module 801, the transformation stretching module 802, the contrast module 803, the enhancement reconstruction module 804, and the gradual change module 805 correspond respectively to the contents described in steps 101 to 105 of the embodiment shown in fig. 1; for details, reference may be made to the relevant contents of the embodiment shown in fig. 1, which are not repeated here.
In the embodiment of the application, the problem of poor visibility of subcutaneous tissues and skin lines caused by image overexposure can be effectively solved by performing transformation stretching by using Gaussian-logarithmic transformation, the problem of inconsistent brightness and contrast after image enhancement can be effectively solved by performing contrast normalization on the image by using a contrast normalization factor, and in addition, the layering sense among different tissues in the image can be effectively improved by using a mode of performing gradient processing by using a gradient curve suitable for display, so that the application to clinical diagnosis is facilitated.
Further, based on the content described in the foregoing embodiments, an electronic device is also provided in the embodiments of the present application, where the electronic device includes at least one processor and a memory; wherein the memory stores computer execution instructions; the at least one processor executes computer-executable instructions stored in the memory to implement the aspects described in the embodiments of the breast image enhancement method.
The electronic device provided in this embodiment may be used to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects are similar, which are not described herein again.
The embodiment of the present application further provides a computer-readable storage medium storing computer-executable instructions which, when executed by a processor, implement the breast image enhancement method of the above embodiments.
For better understanding of the embodiment of the present application, referring to fig. 9, fig. 9 is a schematic diagram of a hardware structure of an electronic device according to the embodiment of the present application. The electronic device may be the electronic device described above.
As shown in fig. 9, the electronic apparatus 90 of the present embodiment includes: a processor 901 and a memory 902; wherein
A memory 902 for storing computer-executable instructions;
the processor 901 is configured to execute computer-executable instructions stored in the memory to implement the steps performed by the electronic device in the above embodiments.
Alternatively, the processor 901 is configured to execute computer-executable instructions stored in the memory to implement the steps performed by the network device in the foregoing embodiments.
Reference may be made in particular to the description relating to the method embodiments described above.
Alternatively, the memory 902 may be separate or integrated with the processor 901.
When the memory 902 is provided separately, the device further comprises a bus 903 for connecting said memory 902 and the processor 901.
Embodiments of the present application further provide a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when a processor executes the computer-executable instructions, the steps performed by the electronic device in the above embodiments are implemented.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules is only one logical division, and other divisions may be realized in practice, for example, a plurality of modules may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one unit. The unit formed by the modules can be realized in a hardware form, and can also be realized in a form of hardware and a software functional unit.
The integrated module implemented in the form of a software functional module may be stored in a computer-readable storage medium. The software functional module is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) or a processor (processor) to execute some steps of the methods according to the embodiments of the present application.
It should be understood that the processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present application may be directly implemented by a hardware processor, or by a combination of hardware and software modules in the processor.
The memory may comprise a high-speed RAM memory, and may further comprise a non-volatile storage NVM, such as at least one disk memory, and may also be a usb disk, a removable hard disk, a read-only memory, a magnetic or optical disk, etc.
The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, the buses in the figures of the present application are not limited to only one bus or one type of bus.
The storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer.
An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Of course, the processor and the storage medium may also reside as discrete components in an electronic device or host device.
Those of ordinary skill in the art will understand that all or a portion of the steps of the above method embodiments may be implemented by hardware related to program instructions. The program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the method embodiments described above. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk or an optical disk.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A mammary gland image enhancement method, the method comprising:
normalizing the pixel value of the original mammary gland image to obtain a first image;
transforming and stretching the first image by adopting Gaussian-logarithm transformation to obtain a second image;
calculating a contrast normalization factor according to the second image and performing Laplace image decomposition to obtain a third image after contrast normalization;
performing enhancement processing and Laplace image reconstruction on the third image to obtain an enhanced fourth image;
and performing gradation processing on the fourth image by using a gradation curve suitable for display to obtain a target mammary gland image.
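Purely as an editor's illustration of the normalization step (and not part of the claim), a minimal Python sketch is shown below; the min-max scheme, the [0, 1] target range and the function name normalize_pixels are assumptions, since the claim does not specify them.

```python
import numpy as np

def normalize_pixels(raw):
    """Min-max normalize the raw mammography frame to [0, 1] (the "first image").
    The claim does not name a normalization scheme, so min-max scaling here is
    only an assumption."""
    raw = raw.astype(np.float32)
    lo, hi = float(raw.min()), float(raw.max())
    if hi <= lo:                      # constant image: avoid division by zero
        return np.zeros_like(raw)
    return (raw - lo) / (hi - lo)
```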
2. The method of claim 1, wherein transforming and stretching the first image by adopting Gaussian-logarithm transformation to obtain a second image comprises:
performing transform stretching on the first image by adopting the following formula:
[Formula published as image FDA0002443248160000011 in the original; not reproduced in this text.]
wherein y represents a pixel value in the second image, x represents the corresponding pixel value in the first image, the parameter c1 controls the stretching contrast of the high gray-scale region, with values distributed in the range (0, 3000), and the parameter c2 controls the stretching contrast of the low gray-scale region, with values distributed in the range (0, 1).
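Because the claimed formula is published only as an embedded image, the sketch below is a purely hypothetical stand-in: it shows one way parameters in the stated ranges could separately shape low-gray and high-gray contrast, but neither the functional form, the default values, nor the 12-bit interpretation of c1 comes from the patent.

```python
import numpy as np

def gauss_log_stretch(x, c1=1200.0, c2=0.5, full_scale=4095.0):
    """Hypothetical stand-in for the claimed Gaussian-logarithm stretch.
    c2 in (0, 1) shapes the low-gray response; c1 in (0, 3000) shapes the
    high-gray response, interpreted here on a 12-bit gray scale. This is not
    the patent's actual formula."""
    x = np.clip(np.asarray(x, dtype=np.float32), 0.0, 1.0)   # normalized first image
    low = np.log1p(x / c2) / np.log1p(1.0 / c2)              # logarithmic lift of dark tissue
    high = 1.0 - np.exp(-((x * full_scale) ** 2) / (2.0 * c1 ** 2))  # Gaussian roll-in for bright regions
    return 0.5 * (low + high)                                # blended "second image"
```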
3. The method of claim 1, wherein calculating a contrast normalization factor according to the second image and performing Laplace image decomposition to obtain a contrast-normalized third image comprises:
calculating the contrast of the second image by using the region of interest in the second image, and obtaining a contrast normalization factor according to the target contrast and the contrast of the second image;
performing Laplacian image decomposition on the second image by adopting a Laplacian pyramid to obtain a decomposed multi-scale layer image;
and multiplying the contrast normalization factor and the multi-scale layer image to obtain the third image.
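A minimal sketch of the decomposition and scaling in claim 3, assuming an OpenCV Laplacian pyramid with a hypothetical number of levels, and assuming every layer (including the low-pass residual) is multiplied by the contrast normalization factor, which the claim leaves unspecified.

```python
import cv2
import numpy as np

def laplacian_decompose(img, levels=4):
    """Laplacian pyramid of the second image (float32, single channel)."""
    gaussians = [np.asarray(img, dtype=np.float32)]
    for _ in range(levels):
        gaussians.append(cv2.pyrDown(gaussians[-1]))
    layers = []
    for fine, coarse in zip(gaussians[:-1], gaussians[1:]):
        h, w = fine.shape[:2]
        layers.append(fine - cv2.pyrUp(coarse, dstsize=(w, h)))  # band-pass detail layer
    layers.append(gaussians[-1])                                 # low-pass residual
    return layers

def apply_contrast_normalization(layers, factor):
    """Scale every layer by the claim-4 factor; whether the residual is also
    scaled is not stated in the claims, so scaling it here is an assumption."""
    return [layer * factor for layer in layers]
```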
4. The method of claim 3, wherein calculating the contrast of the second image using the region of interest in the second image and obtaining a contrast normalization factor according to the target contrast and the contrast of the second image comprises:
jointly detecting the second image by utilizing a maximum variance threshold segmentation method and histogram analysis, and extracting an interested region from the second image;
calculating a median value of the histogram of the region of interest, and taking a gray value corresponding to the median value as the contrast of the second image;
and calculating a quotient value of the target contrast divided by the contrast of the second image, and taking the quotient value as the contrast normalization factor.
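A compact sketch of the factor computation in claims 3 and 4, assuming Otsu thresholding alone stands in for the claimed joint maximum-variance/histogram detection; the function name and the target_contrast default are hypothetical.

```python
import cv2
import numpy as np

def contrast_normalization_factor(img, target_contrast=0.25):
    """Factor = target contrast / measured contrast (claim 4). Otsu thresholding
    alone stands in for the claimed joint maximum-variance and histogram
    detection, and target_contrast is a hypothetical value, not from the patent."""
    u8 = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    roi = img[mask > 0]                        # breast region of interest
    contrast = float(np.median(roi))           # gray value at the ROI histogram median
    return target_contrast / max(contrast, 1e-6)
```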
5. The method of claim 4, wherein the median of the histogram of the region of interest is calculated as follows:
[Formula published as image FDA0002443248160000021 in the original; not reproduced in this text.]
where Median denotes the median of the histogram of the region of interest, F is the cumulative frequency of the bins below and closest to the median, l is the bin position corresponding to F, f is the frequency of the bin immediately following l, and w is the width of the bin closest to the median.
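The formula itself is published only as an image. The variable definitions above match the standard interpolated (grouped-data) median, so a plausible reconstruction, writing N for the total pixel count of the region of interest, is the following; this is an editorial assumption, not the published formula.

```latex
\mathrm{Median} = l + \frac{N/2 - F}{f}\, w
```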
6. The method according to claim 1, wherein performing enhancement processing and Laplace image reconstruction on the third image to obtain an enhanced fourth image comprises:
performing detail enhancement processing, edge enhancement processing and dynamic range compression processing on the third image to obtain an enhanced multi-scale layer image;
and performing Laplace image reconstruction on the enhanced multi-scale layer image to obtain the fourth image.
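A sketch of claim 6 under simple assumptions: detail enhancement as a gain on the finest layer, edge enhancement as a gain on the remaining band-pass layers, dynamic range compression as an attenuation of the low-pass residual, followed by an upsample-and-add Laplacian reconstruction. The gain values and the per-level mapping are illustrative, not from the patent; the layer list is the one produced by the decomposition sketch under claim 3.

```python
import cv2

def enhance_and_reconstruct(layers, detail_gain=1.6, edge_gain=1.2, dr_factor=0.7):
    """Per-level gains sharpen detail (finest layer) and edges (coarser band-pass
    layers); attenuating the low-pass residual compresses the dynamic range.
    All three values are illustrative assumptions."""
    enhanced = [layers[0] * detail_gain]
    enhanced += [layer * edge_gain for layer in layers[1:-1]]
    enhanced.append(layers[-1] * dr_factor)
    # Laplacian reconstruction: upsample the residual and add back each band.
    img = enhanced[-1]
    for band in reversed(enhanced[:-1]):
        h, w = band.shape[:2]
        img = cv2.pyrUp(img, dstsize=(w, h)) + band
    return img                                  # the "fourth image"
```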
7. The method according to claim 1, wherein performing gradation processing on the fourth image by using a gradation curve suitable for display to obtain a target mammary gland image comprises:
acquiring a minimum value t0 of the diagnosis-related signal in the fourth image, a maximum value t1 of the diagnosis-related signal in the fourth image, and a median value ta of a statistical histogram in the region of interest of the fourth image;
constructing an S-shaped gradation curve comprising a foot branch, a trunk and a shoulder branch by using the minimum value t0, the maximum value t1 and the median value ta;
and performing gradation processing on the fourth image by using the S-shaped gradation curve to obtain the target mammary gland image.
8. The method according to claim 7, characterized in that the S-shaped gradation curve is defined as follows:
Foot branch: [formula published as image FDA0002443248160000022 in the original; not reproduced in this text]
Trunk: y(t) = y0 + hf + gb·(t − tf), t ∈ [tf, ts]
Shoulder branch: [formula published as image FDA0002443248160000023 in the original; not reproduced in this text]
wherein,
wf=((ta-t0)·gb-ya+y0)/(gb-gf)
hf=wf·gf
ws=((t1-ta)·gb-y1+ya)/(gb-gs)
hs=ws·gs
wherein y0, ya, y1, g0, gf, gb, gs, g1 are predefined constant values, t0 is the minimum value of the diagnostically relevant signal in the fourth image, t1 is the maximum value of the diagnostically relevant signal in the fourth image, ta is the median of the statistical histogram in the region of interest of the fourth image, wf represents the width of the foot branch, hf represents the height of the foot branch, ws represents the width of the shoulder branch, and hs represents the height of the shoulder branch.
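Claim 8 publishes the trunk segment and the width/height relations, but the foot and shoulder formulas only as images. The sketch below therefore takes the trunk as written, assumes the junctions tf = t0 + wf and ts = t1 - ws, and fills the foot and shoulder with quadratic blends; the default constants are illustrative and chosen so that gf = (g0 + gb)/2 and gs = (gb + g1)/2, which makes the assumed segments join the trunk continuously.

```python
import numpy as np

def sigmoid_gradation_curve(t, t0, ta, t1,
                            y0=0.0, ya=0.42, y1=1.0,
                            g0=0.25, gf=0.75, gb=1.25, gs=0.75, g1=0.25):
    """Trunk and the wf/hf/ws/hs relations follow claim 8; the foot and shoulder
    segments are quadratic blends assumed by the editor, since their formulas
    are published only as images. Default constants are illustrative."""
    wf = ((ta - t0) * gb - ya + y0) / (gb - gf)   # foot width  (claim 8)
    hf = wf * gf                                  # foot height (claim 8)
    ws = ((t1 - ta) * gb - y1 + ya) / (gb - gs)   # shoulder width  (claim 8)
    hs = ws * gs                                  # shoulder height (claim 8)
    tf, ts = t0 + wf, t1 - ws                     # assumed junction points

    t = np.atleast_1d(np.asarray(t, dtype=np.float64))
    y = np.empty_like(t)
    foot, shoulder = t < tf, t > ts
    trunk = ~foot & ~shoulder

    u = np.clip(t[foot] - t0, 0.0, wf)            # assumed quadratic foot: slope g0 -> gb
    y[foot] = y0 + g0 * u + (gb - g0) * u ** 2 / (2.0 * wf)
    y[trunk] = y0 + hf + gb * (t[trunk] - tf)     # trunk, as published in claim 8
    v = np.clip(t1 - t[shoulder], 0.0, ws)        # assumed quadratic shoulder: slope gb -> g1
    y[shoulder] = y1 - g1 * v - (gb - g1) * v ** 2 / (2.0 * ws)
    return y

# Example on a normalized gray axis, with t0, ta, t1 taken from the fourth image:
# sigmoid_gradation_curve(np.linspace(0, 1, 4096), t0=0.0, ta=0.45, t1=1.0)
```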
9. A mammary gland image enhancement device, characterized in that the device comprises:
the normalization module is used for performing normalization processing on the pixel value of the original mammary gland image to obtain a first image;
the transformation stretching module is used for carrying out transformation stretching on the first image by adopting Gaussian-logarithm transformation to obtain a second image;
the contrast module is used for calculating a contrast normalization factor according to the second image and performing Laplace image decomposition to obtain a third image after the contrast normalization;
the enhancement reconstruction module is used for carrying out enhancement processing and Laplace image reconstruction on the third image to obtain an enhanced fourth image;
and the gradation module is used for performing gradation processing on the fourth image by using a gradation curve suitable for display to obtain a target mammary gland image.
10. A computer-readable storage medium, wherein computer-executable instructions are stored in the computer-readable storage medium, and the instructions, when executed by a processor, implement the mammary gland image enhancement method according to any one of claims 1 to 8.
CN202010271300.9A 2020-04-08 2020-04-08 Mammary gland image enhancement method and device and computer readable storage medium Active CN111507912B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010271300.9A CN111507912B (en) 2020-04-08 2020-04-08 Mammary gland image enhancement method and device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010271300.9A CN111507912B (en) 2020-04-08 2020-04-08 Mammary gland image enhancement method and device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111507912A true CN111507912A (en) 2020-08-07
CN111507912B CN111507912B (en) 2023-03-24

Family

ID=71875927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010271300.9A Active CN111507912B (en) 2020-04-08 2020-04-08 Mammary gland image enhancement method and device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111507912B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1347414A1 (en) * 2002-02-22 2003-09-24 Agfa-Gevaert Method for enhancing the contrast of an image.
CN102142133A (en) * 2011-04-19 2011-08-03 西安电子科技大学 Mammary X-ray image enhancement method based on non-subsampled Directionlet transform and compressive sensing
CN107578374A (en) * 2017-07-28 2018-01-12 深圳市安健科技股份有限公司 The drawing process and computer-readable recording medium of x-ray image
CN109035160A (en) * 2018-06-29 2018-12-18 哈尔滨商业大学 The fusion method of medical image and the image detecting method learnt based on fusion medical image
CN110232668A (en) * 2019-06-17 2019-09-13 首都师范大学 A kind of multi-scale image Enhancement Method
CN110610498A (en) * 2019-08-13 2019-12-24 上海联影智能医疗科技有限公司 Mammary gland molybdenum target image processing method, system, storage medium and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Jin Wei et al.: "Breast image denoising and enhancement method based on anti-aliasing contourlet transform", Journal of Circuits and Systems *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113256548A (en) * 2021-06-08 2021-08-13 汪知礼 Multi-scale pattern recognition method and system
CN114298916A (en) * 2021-11-11 2022-04-08 电子科技大学 X-Ray image enhancement method based on gray stretching and local enhancement

Also Published As

Publication number Publication date
CN111507912B (en) 2023-03-24

Similar Documents

Publication Publication Date Title
EP2380132B1 (en) Denoising medical images
US8290292B2 (en) Method of generating a multiscale contrast enhanced image
KR20130038794A (en) Method of noise reduction in digital x-ray frames series
JP2014505491A (en) Reduction of non-linear resolution of medical images
JP2004503029A (en) Increased contrast in undistorted images
CN102428478A (en) Multi-scale image normalization and enhancement
US20110205227A1 (en) Method Of Using A Storage Switch
CN111507912B (en) Mammary gland image enhancement method and device and computer readable storage medium
Mustafa et al. Image enhancement technique on contrast variation: a comprehensive review
EP2026278A1 (en) Method of enhancing the contrast of an image.
US8442340B2 (en) Method of generating a multiscale contrast enhanced image
EP2232434A1 (en) Method of generating a multiscale contrast enhanced image.
WO2021094471A1 (en) Method and apparatus for contrast enhancement
Nayak et al. Super resolution image reconstruction using penalized-spline and phase congruency
Pham Kriging-weighted laplacian kernels for grayscale image sharpening
Bhandari et al. Gamma corrected reflectance for low contrast image enhancement using guided filter
Paranjape Fundamental enhancement techniques
Radhika et al. An adaptive optimum weighted mean filter and bilateral filter for noise removal in cardiac MRI images
US20230133074A1 (en) Method and Apparatus for Noise Reduction
Prakoso et al. Enhancement methods of brain MRI images: A Review
Cui et al. Attention-guided multi-scale feature fusion network for low-light image enhancement
CN108876734B (en) Image denoising method and device, electronic equipment and storage medium
Deng et al. Halo-free image enhancement through multi-scale detail sharpening and single-scale contrast stretching
EP1933273B1 (en) Generating a contrast enhanced image using multiscale analysis
CN109377451B (en) Method, terminal and storage medium for removing grid shadow of X-ray image grid

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant