CN111242879A - Image processing method, image processing apparatus, electronic device, and medium

Info

Publication number
CN111242879A
Authority
CN
China
Prior art keywords
histogram
edge
image
segments
processing
Legal status
Granted
Application number
CN202010056778.XA
Other languages
Chinese (zh)
Other versions
CN111242879B (en)
Inventor
林予松
赵国桦
满盼盼
刘彩薇
李龙飞
刘琦
Current Assignee
Zhengzhou University
Original Assignee
Zhengzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou University filed Critical Zhengzhou University
Priority to CN202010056778.XA
Publication of CN111242879A
Application granted
Publication of CN111242879B
Active legal status
Anticipated expiration legal status

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/40 Image enhancement or restoration using histogram techniques
    • G06T5/70 Denoising; Smoothing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Facsimile Image Signal Circuits (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

The present disclosure provides an image processing method, including: acquiring an original image, wherein the original image comprises an original histogram; processing the original image to obtain an edge image and a non-edge image; processing the edge image and the non-edge image respectively to obtain an edge histogram of the edge image and a non-edge histogram of the non-edge image; processing the edge histogram and the non-edge histogram to obtain a fused histogram; and processing the original histogram based on the fused histogram to obtain a processed histogram, so as to obtain a processed image according to the processed histogram, wherein the contrast of the processed image is higher than that of the original image. The present disclosure also provides an image processing apparatus, an electronic device, and a computer-readable storage medium.

Description

Image processing method, image processing apparatus, electronic device, and medium
Technical Field
The present disclosure relates to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
Medical imaging systems play a significant role in clinical work, and improving medical image quality is very important for helping doctors obtain more information about their patients. In practice, however, factors such as an operator's lack of professional expertise, non-standardized image-capturing equipment, and uneven lighting often degrade image quality significantly; for example, the image may be too dark or have low contrast.
Disclosure of Invention
One aspect of the present disclosure provides an image processing method, including: acquiring an original image, wherein the original image comprises an original histogram; processing the original image to obtain an edge image and a non-edge image; processing the edge image and the non-edge image respectively to obtain an edge histogram of the edge image and a non-edge histogram of the non-edge image; processing the edge histogram and the non-edge histogram to obtain a fused histogram; and processing the original histogram based on the fused histogram to obtain a processed histogram, so as to obtain a processed image according to the processed histogram, wherein the contrast of the processed image is higher than that of the original image.
Optionally, the processing the edge histogram and the non-edge histogram to obtain a fused histogram includes: respectively carrying out segmentation processing on the edge histogram and the non-edge histogram to obtain M segments of the edge histogram and N segments of the non-edge histogram, wherein M is an integer larger than 1, N is an integer larger than 1, respectively processing the M segments to obtain a processed edge histogram, respectively processing the N segments to obtain a processed non-edge histogram, and fusing the processed edge histogram and the processed non-edge histogram to obtain a fused histogram.
Optionally, the processing the M segments respectively to obtain a processed edge histogram includes: determining a segment shift distance of each of the M segments to obtain M segment shift distances, performing shift processing on the M segments based on the M segment shift distances, and sequentially determining each of the M segments as a first segment, where the first segment includes M gray values, each gray value of the M gray values has a corresponding number of pixels, M is an integer greater than or equal to 1, determining a gray value shift distance of each gray value of the M gray values to obtain M gray value shift distances, and performing shift processing on the M gray values based on the M gray value shift distances to obtain the processed edge histogram.
Optionally, the processing the N segments respectively to obtain a processed non-edge histogram includes: determining a segment shift distance of each of the N segments to obtain N segment shift distances, performing shift processing on the N segments based on the N segment shift distances, and sequentially determining each of the N segments as a second segment, where the second segment includes N gray values, each gray value of the N gray values has a corresponding number of pixels, N is an integer greater than or equal to 1, determining a gray value shift distance of each gray value of the N gray values to obtain N gray value shift distances, and performing shift processing on the N gray values based on the N gray value shift distances to obtain the processed non-edge histogram.
Optionally, the determining the segment shift distance of each of the M segments comprises: calculating the segment shift distance of each of the M segments according to the number of pixels in each of the M segments, the number M of segments of the edge histogram, and the total number of pixels of the edge histogram. The determining a gray value shift distance for each of the m gray values comprises: calculating the gray value shift distance of each of the m gray values according to the number of pixels of each of the m gray values, the number m of gray values of the first segment, and the total number of pixels of the first segment.
Optionally, the determining the segment shift distance of each of the N segments includes: calculating the segment shift distance of each of the N segments according to the number of pixels in each of the N segments, the number N of segments of the non-edge histogram, and the total number of pixels of the non-edge histogram. The determining a gray value shift distance for each of the n gray values comprises: calculating the gray value shift distance of each of the n gray values according to the number of pixels of each of the n gray values, the number n of gray values of the second segment, and the total number of pixels of the second segment.
Optionally, the performing the segmentation processing on the edge histogram and the non-edge histogram to obtain M segments of the edge histogram and N segments of the non-edge histogram includes: calculating a first sparse value of the edge histogram and a second sparse value of the non-edge histogram, wherein the first sparse value represents the degree of deviation of the pixel counts in the edge histogram and the second sparse value represents the degree of deviation of the pixel counts in the non-edge histogram; determining p gray values in the edge histogram whose pixel counts are smaller than the first sparse value, wherein p is an integer greater than or equal to 1; determining q gray values in the non-edge histogram whose pixel counts are smaller than the second sparse value, wherein q is an integer greater than or equal to 1; segmenting the edge histogram using the p gray values as breakpoints to obtain the M segments; and segmenting the non-edge histogram using the q gray values as breakpoints to obtain the N segments.
Optionally, the processing the original histogram based on the fused histogram to obtain a processed histogram includes: determining a cumulative distribution function of the fused histogram, determining a cumulative distribution function of the original histogram, calculating the cumulative distribution function of the fused histogram and the cumulative distribution function of the original histogram to obtain a gray value change relation, wherein the gray value change relation comprises enhanced gray values corresponding to all gray values in the original histogram, and moving all gray values in the original histogram to corresponding enhanced gray values based on the gray value change relation to obtain a processed histogram.
Optionally, the processing the original image to obtain an edge image and a non-edge image includes: calculating the gradient value of each pixel in the original image, determining the pixel with the gradient value larger than a preset gradient value as the pixel in the edge image, and determining the pixel with the gradient value smaller than or equal to the preset gradient value as the pixel in the non-edge image.
Optionally, the method further includes: before processing the original image to obtain an edge image and a non-edge image, filtering the original image by using a Gaussian filtering mode so as to remove at least part of noise information in the original image.
Another aspect of the present disclosure provides an image processing apparatus including: an acquisition module, a first processing module, a second processing module, a third processing module, and a fourth processing module. The acquisition module is configured to acquire an original image, wherein the original image comprises an original histogram. The first processing module is configured to process the original image to obtain an edge image and a non-edge image. The second processing module is configured to process the edge image and the non-edge image respectively to obtain an edge histogram of the edge image and a non-edge histogram of the non-edge image. The third processing module is configured to process the edge histogram and the non-edge histogram to obtain a fused histogram. The fourth processing module is configured to process the original histogram based on the fused histogram to obtain a processed histogram, so as to obtain a processed image according to the processed histogram, wherein the contrast of the processed image is higher than that of the original image.
Optionally, the processing the edge histogram and the non-edge histogram to obtain a fused histogram includes: respectively carrying out segmentation processing on the edge histogram and the non-edge histogram to obtain M segments of the edge histogram and N segments of the non-edge histogram, wherein M is an integer larger than 1, N is an integer larger than 1, respectively processing the M segments to obtain a processed edge histogram, respectively processing the N segments to obtain a processed non-edge histogram, and fusing the processed edge histogram and the processed non-edge histogram to obtain a fused histogram.
Optionally, the processing the M segments respectively to obtain a processed edge histogram includes: determining a segment shift distance of each of the M segments to obtain M segment shift distances, performing shift processing on the M segments based on the M segment shift distances, and sequentially determining each of the M segments as a first segment, where the first segment includes M gray values, each gray value of the M gray values has a corresponding number of pixels, M is an integer greater than or equal to 1, determining a gray value shift distance of each gray value of the M gray values to obtain M gray value shift distances, and performing shift processing on the M gray values based on the M gray value shift distances to obtain the processed edge histogram.
Optionally, the processing the N segments respectively to obtain a processed non-edge histogram includes: determining a segment shift distance of each of the N segments to obtain N segment shift distances, performing shift processing on the N segments based on the N segment shift distances, and sequentially determining each of the N segments as a second segment, where the second segment includes N gray values, each gray value of the N gray values has a corresponding number of pixels, N is an integer greater than or equal to 1, determining a gray value shift distance of each gray value of the N gray values to obtain N gray value shift distances, and performing shift processing on the N gray values based on the N gray value shift distances to obtain the processed non-edge histogram.
Optionally, the determining the segment shift distance of each of the M segments comprises: calculating the segment shift distance of each of the M segments according to the number of pixels in each of the M segments, the number M of segments of the edge histogram, and the total number of pixels of the edge histogram. The determining a gray value shift distance for each of the m gray values comprises: calculating the gray value shift distance of each of the m gray values according to the number of pixels of each of the m gray values, the number m of gray values of the first segment, and the total number of pixels of the first segment.
Optionally, the determining the segment shift distance of each of the N segments includes: calculating the segment shift distance of each of the N segments according to the number of pixels in each of the N segments, the number N of segments of the non-edge histogram, and the total number of pixels of the non-edge histogram. The determining a gray value shift distance for each of the n gray values comprises: calculating the gray value shift distance of each of the n gray values according to the number of pixels of each of the n gray values, the number n of gray values of the second segment, and the total number of pixels of the second segment.
Optionally, the performing the segmentation processing on the edge histogram and the non-edge histogram to obtain M segments of the edge histogram and N segments of the non-edge histogram includes: calculating a first sparse value of the edge histogram and a second sparse value of the non-edge histogram, wherein the first sparse value represents the degree of deviation of the pixel counts in the edge histogram and the second sparse value represents the degree of deviation of the pixel counts in the non-edge histogram; determining p gray values in the edge histogram whose pixel counts are smaller than the first sparse value, wherein p is an integer greater than or equal to 1; determining q gray values in the non-edge histogram whose pixel counts are smaller than the second sparse value, wherein q is an integer greater than or equal to 1; segmenting the edge histogram using the p gray values as breakpoints to obtain the M segments; and segmenting the non-edge histogram using the q gray values as breakpoints to obtain the N segments.
Optionally, the processing the original histogram based on the fused histogram to obtain a processed histogram includes: determining a cumulative distribution function of the fused histogram, determining a cumulative distribution function of the original histogram, calculating the cumulative distribution function of the fused histogram and the cumulative distribution function of the original histogram to obtain a gray value change relation, wherein the gray value change relation comprises enhanced gray values corresponding to all gray values in the original histogram, and moving all gray values in the original histogram to corresponding enhanced gray values based on the gray value change relation to obtain a processed histogram.
Optionally, the processing the original image to obtain an edge image and a non-edge image includes: calculating the gradient value of each pixel in the original image, determining the pixel with the gradient value larger than a preset gradient value as the pixel in the edge image, and determining the pixel with the gradient value smaller than or equal to the preset gradient value as the pixel in the non-edge image.
Optionally, the apparatus further comprises: and the fifth processing module is used for filtering the original image by using a Gaussian filtering mode before processing the original image to obtain an edge image and a non-edge image so as to remove at least part of noise information in the original image.
Another aspect of the present disclosure provides an electronic device including: one or more processors; memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as above.
Another aspect of the disclosure provides a non-transitory readable storage medium storing computer-executable instructions for implementing the method as above when executed.
Another aspect of the present disclosure provides a computer-readable storage medium storing computer-executable instructions for implementing the method as above when executed.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure;
figs. 2 to 3 schematically illustrate segmentation processing of an edge histogram according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a schematic diagram of a shift process on M segments according to an embodiment of the disclosure;
FIG. 5 schematically shows a schematic diagram of a shift process of the gray values in each of the M segments according to an embodiment of the disclosure;
fig. 6 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure; and
FIG. 7 schematically shows a block diagram of a computer system for implementing image processing according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.).
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable control apparatus to produce a machine, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable medium having instructions stored thereon for use by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, the computer readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer readable medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
An embodiment of the present disclosure provides an image processing method, including: acquiring an original image, wherein the original image comprises an original histogram, and processing the original image to obtain an edge image and a non-edge image. Then, processing the edge image and the non-edge image respectively to obtain an edge histogram of the edge image and a non-edge histogram of the non-edge image, and processing the edge histogram and the non-edge histogram to obtain a fused histogram. Thereafter, the original histogram is processed based on the fused histogram to obtain a processed histogram, such that a processed image is obtained from the processed histogram, wherein the contrast of the processed image is higher than the contrast of the original image.
Fig. 1 schematically shows a flow chart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the image processing method includes operations S110 to S150, for example.
In operation S110, an original image is acquired, wherein the original image includes an original histogram.
According to the embodiment of the present disclosure, the original image may be, for example, a medical image, and when the medical image is displayed, enhancement processing is often required to be performed on the medical image to increase the difference between different tissues and organs in the medical image, so as to improve the visual effect of the medical image.
According to the embodiment of the disclosure, before processing the original image to obtain the edge image and the non-edge image, the original image may be filtered by using a gaussian filtering method, so as to remove at least part of noise information in the original image.
For example, the original image may be smoothed with a two-dimensional Gaussian function to suppress noise interference. Noise interference means that, in practical applications, a medical image contains noise due to random gray-level fluctuation or the influence of non-uniform illumination, which produces sparse peaks and valleys in the image histogram. The two-dimensional Gaussian function is, for example, as shown in formula (1).
$$G(x, y) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right) \qquad (1)$$
where x and y in formula (1) represent the coordinate position of a pixel in the original image, and σ represents the standard deviation. In one embodiment, the size of the Gaussian convolution kernel may be set to 3x3 with a standard deviation σ of 0.85. It is understood that the size of the Gaussian convolution kernel and the value of the standard deviation can be set by those skilled in the art according to the actual application.
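To make this preprocessing step concrete, the following is a minimal sketch of building and applying the 3x3 Gaussian kernel of formula (1) with σ = 0.85; the function names and the use of scipy's convolve are illustrative assumptions, not part of the disclosure:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=3, sigma=0.85):
    """Normalized 2-D Gaussian kernel per formula (1)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return g / g.sum()  # normalize so overall brightness is preserved

def denoise(image):
    """Gaussian-smooth the original image to suppress sparse histogram peaks and valleys."""
    return convolve(image.astype(np.float64), gaussian_kernel(), mode="reflect")
```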
In operation S120, the original image is processed to obtain an edge image and a non-edge image, and the specific process is described as follows.
For example, a gradient value is first calculated for each pixel in the original image. Let the image function of the original image be f(x, y); the gradient vector of the original image is then defined as in formula (2).
$$\nabla f(x, y) = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \partial f/\partial x \\ \partial f/\partial y \end{bmatrix} \qquad (2)$$
where $G_x$ represents the transverse (horizontal) gray-difference approximation of each pixel in the original image and $G_y$ represents the longitudinal (vertical) gray-difference approximation. As shown in formula (3), $G_x$ and $G_y$ can be obtained by convolving the original image with a pair of 3x3 filters (for example, the Sobel pair shown below), where $M_g$ represents the original image.
$$G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix} \ast M_g, \qquad G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix} \ast M_g \qquad (3)$$
Then, for each pixel in the original image, the transverse gray-difference approximation $G_x$ and the longitudinal gray-difference approximation $G_y$ may be combined to calculate the gradient value G of that pixel; the calculation of G is shown in formula (4).
$$G = \sqrt{G_x^2 + G_y^2} \qquad (4)$$
Finally, pixels having gradient values greater than a preset gradient value may be determined as pixels of the edge image, and pixels having gradient values less than or equal to the preset gradient value may be determined as pixels of the non-edge image. In one embodiment, the preset gradient value may be, for example, 1. If the gradient value G of a pixel in the original image is greater than 1, the pixel is determined to be an edge point; if the gradient value G is less than or equal to 1, the pixel is determined to be a non-edge point, thereby obtaining the edge image and the non-edge image. It is understood that those skilled in the art can set the preset gradient value according to the actual application.
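As a sketch of operation S120, the gradient of formulas (2) to (4) can be computed and thresholded as follows; the Sobel pair for the 3x3 filters and the boundary handling are assumptions, and the preset gradient value of 1 follows the example above:

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed Sobel pair for the 3x3 filters of formula (3)
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)  # transverse
SOBEL_Y = SOBEL_X.T                                                         # longitudinal

def split_edge_non_edge(image, preset_gradient=1.0):
    """Return boolean masks of the edge pixels and the non-edge pixels."""
    img = image.astype(np.float64)
    gx = convolve(img, SOBEL_X, mode="reflect")
    gy = convolve(img, SOBEL_Y, mode="reflect")
    grad = np.sqrt(gx**2 + gy**2)  # formula (4)
    edge_mask = grad > preset_gradient
    return edge_mask, ~edge_mask
```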
In operation S130, the edge image and the non-edge image are processed respectively to obtain an edge histogram of the edge image and a non-edge histogram of the non-edge image. For example, after obtaining the edge image and the non-edge image, the edge image and the non-edge image may be processed to obtain a corresponding edge histogram and a corresponding non-edge histogram, respectively.
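For example, with the masks from the sketch above, each histogram can be counted over its own pixel set only; this helper (and the assumption of 8-bit gray levels) is illustrative:

```python
import numpy as np

def masked_histogram(image, mask, levels=256):
    """256-bin gray-level histogram counted over the masked pixels only."""
    return np.bincount(image[mask].astype(np.int64), minlength=levels)

# edge_hist = masked_histogram(image, edge_mask)
# non_edge_hist = masked_histogram(image, ~edge_mask)
```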
In operation S140, the edge histogram and the non-edge histogram are processed to obtain a fused histogram, which is described as follows.
For example, the edge histogram and the non-edge histogram are respectively segmented to obtain M segments of the edge histogram and N segments of the non-edge histogram, where M is an integer greater than 1 and N is an integer greater than 1.
Fig. 2 to 3 schematically show schematic diagrams of segmentation processing of an edge histogram according to an embodiment of the present disclosure.
The following describes M segments of the edge histogram obtained by performing segmentation processing on the edge histogram with reference to fig. 2 to 3.
First, a first sparse value of the edge histogram is calculated; the first sparse value represents the degree to which the pixel counts in the edge histogram deviate from their mean. The first sparse value E may, for example, be calculated as the mean absolute deviation of the pixel counts:

$$E = \frac{1}{256}\sum_{i=0}^{255}\left|H(i) - \bar{H}\right| \qquad (5)$$

where i represents a gray value of the edge histogram (i ranges from 0 to 255, for example), H(i) represents the number of pixels in the edge histogram having gray value i, and $\bar{H}$ represents the mean pixel count over the gray values of the edge histogram:

$$\bar{H} = \frac{1}{256}\sum_{i=0}^{255}H(i)$$

For example, if i ranges from 0 to 255 and the pixel counts corresponding to i = 0, 1, 2, 3, ..., 255 are H(0), H(1), H(2), H(3), ..., H(255), respectively, then $\bar{H}$ is the arithmetic mean of H(0) through H(255).
Referring to fig. 2, p gray values whose pixel counts are smaller than the first sparse value are determined in the edge histogram, where p is an integer greater than or equal to 1. The p gray values include, for example, the gray value i = 5, the gray value i = 13, the gray value i = 20, and so on. Then, using the p gray values as breakpoints, the edge histogram is segmented to obtain M segments.
Referring to fig. 3, in one embodiment, the pixel counts corresponding to the p gray values may be set to 0. For example, the pixel counts corresponding to the gray values i = 5, i = 13, and i = 20 are H(5), H(13), and H(20); setting H(5) = 0, H(13) = 0, and H(20) = 0 and using the gray values i = 5, i = 13, and i = 20 as breakpoints, the edge histogram is divided into M segments. The M segments include, for example, the 1st segment, the 2nd segment, the 3rd segment, ..., and the Mth segment.
Similarly, the specific process of performing segmentation processing on the non-edge histogram to obtain N segments of the non-edge histogram is similar to the processing process of the edge histogram, for example, a second sparse value of the non-edge histogram is calculated, and the second sparse value is used for representing the deviation degree of the number of each pixel in the non-edge histogram. And then determining q gray values corresponding to the number of pixels smaller than the second sparse value in the non-edge histogram, wherein q is an integer greater than or equal to 1, and performing segmentation processing on the non-edge histogram to obtain N segments by taking the q gray values as breakpoints. The detailed process is not described herein.
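A sketch of the segmentation step, applicable to either histogram, is given below; the concrete form of the sparse value follows the reconstruction of formula (5) above and is therefore an assumption:

```python
import numpy as np

def segment_histogram(hist):
    """Zero the sparse bins and split the histogram at those breakpoints."""
    hist = hist.astype(np.float64).copy()
    sparse_value = np.abs(hist - hist.mean()).mean()   # formula (5), as reconstructed
    breakpoints = np.flatnonzero(hist < sparse_value)  # the p (or q) sparse gray values
    hist[breakpoints] = 0                              # see fig. 3
    segments, start = [], 0
    for b in breakpoints:
        if b > start:
            segments.append((start, int(b)))           # a segment covers [start, b)
        start = int(b) + 1
    if start < hist.size:
        segments.append((start, hist.size))
    return hist, segments
```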
After the M segments of the edge histogram are obtained, the M segments are processed respectively to obtain the processed edge histogram; the specific process is described below with reference to figs. 4 to 5. In addition, the N segments may be processed respectively to obtain the processed non-edge histogram; the specific process is the same as or similar to the processing of the M segments and is not repeated here. Finally, the processed edge histogram and the processed non-edge histogram are fused to obtain the fused histogram.
A specific process of processing the M segments to obtain a processed edge histogram will be described below with reference to fig. 4 to 5.
Fig. 4 schematically shows a schematic diagram of a shift process on M segments according to an embodiment of the present disclosure. Fig. 5 schematically shows a schematic diagram of the shift processing of the gray values in each of the M segments according to an embodiment of the present disclosure.
As shown in fig. 4, the process of processing M segments includes, for example: firstly, determining the segment shift distance of each segment in the M segments to obtain M segment shift distances. For example, the segment shift distance of each of the M segments is calculated according to the number of pixels in each of the M segments, the number M of the segments of the edge histogram, and the total number of pixels of the edge histogram. Specifically, the segment shift distance of each of the M segments is, for example, as shown in equation (6).
$$D(j) = M \cdot \frac{x(j)}{\sum_{t=1}^{M}x(t)} \qquad (6)$$

where j indexes the segments, M represents the total number of segments of the edge histogram, D(j) represents the segment shift distance of the jth segment, x(j) represents the number of pixels included in the jth segment, and $\sum_{t=1}^{M}x(t)$ represents the total number of pixels of the edge histogram (i.e., the total number of pixels over the M segments).
After the segment shift distance of each of the M segments is obtained, the M segments are respectively shifted based on the M segment shift distances. That is, each of the M segments is remapped so as to shift the gray values in that segment from lower gray values of the image toward higher gray values. It will be appreciated that the greater the number of pixels contained within a segment, the greater the remapping (shift) distance of that segment.
As shown in fig. 4, for example, it is first determined that the segment shift distance of the 1st segment of the M segments is D(1), the segment shift distance of the 2nd segment is D(2), the segment shift distance of the 3rd segment is D(3), and so on. Then, the 1st segment, the 2nd segment, the 3rd segment, and so on are shifted toward higher gray values. Fig. 4 only schematically shows the 1st segment being translated by its segment shift distance D(1). It is understood that the shift processing of the 2nd segment, the 3rd segment, ..., and the Mth segment is the same as or similar to that of the 1st segment, and is not repeated here.
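The segment-level shift can be sketched as follows, using formula (6) as reconstructed above; the rounding and the clipping at the top gray level are additional assumptions:

```python
import numpy as np

def shift_segments(hist, segments):
    """Translate each segment toward higher gray levels by its distance D(j)."""
    M = len(segments)
    total = sum(float(hist[a:b].sum()) for a, b in segments)
    out = np.zeros_like(hist)
    for a, b in segments:
        d = int(round(M * float(hist[a:b].sum()) / total))  # formula (6), as reconstructed
        lo = min(a + d, hist.size - 1)                      # clip at the top gray level
        hi = min(b + d, hist.size)
        out[lo:hi] += hist[a:b][: hi - lo]
    return out
```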
As shown in fig. 5, after each of the M segments has been shifted by its corresponding segment shift distance, each of the M segments is processed further. For example, each of the M segments is sequentially determined as a first segment, where the first segment includes m gray values, each of the m gray values has a corresponding number of pixels, and m is an integer greater than or equal to 1.
According to the embodiment of the present disclosure, for example, a gray value shift distance of each of the m gray values in the first segment may be determined, resulting in m gray value shift distances. For example, the gray value shift distance of each of the m gray values is calculated according to the number of pixels of each of the m gray values, the number m of gray values of the first segment, and the total number of pixels of the first segment. Specifically, the gray value shift distance of each of the m gray values is, for example, as shown in formula (7):

$$L(k) = m \cdot \frac{h(k)}{x(j)} \qquad (7)$$

where m denotes the width of the first segment (i.e., the number of gray values of the first segment), k = 1, 2, ..., m denotes the kth gray value of the m gray values, L(k) denotes the transform intensity of the kth gray value (i.e., the gray value shift distance of the kth gray value), h(k) denotes the number of pixels corresponding to the kth gray value, and x(j) denotes the number of pixels included in the jth segment (see formula (6)); here x(j) is the total number of pixels of the first segment.
As shown in fig. 5, for example, the 1st segment of the M segments is determined as the first segment, and the 1st segment includes, for example, the 1st gray value, the 2nd gray value, ..., the kth gray value, ..., and the mth gray value. The gray value shift distances of the 1st, 2nd, ..., kth, ..., and mth gray values are L(1), L(2), ..., L(k), ..., and L(m), respectively; each of the m gray values is then shifted toward higher gray values by its corresponding gray value shift distance.
Similarly, the 2nd segment, the 3rd segment, ..., and the Mth segment among the M segments may be sequentially determined as the first segment; the process of shifting each of the gray values in the first segment is the same as or similar to the process of shifting the m gray values in the 1st segment, and is not repeated here. After the gray values included in each of the M segments have been shift-processed, the processed edge histogram is obtained.
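The intra-segment refinement can be sketched analogously, using formula (7) as reconstructed above (again an assumption):

```python
import numpy as np

def shift_within_segments(hist, segments):
    """Shift every gray level inside each segment by its own distance L(k)."""
    out = np.zeros_like(hist)
    for a, b in segments:
        seg = hist[a:b]
        m, total = b - a, float(seg.sum())
        for k in range(m):
            lk = int(round(m * seg[k] / total)) if total else 0  # formula (7), as reconstructed
            out[min(a + k + lk, hist.size - 1)] += seg[k]        # move toward higher grays
    return out
```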
According to the embodiment of the present disclosure, the edge histogram is processed by remapping (shifting) its segments, each segment being treated as a whole. This enhances the contrast between different segments, thereby enhancing the contrast between different parts of the original image and improving the overall brightness of the original image. Secondly, in order to obtain a clearer image, the embodiment of the present disclosure further applies a finer transformation to the gray values inside each segment to enhance the contrast of the image.
According to an embodiment of the disclosure, the shift processing procedure for the non-edge histogram may include processing the N segments, respectively, to obtain a processed non-edge histogram. The specific process is the same as or similar to the process of performing shift processing on the M segments of the edge histogram.
For example, the segment shift distance of each of the N segments is determined, resulting in N segment shift distances. That is, the segment shift distance of each of the N segments is calculated according to the number of pixels in each of the N segments, the number N of segments of the non-edge histogram, and the total number of pixels of the non-edge histogram. Then, the N segments are shifted based on the N segment shift distances, respectively.
Then, each of the N segments is sequentially determined as a second segment, where the second segment includes n gray values, each of the n gray values has a corresponding number of pixels, and n is an integer greater than or equal to 1. The gray value shift distance of each of the n gray values is determined to obtain n gray value shift distances.
For example, the gray value shift distance of each of the n gray values is calculated according to the number of pixels of each of the n gray values, the number n of gray values of the second segment, and the total number of pixels of the second segment.
Finally, the n gray values are shifted based on the n gray value shift distances, respectively, to obtain the processed non-edge histogram. For the specific process, reference may be made to the shift processing of the M segments of the edge histogram, which is not repeated here.
After the edge histogram has been shift-processed to obtain the processed edge histogram and the non-edge histogram has been shift-processed to obtain the processed non-edge histogram, the processed edge histogram and the processed non-edge histogram are fused to obtain the fused histogram. Operation S150 may then proceed as follows.
In operation S150, the original histogram is processed based on the fused histogram to obtain a processed histogram, so that a processed image is obtained according to the processed histogram, such that the contrast of the processed image is higher than that of the original image.
For example, the cumulative distribution function of the fused histogram is determined, and the cumulative distribution function of the original histogram is determined. Then, histogram specification mapping is performed based on the cumulative distribution function of the fused histogram and the cumulative distribution function of the original histogram to obtain a gray value change relation. The gray value change relation includes, for example, the enhanced gray value corresponding to each gray value in the original histogram. Finally, based on the gray value change relation, each gray value in the original histogram is moved to its corresponding enhanced gray value to obtain the processed histogram.
For example, the calculation process of the histogram specification mapping is as shown in formula (8) and formula (9):

$$C_{input}(i) = C_{desired}(s) \qquad (8)$$

$$s = C_{desired}^{-1}\left(C_{input}(i)\right) \qquad (9)$$
where i denotes a gray value of the original histogram and $C_{input}(i)$ represents the cumulative distribution function of the original histogram, while s represents a gray value of the fused histogram and $C_{desired}(s)$ represents the cumulative distribution function of the fused histogram. Here s may also represent, for example, an enhanced gray value, and the embodiment of the present disclosure may move each gray value in the original histogram to its corresponding enhanced gray value to obtain the processed histogram.
For example, in the original histogram, the numbers of pixels corresponding to the gray values i = 1, 2, and 3 are H(1), H(2), and H(3), and the calculated enhanced gray values s are, for example, 5, 7, and 11, respectively. The original histogram is then processed to obtain the final processed histogram, specifically, for example: the pixels at gray value i = 1 in the original histogram are moved to gray value 5, so that the number of pixels corresponding to gray value 5 in the processed histogram is H(1); the pixels at gray value i = 2 are moved to gray value 7, so that the number of pixels corresponding to gray value 7 in the processed histogram is H(2); and the pixels at gray value i = 3 are moved to gray value 11, so that the number of pixels corresponding to gray value 11 in the processed histogram is H(3). A processed image is then obtained according to the processed histogram, so that the contrast of the processed image is higher than that of the original image, achieving the image enhancement effect.
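Operation S150 can be sketched as discrete histogram specification per formulas (8) and (9); realizing $C_{desired}^{-1}$ with numpy.searchsorted (the smallest s whose cumulative mass reaches $C_{input}(i)$) is an implementation assumption:

```python
import numpy as np

def specify_histogram(original_hist, fused_hist):
    """Map each gray value i to s = C_desired^{-1}(C_input(i)), formulas (8)-(9)."""
    c_input = np.cumsum(original_hist) / float(original_hist.sum())
    c_desired = np.cumsum(fused_hist) / float(fused_hist.sum())
    s = np.searchsorted(c_desired, c_input)      # discrete inverse of C_desired
    return np.clip(s, 0, original_hist.size - 1)

# Applying the mapping pixel-wise yields the enhanced image:
# processed_image = specify_histogram(orig_hist, fused_hist)[image]
```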
In the embodiments of the present disclosure, the original image is split to obtain an edge image and a non-edge image, and segmentation processing and shift processing are performed on the edge histogram and the non-edge histogram respectively to obtain the processed edge histogram and the processed non-edge histogram. Then, the processed edge histogram and the processed non-edge histogram are fused to obtain a fused histogram, and histogram specification mapping is performed based on the fused histogram and the original histogram to obtain the enhanced original image. It can be understood that processing a medical image with the image processing method of the embodiments of the disclosure improves the exposure of the medical image, renders the light-dark contrast more finely, and enriches the details of the darker parts of the image, without producing over-enhancement or additional noise; the image quality is thus greatly improved, making it easier for medical workers to observe and assess a patient's condition from the medical image.
Another aspect of the present disclosure provides an electronic device including: one or more processors; and a memory for storing one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method described above with reference to figs. 1 to 5.
Fig. 6 schematically shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 6, the image processing apparatus 600 includes an acquisition module 610, a first processing module 620, a second processing module 630, a third processing module 640, and a fourth processing module 650.
The acquisition module 610 may be configured to acquire an original image, wherein the original image comprises an original histogram. According to the embodiment of the present disclosure, the obtaining module 610 may, for example, perform the operation S110 described above with reference to fig. 1, which is not described herein again.
The first processing module 620 may be configured to process the original image to obtain an edge image and a non-edge image. According to the embodiment of the present disclosure, the first processing module 620 may, for example, perform operation S120 described above with reference to fig. 1, which is not described herein again.
The second processing module 630 may be configured to process the edge image and the non-edge image respectively to obtain an edge histogram of the edge image and a non-edge histogram of the non-edge image. According to the embodiment of the present disclosure, the second processing module 630 may, for example, perform operation S130 described above with reference to fig. 1, which is not described herein again.
The third processing module 640 may be configured to process the edge histogram and the non-edge histogram to obtain a fused histogram. According to the embodiment of the present disclosure, the third processing module 640 may perform, for example, the operation S140 described above with reference to fig. 1, which is not described herein again.
The fourth processing module 650 may be configured to process the original histogram based on the fused histogram to obtain a processed histogram, so as to obtain a processed image according to the processed histogram, wherein the contrast of the processed image is higher than that of the original image. According to the embodiment of the present disclosure, the fourth processing module 650 may, for example, perform operation S150 described above with reference to fig. 1, which is not described herein again.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any number of the obtaining module 610, the first processing module 620, the second processing module 630, the third processing module 640, and the fourth processing module 650 may be combined and implemented in one module, or any one of the modules may be split into a plurality of modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the disclosure, at least one of the obtaining module 610, the first processing module 620, the second processing module 630, the third processing module 640, and the fourth processing module 650 may be at least partially implemented as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementations of software, hardware, and firmware, or by a suitable combination of any several of them. Alternatively, at least one of the obtaining module 610, the first processing module 620, the second processing module 630, the third processing module 640 and the fourth processing module 650 may be at least partially implemented as a computer program module, which when executed, may perform a corresponding function.
FIG. 7 schematically shows a block diagram of a computer system for implementing image processing according to an embodiment of the present disclosure. The computer system illustrated in FIG. 7 is only one example and should not impose any limitations on the scope of use or functionality of embodiments of the disclosure.
As shown in fig. 7, a computer system 700 implementing image processing includes a processor 701, a computer-readable storage medium 702. The system 700 may perform a method according to an embodiment of the present disclosure.
In particular, the processor 701 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 701 may also include on-board memory for caching purposes. The processor 701 may be a single processing unit or a plurality of processing units for performing the different actions of the method flows according to embodiments of the present disclosure.
Computer-readable storage medium 702 may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
The computer-readable storage medium 702 may comprise a computer program 703, which computer program 703 may comprise code/computer-executable instructions that, when executed by the processor 701, cause the processor 701 to perform a method according to an embodiment of the disclosure, or any variant thereof.
The computer program 703 may be configured with, for example, computer program code comprising computer program modules. For example, in an example embodiment, the code in the computer program 703 may include one or more program modules, including, for example, module 703A, module 703B, and so on. It should be noted that the division and number of the modules are not fixed, and those skilled in the art may use suitable program modules or program module combinations according to the actual situation, so that when these program modules are executed by the processor 701, the processor 701 can execute the method according to the embodiment of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the obtaining module 610, the first processing module 620, the second processing module 630, the third processing module 640, and the fourth processing module 650 may be implemented as a computer program module described with reference to fig. 7, which, when executed by the processor 701, may implement the respective operations described above.
The present disclosure also provides a computer-readable medium, which may be embodied in the apparatus/device/system described in the above embodiments; or may exist separately and not be assembled into the device/apparatus/system. The above-mentioned computer-readable medium carries one or more programs which, when executed, implement the above-mentioned image processing method.
According to embodiments of the present disclosure, a computer readable medium may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer-readable signal medium may include a propagated data signal with computer-readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wired, optical fiber cable, radio frequency signals, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments and/or claims of the present disclosure may be combined and/or sub-combined in various ways, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, such combinations and/or sub-combinations may be made without departing from the spirit and teaching of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined by the appended claims and their equivalents.

Claims (13)

1. An image processing method comprising:
acquiring an original image, wherein the original image comprises an original histogram;
processing the original image to obtain an edge image and a non-edge image;
processing the edge image and the non-edge image respectively to obtain an edge histogram of the edge image and a non-edge histogram of the non-edge image;
processing the edge histogram and the non-edge histogram to obtain a fused histogram; and
processing the original histogram based on the fused histogram to obtain a processed histogram, so as to obtain a processed image according to the processed histogram, wherein the contrast of the processed image is higher than that of the original image.
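By way of illustration only, and not as part of the claims or the patented code, a minimal Python sketch of the claim 1 pipeline might read as follows, assuming a grayscale uint8 image; `gradient_mask`, `fuse_histograms`, and `match_to_histogram` are hypothetical helpers sketched after claims 9, 2, and 8 below:

```python
import numpy as np
import cv2  # OpenCV is an assumption; any blur/gradient implementation would do

def enhance(original: np.ndarray) -> np.ndarray:
    """Toy end-to-end sketch of claim 1 for a grayscale uint8 image."""
    smoothed = cv2.GaussianBlur(original, (5, 5), 1.0)      # claim 10 (parameters assumed)
    edge_mask = gradient_mask(smoothed, thresh=30.0)        # claim 9 (threshold assumed)
    edge_hist = np.bincount(smoothed[edge_mask], minlength=256)       # edge histogram
    non_edge_hist = np.bincount(smoothed[~edge_mask], minlength=256)  # non-edge histogram
    fused_hist = fuse_histograms(edge_hist, non_edge_hist)  # claims 2-7
    return match_to_histogram(original, fused_hist)         # claim 8
```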
2. The method of claim 1, wherein said processing said edge histogram and said non-edge histogram to obtain a fused histogram comprises:
segmenting the edge histogram and the non-edge histogram respectively to obtain M segments of the edge histogram and N segments of the non-edge histogram, wherein M and N are integers greater than 1;
processing the M segments respectively to obtain processed edge histograms;
processing the N segments respectively to obtain processed non-edge histograms; and
fusing the processed edge histogram and the processed non-edge histogram to obtain the fused histogram.
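A hedged sketch of this claim: since the claims do not fix the fusion rule, elementwise summation of the two processed histograms is assumed here; `segment` and `process_segments` are hypothetical helpers sketched after claims 7 and 3.

```python
import numpy as np

def fuse_histograms(edge_hist: np.ndarray, non_edge_hist: np.ndarray) -> np.ndarray:
    """Claim 2 sketch: segment each histogram, process its segments, then fuse.
    Fusion is assumed to be elementwise summation (not fixed by the claims)."""
    edge_processed = process_segments(edge_hist, segment(edge_hist))
    non_edge_processed = process_segments(non_edge_hist, segment(non_edge_hist))
    return edge_processed + non_edge_processed
```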
3. The method of claim 2, wherein said processing the M segments separately to obtain processed edge histograms comprises:
determining the segment shift distance of each of the M segments to obtain M segment shift distances;
shifting the M segments respectively based on the M segment shift distances;
sequentially determining each of the M segments as a first segment, wherein the first segment comprises M gray values, each gray value of the M gray values has a corresponding pixel number, and M is an integer greater than or equal to 1;
determining the gray value shift distance of each gray value in the m gray values to obtain m gray value shift distances; and
performing shift processing on the m gray values respectively based on the m gray value shift distances to obtain the processed edge histogram.
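One way the segment-then-gray-value shifting of this claim could be realized is sketched below; the shift-distance helpers are the hypothetical proportional rule sketched after claim 6, and overlapping target positions simply accumulate.

```python
import numpy as np

def process_segments(hist: np.ndarray, breakpoints: list[int]) -> np.ndarray:
    """Claims 3/4 sketch: shift whole segments, then gray values within each."""
    out = np.zeros_like(hist)
    segments = np.split(np.arange(hist.size), breakpoints)
    seg_shifts = segment_shift_distances(hist, segments)
    for seg, seg_shift in zip(segments, seg_shifts):
        gv_shifts = gray_value_shift_distances(hist, seg)
        for gray, gv_shift in zip(seg, gv_shifts):
            # new position = old gray value + segment shift + within-segment shift
            target = int(np.clip(gray + seg_shift + gv_shift, 0, hist.size - 1))
            out[target] += hist[gray]
    return out
```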
4. The method of claim 2 or 3, wherein the processing the N segments to obtain processed non-edge histograms respectively comprises:
determining the segment shift distance of each segment in the N segments to obtain N segment shift distances;
shifting the N segments respectively based on the N segment shift distances;
sequentially determining each of the N segments as a second segment, wherein the second segment comprises N gray values, each gray value of the N gray values has a corresponding pixel number, and N is an integer greater than or equal to 1;
determining a gray value shift distance of each gray value in the n gray values to obtain n gray value shift distances; and
performing shift processing on the n gray values respectively based on the n gray value shift distances to obtain the processed non-edge histogram.
5. The method of claim 3, wherein:
the determining a segment shift distance for each of the M segments comprises: calculating the segment shift distance of each segment in the M segments according to the number of pixels in each segment in the M segments, the number M of the segments of the edge histogram and the total number of pixels of the edge histogram;
the determining a gray value shift distance for each of the m gray values comprises: calculating the gray value shift distance of each gray value in the m gray values according to the number of pixels of each gray value in the m gray values, the number m of gray values in the first segment, and the total number of pixels of the first segment.
6. The method of claim 4, wherein:
the determining a segment shift distance for each of the N segments comprises: calculating the segment shift distance of each segment in the N segments according to the number of pixels in each segment in the N segments, the number N of segments of the non-edge histogram, and the total number of pixels of the non-edge histogram;
the determining a gray value shift distance for each of the n gray values comprises: calculating the gray value shift distance of each gray value in the n gray values according to the number of pixels of each gray value in the n gray values, the number n of gray values in the second segment, and the total number of pixels of the second segment.
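Claims 5 and 6 name the quantities the shift distances depend on without fixing a formula, so the proportional rule below is only one plausible assumption: each segment (or gray value) is moved toward a position determined by the share of pixels preceding it.

```python
import numpy as np

def segment_shift_distances(hist: np.ndarray, segments: list[np.ndarray]) -> list[int]:
    """Hypothetical rule for claims 5/6: a segment's new start position is
    set by the fraction of the histogram's total pixels preceding it."""
    total = max(int(hist.sum()), 1)
    shifts, pixels_before = [], 0
    for seg in segments:
        target_start = round((hist.size - 1) * pixels_before / total)
        shifts.append(int(target_start) - int(seg[0]))
        pixels_before += int(hist[seg].sum())
    return shifts

def gray_value_shift_distances(hist: np.ndarray, seg: np.ndarray) -> list[int]:
    """Same proportional idea applied within a single segment."""
    seg_total = max(int(hist[seg].sum()), 1)
    shifts, pixels_before = [], 0
    for i, gray in enumerate(seg):
        target = round((seg.size - 1) * pixels_before / seg_total)
        shifts.append(int(target) - i)
        pixels_before += int(hist[gray])
    return shifts
```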
7. The method of claim 2, wherein the segmenting the edge histogram and the non-edge histogram to obtain M segments of the edge histogram and N segments of the non-edge histogram comprises:
calculating a first sparse value of the edge histogram and a second sparse value of the non-edge histogram, wherein the first sparse value characterizes the degree of deviation of the pixel counts in the edge histogram, and the second sparse value characterizes the degree of deviation of the pixel counts in the non-edge histogram;
determining p gray values whose pixel counts in the edge histogram are smaller than the first sparse value, wherein p is an integer greater than or equal to 1;
determining q gray values whose pixel counts in the non-edge histogram are smaller than the second sparse value, wherein q is an integer greater than or equal to 1;
segmenting the edge histogram using the p gray values as breakpoints to obtain the M segments; and
segmenting the non-edge histogram using the q gray values as breakpoints to obtain the N segments.
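A sketch of the breakpoint segmentation of this claim, with the mean bin count assumed as the "sparse value" since the claims give no explicit deviation formula:

```python
import numpy as np

def segment(hist: np.ndarray) -> list[int]:
    """Claim 7 sketch: gray values whose pixel count falls below the sparse
    value become breakpoints. The mean bin count is an assumed measure."""
    sparse_value = float(hist.mean())
    # start at 1 so np.split never produces an empty leading segment
    return [g for g in range(1, hist.size) if hist[g] < sparse_value]
```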
8. The method of claim 1, wherein said processing said original histogram based on said fused histogram resulting in a processed histogram comprises:
determining a cumulative distribution function of the fused histogram;
determining a cumulative distribution function of the original histogram;
computing a gray value change relation from the cumulative distribution function of the fused histogram and the cumulative distribution function of the original histogram, wherein the gray value change relation comprises an enhanced gray value corresponding to each gray value in the original histogram; and
moving each gray value in the original histogram to its corresponding enhanced gray value based on the gray value change relation to obtain the processed histogram.
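This claim reads as classical histogram specification; a compact sketch under that reading (not necessarily the patented formula):

```python
import numpy as np

def match_to_histogram(img: np.ndarray, target_hist: np.ndarray) -> np.ndarray:
    """Claim 8 sketch: map each source gray value to the target gray value
    with the nearest cumulative distribution value (the change relation)."""
    src_hist = np.bincount(img.ravel(), minlength=target_hist.size)
    src_cdf = np.cumsum(src_hist) / max(int(src_hist.sum()), 1)
    tgt_cdf = np.cumsum(target_hist) / max(int(target_hist.sum()), 1)
    # for each source level, find the target level whose CDF first reaches it
    mapping = np.searchsorted(tgt_cdf, src_cdf).clip(0, target_hist.size - 1)
    return mapping.astype(np.uint8)[img]
```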
9. The method of claim 1, wherein the processing the original image to obtain an edge image and a non-edge image comprises:
calculating a gradient value of each pixel in the original image;
determining pixels with gradient values larger than a preset gradient value as pixels in the edge image; and
determining pixels with gradient values smaller than or equal to the preset gradient value as pixels in the non-edge image.
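A sketch of the gradient split in this claim; Sobel is assumed as the gradient operator and 30 as the preset gradient value, neither of which the claim fixes:

```python
import numpy as np
import cv2  # Sobel is one possible gradient operator; the claim names none

def gradient_mask(img: np.ndarray, thresh: float = 30.0) -> np.ndarray:
    """Claim 9 sketch: True where the gradient magnitude exceeds the preset
    value (edge image pixels), False elsewhere (non-edge image pixels)."""
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    return np.hypot(gx, gy) > thresh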
10. The method of claim 1, further comprising:
before processing the original image to obtain the edge image and the non-edge image, filtering the original image with a Gaussian filter to remove at least part of the noise information in the original image.
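This pre-filtering step, sketched with OpenCV; the kernel size and sigma are illustrative assumptions, and `original` denotes the acquired image array:

```python
import cv2

# Claim 10 as a pre-filtering step; the 5x5 kernel and sigma of 1.0 are
# illustrative assumptions, not values taken from the patent.
denoised = cv2.GaussianBlur(original, (5, 5), 1.0)
```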
11. An image processing apparatus comprising:
the device comprises an acquisition module, a processing module and a processing module, wherein the acquisition module is used for acquiring an original image, and the original image comprises an original histogram;
the first processing module is used for processing the original image to obtain an edge image and a non-edge image;
the second processing module is used for respectively processing the edge image and the non-edge image to obtain an edge histogram of the edge image and a non-edge histogram of the non-edge image;
the third processing module is used for processing the edge histogram and the non-edge histogram to obtain a fused histogram; and
and the fourth processing module is used for processing the original histogram based on the fused histogram to obtain a processed histogram so as to obtain a processed image according to the processed histogram, wherein the contrast of the processed image is higher than that of the original image.
12. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any one of claims 1 to 10.
13. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 10.
CN202010056778.XA 2020-01-17 2020-01-17 Image processing method, device, electronic equipment and medium Active CN111242879B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010056778.XA CN111242879B (en) 2020-01-17 2020-01-17 Image processing method, device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN111242879A true CN111242879A (en) 2020-06-05
CN111242879B CN111242879B (en) 2023-05-16

Family

ID=70872744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010056778.XA Active CN111242879B (en) 2020-01-17 2020-01-17 Image processing method, device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN111242879B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115631122A (en) * 2022-11-07 2023-01-20 北京拙河科技有限公司 Image optimization method and device for edge image algorithm

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105160682B (en) * 2015-09-11 2018-07-03 四川华雁信息产业股份有限公司 Method for detecting image edge and device
CN108846319A (en) * 2018-05-25 2018-11-20 平安科技(深圳)有限公司 Iris image Enhancement Method, device, equipment and storage medium based on histogram

Also Published As

Publication number Publication date
CN111242879B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
Zhang et al. Underwater image enhancement via extended multi-scale Retinex
Ancuti et al. Single-scale fusion: An effective approach to merging images
Perez et al. A deep learning approach for underwater image enhancement
Guo et al. An efficient fusion-based defogging
US8774555B2 (en) Image defogging method and system
US9582726B2 (en) Systems and methods for image processing in a deep convolution network
Ulutas et al. Underwater image enhancement using contrast limited adaptive histogram equalization and layered difference representation
Vasamsetti et al. Wavelet based perspective on variational enhancement technique for underwater imagery
US9754356B2 (en) Method and system for processing an input image based on a guidance image and weights determined thereform
US20090245689A1 (en) Methods and apparatus for visual sub-band decomposition of signals
US9478015B2 (en) Exposure enhancement method and apparatus for a defogged image
US20200364829A1 (en) Electronic device and method for controlling thereof
Almutiry et al. Underwater images contrast enhancement and its challenges: a survey
US12106334B2 (en) Artificial intelligence-based system and method for grading collectible trading cards
CN111445496B (en) Underwater image recognition tracking system and method
Zhu et al. Generative adversarial network-based atmospheric scattering model for image dehazing
CN111242879A (en) Image processing method, image processing apparatus, electronic device, and medium
Zhu et al. Low-light image enhancement network with decomposition and adaptive information fusion
Pandey et al. A fast and effective vision enhancement method for single foggy image
CN116210022A (en) Image processing apparatus and method of operating the same
CN110288691B (en) Method, apparatus, electronic device and computer-readable storage medium for rendering image
US9600713B2 (en) Identification and processing of facial wrinkles in a digital image
Shi et al. Underwater image enhancement based on adaptive color correction and multi-scale fusion
CN111311610A (en) Image segmentation method and terminal equipment
US20230069072A1 (en) Image processing apparatus and operation method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant