CN105427257A - Image enhancement method and apparatus - Google Patents
- Publication number
- CN105427257A CN105427257A CN201510799661.XA CN201510799661A CN105427257A CN 105427257 A CN105427257 A CN 105427257A CN 201510799661 A CN201510799661 A CN 201510799661A CN 105427257 A CN105427257 A CN 105427257A
- Authority
- CN
- China
- Prior art keywords
- information
- image
- frequency part
- component
- enhanced
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Abstract
The invention provides an image enhancement method. The method comprises: converting a target image to a YUV color space to obtain image information of the target image; extracting first Y component information from the image information; extracting high-frequency part information and low-frequency part information of the first Y component information, and enhancing each; synthesizing the enhanced high-frequency part information and low-frequency part information to obtain enhanced information of the first Y component information; reconstructing the enhanced information of the first Y component information to obtain second Y component information; superposing the second Y component information with the UV component information in the image information of the target image to obtain enhanced information of the image information; converting the enhanced information of the image information to an RGB color space to obtain an enhanced image; and outputting the enhanced image of the target image. The invention further provides an image enhancement apparatus. The method and apparatus solve the prior-art problem that detail information becomes blurred when image brightness is enhanced.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an image enhancement method and apparatus.
Background
Nowadays, intelligent image monitoring is widely applied in fields such as road traffic, city security and electric power monitoring. When a camera acquires images in a complex environment (such as a field power monitoring site), it is easily affected by factors such as rain and snow or light changes at dawn and at night, so the quality of the images it outputs after encoding and compression may not meet users' requirements. Therefore, before an image is analyzed, certain enhancement and sharpening operations must be performed on it.
Generally, image enhancement mainly includes edge enhancement, texture enhancement, target-region enhancement and contrast enhancement. Depending on the processing space, these methods can be divided into spatial-domain enhancement and frequency-domain enhancement.
Spatial-domain methods operate directly on the pixels of an image and can be divided into point operations and neighborhood operations. Point operations, such as gray-level correction, gray-scale transformation and histogram correction, aim to make an image more uniform, expand its dynamic range and stretch its contrast. Neighborhood operations fall into two types: image smoothing and image sharpening. Smoothing is generally used to suppress image noise but tends to blur edges, while sharpening highlights the edge contours of objects to facilitate target recognition.
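As an illustrative sketch (not part of the patent), a typical point operation that expands the dynamic range of a grayscale image can be written as:

```python
import numpy as np

def stretch_contrast(img, lo=0.0, hi=255.0):
    """Point operation: linearly expand the dynamic range of a grayscale image."""
    img = img.astype(np.float64)
    mn, mx = img.min(), img.max()
    if mx == mn:                      # flat image: nothing to stretch
        return np.full_like(img, lo)
    return (img - mn) / (mx - mn) * (hi - lo) + lo

# A low-contrast image occupying [50, 200] is stretched to the full [0, 255].
out = stretch_contrast(np.array([[50, 100], [150, 200]], dtype=np.uint8))
```

This maps the darkest pixel to 0 and the brightest to 255, which is the "expanding the dynamic range" effect described above.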
The frequency-domain method enhances an image indirectly by correcting its transform-coefficient values in some transform domain. It treats the image as a two-dimensional signal, applies a two-dimensional Fourier transform, operates directly on the transform coefficients (i.e., the frequency-domain components), and then obtains the enhanced image through the inverse Fourier transform. Low-pass filtering, which passes only low-frequency signals, can remove noise from the image; high-pass filtering enhances high-frequency signals such as edges, making a blurred image clearer.
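A minimal frequency-domain filtering sketch (illustrative only; an ideal circular low-pass mask is assumed rather than any specific filter from the patent):

```python
import numpy as np

def lowpass_fft(img, cutoff):
    """Frequency-domain enhancement: transform the image, keep only frequencies
    within `cutoff` of the spectrum centre (ideal low-pass), inverse-transform."""
    F = np.fft.fftshift(np.fft.fft2(img))       # spectrum, DC at the centre
    h, w = img.shape
    y, x = np.ogrid[:h, :w]
    mask = (y - h // 2) ** 2 + (x - w // 2) ** 2 <= cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

A constant image passes through unchanged (it has only a DC component), while high-frequency noise is suppressed; a high-pass filter would use the complementary mask.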
The Retinex algorithm is a commonly used frequency-domain image enhancement method. It is a model proposed by Land et al. of how the human visual system adjusts the perceived color and brightness of objects, and it assumes color constancy: the same object keeps a constant color under different light sources. According to Retinex theory, the image of an object observed by the human eye (or a camera or other acquisition device) is determined mainly by two elements, the incident light and the reflected light. Accordingly, an image can be seen as composed of two parts, an illumination image and a reflection image. Under the color-constancy assumption, the illumination image and the reflection image can be adjusted separately to achieve image enhancement. The method performs well in contrast enhancement, noise suppression and computational efficiency. It is suitable for images with low local gray values: it can enhance detail in dark areas and largely preserve the original brightness while compressing the image contrast. However, Retinex-based enhancement is prone to halo artifacts in highlight areas, and it pulls the global brightness toward the mean, leaving the contrast of local detail insufficient.
In intelligent video analysis applications in complex environments such as field power monitoring, intelligent image monitoring places high demands on both the overall brightness of an image and the local detail contrast of individual regions. How to provide an image processing method that improves the overall brightness of an image while maintaining or even improving the contrast of image details is therefore an urgent technical problem for those skilled in the art.
Disclosure of Invention
In view of the above, the present invention provides an image enhancement method and apparatus, by which not only the overall brightness of an image can be improved, but also the contrast of image details can be maintained or even improved.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides an image enhancement method, where the method includes: converting a target image into a YUV color space to obtain image information of the target image; extracting first Y component information from the image information of the target image; extracting high-frequency part information and low-frequency part information of the first Y component information; respectively enhancing the high-frequency part information and the low-frequency part information of the first Y component information; synthesizing the high-frequency part information and the low-frequency part information of the enhanced first Y component information to obtain enhanced information of the first Y component information; reconstructing the synthesized enhanced information of the first Y component information to obtain second Y component information; superposing the second Y component information and UV component information in the image information of the target image to obtain enhanced information of the image information of the target image; converting the enhancement information of the image information of the target image into an RGB color space to obtain an enhanced image of the target image; and outputting the enhanced image of the target image.
In a second aspect, an embodiment of the present invention further provides an image enhancement apparatus, where the image enhancement apparatus includes: the first color space conversion module is used for converting a target image into a YUV color space to obtain the image information of the target image; the first extraction module is used for extracting first Y component information from the image information of the target image; the second extraction module is used for extracting high-frequency part information and low-frequency part information of the first Y component information; the enhancement module is used for respectively enhancing the high-frequency part information and the low-frequency part information of the first Y component information; the synthesis module is used for synthesizing the high-frequency part information and the low-frequency information of the enhanced first Y component information to obtain enhanced information of the first Y component information; the reconstruction module is used for reconstructing the synthesized enhanced information of the first Y component information to obtain second Y component information; the superposition module is used for superposing the second Y component information and UV component information in the image information of the target image to obtain enhanced information of the image information of the target image; the second color space conversion module is used for converting the enhancement information of the image information of the target image into an RGB color space to obtain an enhanced image of the target image; and the output module is used for outputting the enhanced image of the target image.
By the image enhancement method, the brightness component of the image is decomposed into high-frequency part information and low-frequency part information, the high-frequency part information and the low-frequency part information are respectively enhanced and then synthesized, and finally, color information is superposed to obtain a final enhanced image. The effect of improving the overall brightness of the image and maintaining or even improving the contrast of the image details is realized.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art can derive other related drawings from them without creative effort.
Fig. 1 is a flow chart illustrating an image enhancement method according to a first embodiment of the present invention;
FIG. 2 is a diagram illustrating wavelet decomposition of a two-dimensional image to generate a subband image according to a first embodiment of the present invention;
FIG. 3 shows a schematic diagram of the Retinex algorithm in a first embodiment of the present invention;
fig. 4 is a block diagram of an image enhancement apparatus according to a second embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
First embodiment
Fig. 1 is a flowchart of an image enhancement method according to a first embodiment of the present invention. As shown in fig. 1, the flowchart of the image enhancement method in the present embodiment may include the following steps.
Step S110, converting a target image into YUV color space to obtain image information of the target image.
The target image can be an image acquired in a complex environment (such as a field power monitoring environment) that requires image enhancement processing. The image obtained in intelligent monitoring is in the RGB color space, in which color and luminance information are mixed together, whereas in the YUV space the Y component carries luminance information and the UV components carry color information. This embodiment preferably performs image enhancement on the Y component. The formula for converting from the RGB color space to the YUV color space (the standard BT.601 form) is:

Y = 0.299R + 0.587G + 0.114B
U = −0.147R − 0.289G + 0.436B
V = 0.615R − 0.515G − 0.100B
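A minimal sketch of this conversion (the standard BT.601 coefficients are assumed here, since the patent text does not reproduce the matrix):

```python
import numpy as np

# Assumed BT.601 RGB -> YUV matrix; Y carries luminance, U and V carry colour.
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB image with values in [0, 1] to YUV."""
    return rgb @ RGB2YUV.T
```

For a white pixel (1, 1, 1), Y is 1 while U and V vanish, confirming that luminance and color are separated by the transform.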
in step S120, first Y component information is extracted from the image information of the target image.
First Y component information is extracted from the image information of the converted target image. In this embodiment, image enhancement mainly acts on the luminance Y component; the subsequent operations process only the Y component information and leave the color UV components of the image unchanged.
Step S130, extracting high frequency part information and low frequency part information of the first Y component information.
In this step, the first Y component information is preferably converted from the spatial domain to the frequency domain by a discrete wavelet transform, from which its high-frequency part information and low-frequency part information are extracted. The high-frequency part information represents image content where the image intensity changes sharply, and the low-frequency part information represents image content where the image intensity changes gently.
The discrete wavelet transform of an arbitrary function f(t) can be represented in the form:

W_f(j, k) = ⟨f, ψ_{j,k}⟩ = 2^{−j/2} ∫ f(t) ψ*(2^{−j}t − k) dt

where ψ*(x) denotes the complex conjugate of ψ(x).
The first Y component information is converted from the spatial domain to the frequency domain by the discrete wavelet transform so that enhancement can be performed in the frequency domain. Wavelet decomposition of a two-dimensional image typically produces one low-frequency subband image and three high-frequency subband images, as shown in Fig. 2, where a1 denotes the low-frequency subband and h1, v1 and d1 denote the three high-frequency subbands. The low-frequency subband a1 (upper left) is a reduced-size approximation very similar to the original image; it contains most of the original image's energy and strongly influences the quality of the restored image. The wavelet coefficients of the high-frequency subbands (h1: horizontal detail, v1: vertical detail, d1: diagonal detail) are mostly small, but they contain much of the original image's detail information.
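The one-level decomposition described above can be sketched with a plain Haar wavelet (chosen here for brevity; the patent does not fix a particular wavelet basis):

```python
import numpy as np

def haar_dwt2(img):
    """One level of 2-D Haar decomposition: returns the low-frequency subband
    a1 and the high-frequency subbands h1, v1, d1 (cf. Fig. 2)."""
    a = img[0::2, 0::2].astype(np.float64)   # the four pixels of each 2x2 block
    b = img[0::2, 1::2].astype(np.float64)
    c = img[1::2, 0::2].astype(np.float64)
    d = img[1::2, 1::2].astype(np.float64)
    a1 = (a + b + c + d) / 2                 # approximation (low frequency)
    h1 = (a + b - c - d) / 2                 # horizontal detail
    v1 = (a - b + c - d) / 2                 # vertical detail
    d1 = (a - b - c + d) / 2                 # diagonal detail
    return a1, h1, v1, d1
```

For a constant image all three detail subbands vanish, reflecting that the high-frequency coefficients respond only to intensity change.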
Step S140 enhances the high frequency part information and the low frequency part information of the first Y component information, respectively.
In the specific embodiment provided by the present invention, preferably, the Retinex algorithm is adopted to enhance the high frequency part information and the low frequency part information respectively. It should be understood that, in other embodiments, the method for enhancing the high frequency part information and the low frequency part information of the first Y component information respectively may also be other known image enhancement algorithms.
The Retinex algorithm includes SSR (single scale Retinex) and MSR (multiscale Retinex).
a) SSR (Single-Scale Retinex)
A given image S(x, y) may be decomposed into two distinct parts, the reflection image R(x, y) and the luminance image L(x, y) (which may also be referred to as the incident image), as shown in Fig. 3.
That is, the final image observed by the human eye or a camera is described by the following formula:
S(x,y)=R(x,y)·L(x,y)。
Since the final objective of Retinex estimation is to obtain R(x, y), the general calculation method is:
r(x,y)=logS(x,y)-log[F(x,y)*S(x,y)]
Here, r(x, y) is the output image, * is the convolution operator, and F(x, y) is the center-surround function, typically taking the form of a Gaussian low-pass filter:

F(x, y) = λ e^{−(x² + y²)/c²}
wherein c represents the Gaussian surround scale and λ is a scale factor whose value must satisfy:
∫∫F(x,y)dxdy=1
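A minimal single-scale Retinex sketch following the formulas above (SciPy's normalised Gaussian filter plays the role of F(x, y), so the λ constraint holds automatically; the surround scale 80 is an assumed, typical value, not one fixed by the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ssr(img, sigma=80.0, eps=1e-6):
    """Single-scale Retinex: r = log S - log(F * S), where F is a Gaussian
    surround whose coefficients sum to 1."""
    img = img.astype(np.float64) + eps       # avoid log(0)
    surround = gaussian_filter(img, sigma)   # F(x, y) * S(x, y)
    return np.log(img) - np.log(surround)
```

On a uniformly lit region the surround equals the signal, so r is zero everywhere: SSR responds only to local deviations from the estimated illumination.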
b) MSR (Multi-Scale Retinex)
The method is developed on the basis of SSR, and its calculation formula is:

r(x, y) = Σ_{k=1}^{K} w_k {log S(x, y) − log[F_k(x, y) * S(x, y)]}

where w_k is the weight of the k-th scale and F_k(x, y) is the surround function at that scale. K is typically taken to be 3, i.e., small, medium and large surround scales (covering high-, medium- and low-frequency content) are enhanced and weighted separately to form the final enhancement effect.
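An MSR sketch built the same way (the three surround scales and the equal weights are assumed defaults commonly used in the literature, not values from the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def msr(img, sigmas=(15.0, 80.0, 250.0), eps=1e-6):
    """Multi-scale Retinex: an equally weighted sum of K = 3 single-scale
    Retinex outputs computed at small, medium and large surround scales."""
    img = img.astype(np.float64) + eps
    out = np.zeros_like(img)
    for sigma in sigmas:                     # one SSR term per scale
        out += (np.log(img) - np.log(gaussian_filter(img, sigma))) / len(sigmas)
    return out
```

As with SSR, a perfectly uniform image yields a zero response at every scale.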
In this embodiment, the high-frequency part information and the low-frequency part information of the first Y component information are preferably enhanced respectively by single-scale Retinex.
Step S150, synthesizing the enhanced high-frequency part information and low-frequency part information of the first Y component information to obtain enhanced information of the first Y component information.
The enhancement information of the first Y component information is enhancement information in a frequency domain.
Step S160, reconstructing the synthesized enhancement information of the first Y component information to obtain second Y component information.
In this embodiment, the enhancement information in the frequency domain is preferably reconstructed by an inverse wavelet transform to obtain the second Y component information, i.e., the luminance Y component in the spatial domain.
The inverse wavelet transform, also known as wavelet reconstruction, can be expressed as the following formula:

f(t) = Σ_{m,n} ⟨f, ψ_{m,n}⟩ ψ̂_{m,n}(t)

Here, ψ̂_{m,n} is called the dual of ψ_{m,n}, and each ψ_{m,n} can be constructed from a basic wavelet ψ by translation and dilation:

ψ_{m,n}(t) = a₀^{−m/2} ψ(a₀^{−m}t − nb₀)

In general, the inverse of a discrete wavelet transform requires a sequence of discrete wavelets {ψ_{j,k}(t)}, j, k ∈ Z, that constitutes a wavelet frame. Let the lower and upper bounds of the wavelet frame be A and B respectively. When A = B (a tight frame), it follows from the concept of the wavelet frame that the inverse of the discrete wavelet transform is:

f(t) = (1/A) Σ_{j,k} ⟨f, ψ_{j,k}⟩ ψ_{j,k}(t)

When A and B are not equal but are relatively close, the inverse of the discrete wavelet transform may be approximated as:

f(t) ≈ (2/(A + B)) Σ_{j,k} ⟨f, ψ_{j,k}⟩ ψ_{j,k}(t)
step S170, superimposing the second Y component information and the UV component information in the image information of the target image to obtain the enhancement information of the image information of the target image.
In this step, the second Y component information and the unprocessed UV component information in the image information of the target image are superimposed to obtain enhancement information of the image information of the target image in the YUV color space.
Step S180, converting the enhancement information of the image information of the target image into an RGB color space to obtain an enhanced image of the target image.
And step S190, outputting the enhanced image of the target image.
The enhanced target image is output so that the client can obtain sufficient image detail information from the brightness-enhanced image for analysis, acquire accurate monitoring information, and thereby have a reliable reference for decision-making.
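Steps S110 to S190 can be condensed into a hypothetical end-to-end sketch (the BT.601 matrix, the Haar wavelet, the Retinex surround scale and the final normalisation are all assumptions made for illustration, not values fixed by the patent):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

RGB2YUV = np.array([[ 0.299,  0.587,  0.114],      # assumed BT.601 matrix
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def haar_dwt2(y):
    a, b = y[0::2, 0::2], y[0::2, 1::2]
    c, d = y[1::2, 0::2], y[1::2, 1::2]
    return ((a + b + c + d) / 2, (a + b - c - d) / 2,
            (a - b + c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(a1, h1, v1, d1):
    y = np.empty((2 * a1.shape[0], 2 * a1.shape[1]))
    y[0::2, 0::2] = (a1 + h1 + v1 + d1) / 2
    y[0::2, 1::2] = (a1 + h1 - v1 - d1) / 2
    y[1::2, 0::2] = (a1 - h1 + v1 - d1) / 2
    y[1::2, 1::2] = (a1 - h1 - v1 + d1) / 2
    return y

def ssr(s, sigma=2.0, eps=1e-6):
    s = s - s.min() + 1.0                          # shift positive for the log
    return np.log(s) - np.log(gaussian_filter(s, sigma) + eps)

def enhance(rgb):
    """rgb: H x W x 3 float image in [0, 1], H and W even."""
    yuv = rgb @ RGB2YUV.T                          # S110: to YUV
    y, uv = yuv[..., 0], yuv[..., 1:]              # S120: split Y and UV
    subbands = haar_dwt2(y)                        # S130: frequency split
    enhanced = [ssr(s) for s in subbands]          # S140/S150: enhance
    y2 = haar_idwt2(*enhanced)                     # S160: reconstruct
    y2 = (y2 - y2.min()) / (np.ptp(y2) + 1e-12)    # rescale luminance to [0, 1]
    out = np.concatenate([y2[..., None], uv], -1)  # S170: superimpose UV
    return np.clip(out @ YUV2RGB.T, 0.0, 1.0)      # S180: back to RGB
```

The UV components pass through untouched, matching the embodiment's design of enhancing only the luminance channel.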
By the image enhancement method, the brightness component of the image is decomposed into high-frequency part information and low-frequency part information, the high-frequency part information and the low-frequency part information are respectively enhanced and then synthesized, and finally, color information is superposed to obtain a final enhanced image. The effect of improving the overall brightness of the image and maintaining or even improving the contrast of the image details is realized.
Second embodiment
Fig. 4 is a block diagram of an image enhancement apparatus according to a second embodiment of the present invention, and referring to fig. 4, the image enhancement apparatus 200 includes
The first color space conversion module 210 is configured to convert a target image into a YUV color space to obtain image information of the target image.
The first extracting module 220 is configured to extract the first Y component information from the image information of the target image.
A second extracting module 230, configured to extract high frequency part information and low frequency part information of the first Y component information.
In this embodiment, the second extracting module 230 converts the first Y component information from the spatial domain to the frequency domain by using wavelet transform, and obtains the high frequency part information and the low frequency part information of the first Y component information.
And an enhancing module 240, configured to enhance the high frequency part information and the low frequency part information of the first Y component information respectively.
In this embodiment, preferably, a Retinex algorithm is used to enhance the high frequency part information and the low frequency part information respectively.
A synthesizing module 250, configured to synthesize the high-frequency part information and the low-frequency information of the enhanced first Y component information to obtain enhanced information of the first Y component information;
and a reconstructing module 260, configured to reconstruct the synthesized enhancement information of the first Y component information to obtain second Y component information.
In this embodiment, it is preferable that the enhancement information of the first Y component information in the frequency domain is reconstructed by using an inverse wavelet transform to obtain second Y component information containing a luminance Y component in the spatial domain.
And a superimposing module 270, configured to superimpose the second Y component information and the UV component information in the image information of the target image to obtain enhancement information of the image information of the target image.
A second color space conversion module 280, configured to convert the enhancement information of the image information of the target image into an RGB color space to obtain an enhanced image of the target image.
An output module 290, configured to output an enhanced image of the target image.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
In addition, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
Claims (10)
1. A method of image enhancement, the method comprising:
converting a target image into a YUV color space to obtain image information of the target image;
extracting first Y component information from the image information of the target image;
extracting high-frequency part information and low-frequency part information of the first Y component information;
respectively enhancing the high-frequency part information and the low-frequency part information of the first Y component information;
synthesizing the high-frequency part information and the low-frequency part information of the enhanced first Y component information to obtain enhanced information of the first Y component information;
reconstructing the synthesized enhanced information of the first Y component information to obtain second Y component information;
superposing the second Y component information and UV component information in the image information of the target image to obtain enhanced information of the image information of the target image;
converting the enhancement information of the image information of the target image into an RGB color space to obtain an enhanced image of the target image;
and outputting the enhanced image of the target image.
2. The image enhancement method of claim 1, wherein: the high-frequency part information refers to image information at positions where the image intensity changes sharply, and the low-frequency part information refers to image information at positions where the image intensity changes gently.
3. The image enhancement method according to claim 1, wherein the step of extracting the high frequency part information and the low frequency part information of the first Y component information includes:
and extracting high-frequency part information and low-frequency part information of the first Y component information through wavelet transformation.
4. The image enhancement method according to claim 3, wherein the step of enhancing the high frequency part information and the low frequency part information of the first Y component information respectively comprises:
and respectively enhancing the high-frequency part information and the low-frequency part information of the first Y component information by a Retinex algorithm.
5. The image enhancement method according to claim 4, wherein the step of reconstructing the synthesized enhancement information of the first Y component information to obtain second Y component information comprises:
and reconstructing the synthesized enhanced information of the first Y component information through wavelet inverse transformation to obtain the second Y component information.
6. An image enhancement apparatus, characterized in that the image enhancement apparatus comprises:
the first color space conversion module is used for converting a target image into a YUV color space to obtain the image information of the target image;
the first extraction module is used for extracting first Y component information from the image information of the target image;
the second extraction module is used for extracting high-frequency part information and low-frequency part information of the first Y component information;
the enhancement module is used for respectively enhancing the high-frequency part information and the low-frequency part information of the first Y component information;
the synthesis module is used for synthesizing the high-frequency part information and the low-frequency part information of the enhanced first Y component information to obtain enhanced information of the first Y component information;
the reconstruction module is used for reconstructing the synthesized enhanced information of the first Y component information to obtain second Y component information;
the superposition module is used for superposing the second Y component information and UV component information in the image information of the target image to obtain enhanced information of the image information of the target image;
the second color space conversion module is used for converting the enhanced information of the image information of the target image into an RGB color space to obtain an enhanced image of the target image; and
the output module is used for outputting the enhanced image of the target image.
7. The image enhancement apparatus according to claim 6, characterized in that:
the high-frequency part information refers to image information at positions where the image intensity changes sharply, and the low-frequency part information refers to image information at positions where the image intensity changes gently.
8. The image enhancement apparatus according to claim 7, characterized in that: the second extraction module extracts the high-frequency part information and the low-frequency part information of the first Y component information by wavelet transform.
9. The image enhancement apparatus according to claim 7, characterized in that: the enhancement module respectively enhances the high-frequency part information and the low-frequency part information of the first Y component information by a Retinex algorithm.
10. The image enhancement apparatus according to claim 7, characterized in that: the reconstruction module reconstructs the synthesized enhanced information of the first Y component information by inverse wavelet transform to obtain the second Y component information.
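The claimed pipeline (RGB→YUV, wavelet split of the Y component, Retinex enhancement, inverse wavelet, YUV→RGB) can be sketched in NumPy. This is an illustrative approximation, not the patented implementation: the claims name only "wavelet transform" and "a Retinex algorithm" without parameters, so the Haar wavelet, the BT.601 conversion matrices, the box-filter surround for single-scale Retinex, the 1.5 detail gain, and applying Retinex only to the low-frequency band (log-based Retinex is ill-defined on signed detail coefficients, so a plain gain stands in for the high-frequency enhancement of claim 4) are all assumptions.

```python
import numpy as np

# BT.601 conversion matrices (assumed; the claims do not fix a matrix).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.array([[1.0,  0.0,    1.140],
                    [1.0, -0.395, -0.581],
                    [1.0,  2.032,  0.0]])

def haar_decompose(y):
    """One-level 2D Haar transform: LL is the low-frequency part,
    (LH, HL, HH) are the high-frequency parts. Expects even dimensions."""
    p00, p01 = y[0::2, 0::2], y[0::2, 1::2]
    p10, p11 = y[1::2, 0::2], y[1::2, 1::2]
    ll = (p00 + p01 + p10 + p11) / 4.0
    lh = (p00 - p01 + p10 - p11) / 4.0
    hl = (p00 + p01 - p10 - p11) / 4.0
    hh = (p00 - p01 - p10 + p11) / 4.0
    return ll, (lh, hl, hh)

def haar_reconstruct(ll, details):
    """Exact inverse of haar_decompose (the claimed inverse wavelet transform)."""
    lh, hl, hh = details
    out = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def box_blur(img, k=15):
    """Crude illumination estimate via an integral image; stands in for the
    Gaussian surround used by classic single-scale Retinex."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    s = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    s = np.pad(s, ((1, 0), (1, 0)))  # zero row/column so window sums index cleanly
    return (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)

def retinex(band, eps=1e-6):
    """Single-scale Retinex: log(signal) - log(illumination estimate),
    rescaled back to the band's original range."""
    pos = band - band.min() + eps          # shift positive so the log is defined
    r = np.log(pos) - np.log(box_blur(pos) + eps)
    r = (r - r.min()) / (r.max() - r.min() + eps)
    return r * (band.max() - band.min()) + band.min()

def enhance(rgb):
    """Full pipeline of claims 1 and 3-5; all parameters are illustrative."""
    yuv = rgb.astype(np.float64) / 255.0 @ RGB2YUV.T
    low, high = haar_decompose(yuv[..., 0])          # claim 3: wavelet split
    low_e = retinex(low)                             # claim 4: Retinex (low band)
    high_e = tuple(1.5 * h for h in high)            # assumed detail gain
    yuv[..., 0] = haar_reconstruct(low_e, high_e)    # claim 5: inverse wavelet
    out = np.clip(yuv @ YUV2RGB.T, 0.0, 1.0)         # recombine with UV, back to RGB
    return (out * 255.0).astype(np.uint8)
```

The one-level Haar split keeps the sketch dependency-free; a library wavelet (e.g. PyWavelets `dwt2`/`idwt2`) or a multiscale Retinex could be substituted without changing the structure. Input height and width must be even for the Haar step.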
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510799661.XA CN105427257A (en) | 2015-11-18 | 2015-11-18 | Image enhancement method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105427257A true CN105427257A (en) | 2016-03-23 |
Family
ID=55505438
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105427257A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090074317A1 (en) * | 2007-09-19 | 2009-03-19 | Samsung Electronics Co., Ltd. | System and method for reducing halo effect in image enhancement |
CN103440623A (en) * | 2013-08-02 | 2013-12-11 | 中北大学 | Method for improving image definition in foggy days based on imaging model |
Non-Patent Citations (1)
Title |
---|
ZHANG Hongying et al.: "Retinex Night Image Enhancement Algorithm Based on the YUV Color Space", Science Technology and Engineering *
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107025631A (en) * | 2017-03-13 | 2017-08-08 | 深圳市嘉和顺信息科技有限公司 | A kind of image processing method, device and the equipment of golf course figure |
CN107680064A (en) * | 2017-10-31 | 2018-02-09 | 长沙准光里电子科技有限公司 | Computer-readable recording medium |
CN108680137A (en) * | 2018-04-24 | 2018-10-19 | 天津职业技术师范大学 | Earth subsidence detection method and detection device based on unmanned plane and Ground Penetrating Radar |
CN109389560A (en) * | 2018-09-27 | 2019-02-26 | 深圳开阳电子股份有限公司 | A kind of adaptive weighted filter image denoising method, device and image processing equipment |
CN109389560B (en) * | 2018-09-27 | 2022-07-01 | 深圳开阳电子股份有限公司 | Adaptive weighted filtering image noise reduction method and device and image processing equipment |
CN110599406A (en) * | 2019-03-18 | 2019-12-20 | 上海立可芯半导体科技有限公司 | Image enhancement method and device |
US11915392B2 (en) | 2019-03-18 | 2024-02-27 | Shanghai Linkchip Semiconductor Technology Co., Ltd. | Image enhancement method and apparatus |
CN110599406B (en) * | 2019-03-18 | 2022-05-03 | 上海立可芯半导体科技有限公司 | Image enhancement method and device |
CN110278425A (en) * | 2019-07-04 | 2019-09-24 | 潍坊学院 | Image enchancing method, device, equipment and storage medium |
CN110365914A (en) * | 2019-07-24 | 2019-10-22 | 中国人民解放军国防科技大学 | Image dynamic range widening method and system |
CN111225278A (en) * | 2020-03-02 | 2020-06-02 | 新疆大学 | Method and device for enhancing video under low illumination |
CN113747046A (en) * | 2020-05-29 | 2021-12-03 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN113359734B (en) * | 2021-06-15 | 2022-02-22 | 苏州工业园区报关有限公司 | Logistics auxiliary robot based on AI |
CN113359734A (en) * | 2021-06-15 | 2021-09-07 | 苏州工业园区报关有限公司 | Logistics auxiliary robot based on AI |
CN113255571B (en) * | 2021-06-16 | 2021-11-30 | 中国科学院自动化研究所 | anti-JPEG compression fake image detection method |
CN113255571A (en) * | 2021-06-16 | 2021-08-13 | 中国科学院自动化研究所 | anti-JPEG compression fake image detection method |
WO2023098251A1 (en) * | 2021-12-03 | 2023-06-08 | 腾讯音乐娱乐科技(深圳)有限公司 | Image processing method, device, and readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20160323 |